An anti-harassment Twitter bot that questions the offender about their decision to use inflammatory language.
I was inspired by a teacher friend of mine who uses a Socratic technique when his students call each other names: “Why did you call her that? What does that mean?” I wanted to use a bot to initiate conversation (with the bot saying something like, “Why did you feel like using this word?”) in order to make people reflect on the language they use and how it affects others.
One of the bots that inspired this project was Kevin Munger’s anti-harassment bot geared toward racists. The constraints placed on the bot were smart: it only looked for the n-word, only in @ replies, and it checked the user’s timeline for prior offenses. Other important measures were taken that wouldn’t be feasible for my project: manually inspecting the profiles, and manually checking that the two users in the interaction weren’t friends. Furthermore, in order to run his experiment, Munger had to hide the fact that he was using bots. I decided to be transparent about my bot’s bot status.
I also read studies that had already done textual analysis of slurs used on Twitter, like this one. Reading studies like this was important, because I realized that there was no magical Markov chain that was going to help me identify harassment on Twitter without false negatives or positives. Even human experts can’t agree on what constitutes harassment. Here’s a quote from the study I linked to: “In manually coding, analysts had considerable difficulty achieving inter-annotator agreement (the extent to which both annotators agreed on the meaning of the same tweet).”
There are some glaring problems with this bot. If I tweet, “I’m so fucking mad, someone just called me a cunt,” then I would get flagged for the combination of the words “fucking” and “cunt.” That’s what you get with contextless word counters: they can’t tell the difference between a complaint about being harassed and actual harassment (like “You’re a fucking cunt”). Furthermore, I didn’t want to call out users on a tweet-by-tweet basis; I wanted to track their behavior over time. A person could have a hundred reasons for using the word “bitch” in a tweet. But it’s definitely fair to call them out if they’ve tweeted the word “bitch” a hundred different times, no matter the reason.
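The tracking-over-time idea can be sketched as a simple per-user tally. This isn’t the bot’s actual code; the function and variable names here are illustrative:

```javascript
// Hypothetical sketch: count how many times each user has tweeted a
// target word, instead of judging any single tweet in isolation.
const TARGET_WORDS = ['bitch', 'cunt'];

// counts maps a screen name to { word: timesSeen }
function recordTweet(counts, screenName, text) {
  const lower = text.toLowerCase();
  for (const word of TARGET_WORDS) {
    if (lower.includes(word)) {
      if (!counts[screenName]) counts[screenName] = {};
      counts[screenName][word] = (counts[screenName][word] || 0) + 1;
    }
  }
  return counts;
}

// One complaint tweet doesn't say much, but the totals add up.
const counts = {};
recordTweet(counts, 'someuser', "Someone just called me a cunt");
recordTweet(counts, 'someuser', "You're a cunt");
console.log(counts.someuser.cunt); // 2
```

Note that this still counts the “complaint” tweet, which is exactly the contextless-counter problem described above; the tally only makes the judgment fairer in aggregate.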
The Ideal Bot:
I’m calling my bot “GrammaBot” because I want it to be satirical, rather than preachy. GrammaBot will track the words “bitch” and “cunt” only, because I want it to be relatively limited in scope. It will also only look at users in the United States because people in the UK seem to have a very different relationship with the word “cunt”. If a single user says either of these words more than 4 times, the bot will mention them in a tweet and say some variation of, “You’ve said the word ‘cunt’ 5 times since [date]. GrammaBot is wondering why you keep saying that word!”
Here is the source code for my bot:
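The embedded source isn’t preserved in this copy. The wiring for a bot like this, using the `twit` npm package, might look roughly like the following; the credentials, threshold, and function names are placeholders, not the original code:

```javascript
// Rough sketch of the bot's plumbing. Keys and names are placeholders.
const THRESHOLD = 4;
const counts = {}; // screen name -> number of flagged tweets seen

// Pure tweet handler: tally the word and fire a reply past the threshold.
function handleTweet(tweet, postReply) {
  const name = tweet.user.screen_name;
  if (!tweet.text.toLowerCase().includes('cunt')) return;
  counts[name] = (counts[name] || 0) + 1;
  if (counts[name] > THRESHOLD) {
    postReply(`.@${name} You've said the word 'cunt' ${counts[name]} times. ` +
              `GrammaBot is wondering why you keep saying that word!`);
  }
}

// Network setup; not run here, and requires real API credentials.
function startBot() {
  const Twit = require('twit'); // npm install twit
  const T = new Twit({
    consumer_key: '...', consumer_secret: '...',
    access_token: '...', access_token_secret: '...',
  });
  const stream = T.stream('statuses/filter', { track: 'cunt' });
  stream.on('tweet', (tweet) =>
    handleTweet(tweet, (status) => T.post('statuses/update', { status })));
}
```

Keeping the counting logic in a plain function like `handleTweet` is also what makes console-only testing (below) possible without posting anything.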
So far I’ve only tested my bot in the console (so I don’t get immediately blocked by Twitter). This is a video of what it looks like when I run my code. Because I’m using a personal account, I’m tweeting the term “blahblahblahblah” instead of “cunt” to test that everything works:
Right now I’m only tracking “cunt” instead of both “bitch” and “cunt,” which I actually think is good because it limits the amount of data coming in. Unfortunately, I haven’t been able to figure out how to simultaneously filter by keyword and location (the Twitter streaming API ORs the track and locations parameters together rather than ANDing them, so you can’t require both at once). The good news is that by not limiting by location, I’m not missing the offenders who don’t have locations associated with their accounts. I’m also limiting the search to non-retweets: I only want the bot to identify OC (Original Content, or, for the more puerile among us, Original “Cuntent”). You can follow @Gramma_Bot to see the offenders’ messages and Gramma’s responses!
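The non-retweet filter can be done on the tweet object itself. A sketch, assuming the v1.1 streaming payload shape (function name is illustrative):

```javascript
// Keep only original content: native retweets carry a `retweeted_status`
// field in the v1.1 payload, and old-style manual retweets begin "RT @".
function isOriginalContent(tweet) {
  if (tweet.retweeted_status) return false;
  if (/^RT @/i.test(tweet.text)) return false;
  return true;
}

console.log(isOriginalContent({ text: 'RT @someone: you cunt' })); // false
console.log(isOriginalContent({ text: 'original cuntent' }));      // true
```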
UPDATE: On the same day Gramma Bot launched (March 25, 2017), the application’s write privileges were revoked.
Before Gramma got shut down, a few funny things happened:
1. The bot flagged itself as an offender and kept calling itself out for using the word “cunt.” This was a silly thing to forget to account for in the code, but it does play into the narrative of “LOOK AT WHAT GRAMMA BOT HAS BECOME.”
2. An offender was amused/baited by the bot, responding with, “I work on those numbers, you cunt”
3. A crazy lady (whose account has just been suspended), who has devoted her Twitter career to harassing the doctors and nurses who supposedly botched her breast augmentation surgery, assumed that it was the nurses who created Gramma Bot to “bully” her.
4. Gramma Bot retweeted quite a few butt photos from “titty.me,” so I manually got rid of those.
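The self-flagging bug from the first item above comes down to a missing guard. One way to fix it is a screen-name check before counting anything (names here are illustrative):

```javascript
// Skip tweets from the bot's own account so Gramma never flags herself.
const BOT_SCREEN_NAME = 'Gramma_Bot';

function isSelf(tweet) {
  return tweet.user.screen_name.toLowerCase() === BOT_SCREEN_NAME.toLowerCase();
}

console.log(isSelf({ user: { screen_name: 'gramma_bot' } })); // true
console.log(isSelf({ user: { screen_name: 'someuser' } }));   // false
```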
I’m now deciding whether to appeal Twitter’s restriction and then neuter my bot so it follows Twitter’s automation rules, to create Gramma_Bot2 and inevitably get that account suspended too, or to simply let things be.
RIP Gramma Bot.