Gramma Bot: Twitter Bot Final

Featured Posts, Twitter Bot Workshop

Proposal:
An anti-harassment Twitter bot that questions offenders about their decision to use inflammatory language.

Description:
I was inspired by a teacher friend of mine who uses a Socratic technique when his students call each other names: “Why did you call her that? What does that mean?” I wanted to use a bot to initiate conversation (with the bot saying something like, “Why did you feel like using this word?”) in order to make people reflect on the language they use and how it affects others.

Research:
One of the bots that inspired this project was Kevin Munger’s anti-harassment bot geared towards racist harassers. The constraints placed on the bot were smart: it only looked for the n-word, only when combined with an @ reply, and it checked the user’s timeline for prior offenses. Munger also took measures that wouldn’t be feasible for my project–manually inspecting profiles, and manually checking that the two users in an interaction weren’t friends. Furthermore, in order to run his experiment, Munger had to hide the fact that he was using bots. I decided I was going to be transparent about my bot’s bot status.

I also read studies that had already done textual analysis of slurs used on Twitter, like this one. Reading studies like this was important, because I realized that there was no magical Markov chain that was going to help me identify harassment on Twitter without false negatives/positives. Even human experts can’t agree on what constitutes harassment. Here’s a quote from the study I linked to: “In manually coding, analysts had considerable difficulty achieving inter-annotator agreement (the extent to which both annotators agreed on the meaning of the same tweet).”

Finally, I wanted to see what other sorts of anti-harassment bots are out there. Even though there are quite a few, I had to restrict myself to bots whose code is written in JavaScript. The source code that really jump-started my project came from a simple bot created for a hackathon. It takes a whole slew of offensive terms and gives them different weights (for example, both “cunt” and the n-word are given weights of 3, while “fat” and “shit” are given weights of 1). If a user tweets something with a total weight greater than 3, the bot tweets out the user name and says “this comment has been marked as offensive and has been recorded.”
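To make the weighting scheme concrete, here’s a minimal sketch of that kind of scorer. The word list and threshold here are illustrative stand-ins, not the hackathon bot’s actual table:

```javascript
// Illustrative weights, not the hackathon bot's actual list.
var weights = {
  cunt: 3,
  fat: 1,
  shit: 1,
  fucking: 1
};

function offensivenessScore(text) {
  var score = 0;
  text.toLowerCase().split(/\s+/).forEach(function (word) {
    if (weights.hasOwnProperty(word)) {
      score += weights[word];
    }
  });
  return score;
}

// Anything scoring above the threshold gets flagged -- including,
// as discussed below, tweets that are complaints about harassment.
console.log(offensivenessScore("I'm so fucking mad, someone just called me a cunt") > 3); // true
```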

There are some glaring problems with this bot. If I tweet out, “I’m so fucking mad, someone just called me a cunt,” then I would get flagged for using a combination of the words “fucking” and “cunt”. That’s what you get with contextless word counters: they can’t tell the difference between a complaint about getting harassed and actual harassment (like, “You’re a fucking cunt.”). Furthermore, I didn’t want to call out users on a tweet-by-tweet basis. I wanted to track their behavior over time. A person could have a hundred reasons for using the word “bitch” in a tweet. But it’s definitely fair to call them out if they’ve tweeted the word “bitch” a hundred different times, no matter the reason.

The Ideal Bot:
I’m calling my bot “GrammaBot” because I want it to be satirical, rather than preachy. GrammaBot will track the words “bitch” and “cunt” only, because I want it to be relatively limited in scope. It will also only look at users in the United States because people in the UK seem to have a very different relationship with the word “cunt”. If a single user says either of these words more than 4 times, the bot will mention them in a tweet and say some variation of, “You’ve said the word ‘cunt’ 5 times since [date]. GrammaBot is wondering why you keep saying that word!”
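The tracking logic I have in mind looks roughly like this, assuming the twit npm package, tracking just “cunt” for simplicity, and keeping counts in memory (a real bot would persist them somewhere):

```javascript
var Twit = require('twit');
var T = new Twit({
  consumer_key: '...',          // placeholder credentials
  consumer_secret: '...',
  access_token: '...',
  access_token_secret: '...'
});

// screen_name -> { count, since }
var offenders = {};

function recordOffense(tweet) {
  var user = tweet.user.screen_name;
  if (!offenders[user]) {
    offenders[user] = { count: 0, since: tweet.created_at };
  }
  offenders[user].count++;

  if (offenders[user].count > 4) {
    var status = '@' + user + " You've said the word 'cunt' " +
      offenders[user].count + ' times since ' + offenders[user].since +
      '. GrammaBot is wondering why you keep saying that word!';
    T.post('statuses/update', { status: status }, function (err) {
      if (err) console.log(err.message);
    });
  }
}
```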

Bot-in-progress:
Here is the source code for my bot:

So far I’ve only tested my bot by logging to the console (so I don’t get immediately blocked by Twitter). This is a video of what it looks like when I run my code. Because I’m using a personal account, I’m tweeting the term “blahblahblahblah” instead of “cunt” to test that everything works:

Right now I’m only tracking “cunt” instead of “bitch” and “cunt”, which I actually think is good because it limits the amount of data coming in. Unfortunately I haven’t been able to figure out how to simultaneously filter by keyword and location (the Twitter API combines the two parameters with OR rather than AND, so you can’t require both at once). The good news is that by not limiting by location, I’m not missing the offenders who don’t have locations associated with their accounts. I’m also limiting the search to non-retweets. I only want the bot to identify OC (Original Content (or, for the more puerile among us, Original “Cuntent”)). You can follow @Gramma_Bot to see the offenders’ messages and Gramma’s responses!
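Continuing the sketch above, the streaming setup itself is only a few lines with twit; the retweet check is what enforces the OC rule:

```javascript
// statuses/filter tracks keywords; track and locations are ORed
// together by the API, which is why the location filter had to go.
var stream = T.stream('statuses/filter', { track: 'cunt' });

stream.on('tweet', function (tweet) {
  if (tweet.retweeted_status) return;  // retweets don't count: OC only
  recordOffense(tweet);                // the counting function sketched above
});
```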

UPDATE: On the same day Gramma Bot launched (March 25, 2017), the application’s writing privileges were revoked. 

Before Gramma got shut down, a few funny things happened:

1. The bot flagged itself as an offender, so it kept calling itself out for using the word “cunt”. This was a silly thing for me to forget to account for in the code, but it does play into the narrative of “LOOK AT WHAT GRAMMA BOT HAS BECOME.” (The one-line fix is sketched after this list.)

2. An offender was amused/baited by the bot, responding with, “I work on those numbers, you cunt”.

3. A crazy lady (whose account has just been suspended), who has devoted her Twitter career to harassing the doctors and nurses who supposedly botched her breast augmentation surgery, assumed that it was the nurses who created Gramma Bot to “bully” her.

4. Gramma Bot retweeted quite a few butt photos from “titty.me”, so I manually got rid of those.
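For the record, the guard that would have spared Gramma the self-flagging in item 1 is one line in the stream handler sketched earlier:

```javascript
stream.on('tweet', function (tweet) {
  // Don't let Gramma flag her own tweets.
  if (tweet.user.screen_name.toLowerCase() === 'gramma_bot') return;
  if (tweet.retweeted_status) return;
  recordOffense(tweet);
});
```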

I’m now deciding whether to appeal Twitter’s restriction and then neuter my bot so it follows Twitter’s automation rules, to create Gramma_Bot2 and inevitably get that account suspended, or to simply let things be.

RIP Gramma Bot.

Final Project: The Water Synth

Featured Posts, Physical Computing

Description:
A waterfall that functions as a musical instrument. When the user passes their hand under the waterfall, notes will play. The note will be sustained as long as they keep their hand in the same position. It will be possible to play chords or intervals (multiple notes at once) using both hands.

Prototype:
Our first prototype was as low-fi as possible–we created a waterfall by pouring a bucket of water into a tupperware container with a slit in it. As users put their hand under the waterfall, Jarone played notes on a scale from his iPad.


The main questions we wanted to answer in the first round of user testing were: Is this fun? And is the system intuitive? The answer to the first question was that people found it fun and entertaining (though there was one tester who was displeased about getting wet). As for the second question, when we didn’t explain that it was a musical instrument, a few people just sat and stared at the waterfall without putting their hands in it. This can be solved in a number of ways: a title card with instructions, a peg that interrupts the stream so that a note is already playing in the “starting position”, or having one of us fool around with it so the audience can see how it works.

We got really nice feedback overall, with users wanting us to experiment with sensors on the y-axis, include LEDs to light the waterfall, and create individual streams so it’s more obvious where the notes occur.

Laser Testing:
After watching some YouTube videos, Jarone and I were fairly certain we could fire lasers directly through the water stream to get a nice hot spot in the water basin below (the water refracts the light of the laser along its path). However, we didn’t realize that your average laser isn’t powerful enough to carry through the stream for more than about 6 inches. Since we want our waterfall to be at least 2 feet tall, this presented a problem:


After testing, we realized that we weren’t going to be able to send the laser through the water’s path, so our plan is to plant the laser directly behind it.

System diagram:


MIDI Communication:
We hooked up a MIDI jack and a photocell to an Arduino for a dry test:


We’ve succeeded in getting notes to play when the photocell is covered, but we’re still figuring out how to get one sustained note. Our next steps are to fix the code and do a wet test with a laser, water basin, and photocell below it.
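The shape of the sustained-note logic we’re working toward: send a MIDI note-on once when the photocell is first covered and a note-off once when it’s uncovered, instead of re-triggering the note on every pass through loop(). This is a sketch; the pin and threshold values are placeholders, not our actual numbers:

```cpp
const int SENSOR_PIN = A0;   // photocell voltage divider (placeholder pin)
const int THRESHOLD = 300;   // below this reading, the cell is "covered"
const byte NOTE = 60;        // middle C

bool notePlaying = false;

void noteOn(byte pitch, byte velocity) {
  Serial.write(0x90);        // note-on, MIDI channel 1
  Serial.write(pitch);
  Serial.write(velocity);
}

void noteOff(byte pitch) {
  Serial.write(0x80);        // note-off, MIDI channel 1
  Serial.write(pitch);
  Serial.write((byte)0);
}

void setup() {
  Serial.begin(31250);       // standard MIDI baud rate
}

void loop() {
  int reading = analogRead(SENSOR_PIN);
  if (reading < THRESHOLD && !notePlaying) {
    noteOn(NOTE, 100);       // trigger once when first covered
    notePlaying = true;
  } else if (reading >= THRESHOLD && notePlaying) {
    noteOff(NOTE);           // release once when uncovered
    notePlaying = false;
  }
}
```

The synth holds the note between the note-on and the note-off, which is what produces the sustain.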

Beginning the waterfall:
We got a 400-gallon-per-hour pump that was too powerful for our initial setup, so the waterfall was shooting toward the wall instead of downward. Then, when we added a valve to regulate the flow, it completely killed the water pressure, so rather than a sheet we got a sad little stream.

Benedetta thinks the valve was probably rated for much lower water pressure than the pump puts out. For the time being we’ll use individual streams for each note, because producing a waterfall is harder than we expected. At this point, separate streams also make more conceptual sense: they show the user where each note occurs. A sheet of water gives the impression of a full range of notes you can slide between, instead of integer values that occur at arbitrary points.

Concurrency Test:
We were able to get not just one sustained note, but multiple notes at the same time! Jarone says the next thing we should do is put all the notes into an array to clean up the code.
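A sketch of what that array cleanup might look like: one entry per sensor/note pair, so the trigger/release logic is written once instead of copy-pasted per note. Pins and pitches here are placeholders:

```cpp
const int NUM_NOTES = 4;
const int SENSOR_PINS[NUM_NOTES] = {A0, A1, A2, A3};   // placeholder pins
const byte PITCHES[NUM_NOTES] = {60, 62, 64, 65};      // C, D, E, F
const int THRESHOLD = 300;

bool playing[NUM_NOTES] = {false, false, false, false};

void sendMidi(byte command, byte pitch, byte velocity) {
  Serial.write(command);
  Serial.write(pitch);
  Serial.write(velocity);
}

void setup() {
  Serial.begin(31250);  // MIDI baud rate
}

void loop() {
  // Same sustain logic as before, but once per array slot.
  for (int i = 0; i < NUM_NOTES; i++) {
    int reading = analogRead(SENSOR_PINS[i]);
    if (reading < THRESHOLD && !playing[i]) {
      sendMidi(0x90, PITCHES[i], 100);   // note-on
      playing[i] = true;
    } else if (reading >= THRESHOLD && playing[i]) {
      sendMidi(0x80, PITCHES[i], 0);     // note-off
      playing[i] = false;
    }
  }
}
```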


The Second Prototype:
When putting together our second prototype, we struggled to find the right sensors. We first tried photocells, but their range of sensitivity was too small. Then we tried both infrared and ultrasonic distance sensors. We ended up using one of each in this second prototype (the IR sensor is the finicky one on the left):


The IR sensor was constantly giving out garbage readings, possibly because it was reflecting off the water in weird ways. The behavior of the ultrasonic sensor didn’t seem to be affected by the water. We decided that the ultrasonic sensor was the better, more reliable option.
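For reference, the basic read for an ultrasonic sensor along the lines of the HC-SR04 (our exact model and pins may differ; treat these as placeholders) looks roughly like this. Note that pulseIn() blocks while waiting for the echo, which comes back to bite us later:

```cpp
const int TRIG_PIN = 9;    // placeholder pins
const int ECHO_PIN = 10;

void setup() {
  pinMode(TRIG_PIN, OUTPUT);
  pinMode(ECHO_PIN, INPUT);
  Serial.begin(9600);
}

long readDistanceCm() {
  // A 10-microsecond pulse on the trigger pin fires one ping.
  digitalWrite(TRIG_PIN, LOW);
  delayMicroseconds(2);
  digitalWrite(TRIG_PIN, HIGH);
  delayMicroseconds(10);
  digitalWrite(TRIG_PIN, LOW);

  // Echo pulse width is the round-trip time; sound covers ~29 us per cm.
  long duration = pulseIn(ECHO_PIN, HIGH, 30000);  // 30 ms timeout
  return duration / 29 / 2;
}

void loop() {
  Serial.println(readDistanceCm());
  delay(50);
}
```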

Getting Ready for Final Class:
No luck pushing the notes into an array, but at least our wires are labeled.

The third prototype:
For our final physical computing class, we really wanted to get four sensors working. Unfortunately, the delays we used to send and receive the ultrasonic pings clogged up the entire system, and the current state of our code only allows for two sensors. However, we made progress in getting one sensor to play multiple notes, as seen in this video:
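One possible way to un-clog things, which we haven’t tried yet: poll a single sensor per pass through loop() with a short pulseIn() timeout, so no one ping can stall the others for long. A sketch, with placeholder pins:

```cpp
const int NUM_SENSORS = 4;
const int TRIG_PINS[NUM_SENSORS] = {2, 4, 6, 8};   // placeholder pins
const int ECHO_PINS[NUM_SENSORS] = {3, 5, 7, 9};

int current = 0;  // which sensor gets pinged this pass

long readDistanceCm(int trig, int echo) {
  digitalWrite(trig, LOW);
  delayMicroseconds(2);
  digitalWrite(trig, HIGH);
  delayMicroseconds(10);
  digitalWrite(trig, LOW);
  // Short timeout (~1 m of range) caps how long one ping can block.
  long duration = pulseIn(echo, HIGH, 6000);
  return duration / 29 / 2;
}

void setup() {
  for (int i = 0; i < NUM_SENSORS; i++) {
    pinMode(TRIG_PINS[i], OUTPUT);
    pinMode(ECHO_PINS[i], INPUT);
  }
  Serial.begin(9600);
}

void loop() {
  long cm = readDistanceCm(TRIG_PINS[current], ECHO_PINS[current]);
  Serial.print(current);
  Serial.print(": ");
  Serial.println(cm);
  current = (current + 1) % NUM_SENSORS;  // round-robin to the next sensor
}
```

Staggering the pings should also cut down on crosstalk between sensors, since only one is listening for an echo at a time.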


We’re still not sure of the best way to “zhush” it up, other than painting the PVC pipe. We still need to figure out what kind of enclosure we want for our circuit, and bring some dignity to the sad plastic basin. We also need to get a working system without using four different Arduinos. But the prototype works!

The Enclosure:
We decided to make a wooden enclosure for the PVC pipe and water basin, and seal it with shellac to make it water-resistant. Our friend Chester was extremely kind and made a model for us, so all Jarone and I had to do was figure out the measurements and assemble it.

Winter Show Highlights:
Jarone and I were fortunate enough to be selected for ITP’s 2016 Winter Show! We got really amazing feedback. Even though many people needed coaxing to actually roll up their sleeves and get wet, they really enjoyed it once they did. The water synth was a hit with basically every demographic, from toddlers to senior citizens. We had a few professional musicians stop by; they had fun guessing what scale we used and creating quick compositions.

Source Code: