Make a Bot that Iterates, Enumerates or Exhausts

Twitter Bot Workshop

For this assignment I wanted to tweet out every last paragraph in a Dan Brown book. For those of you who don’t know, Dan Brown’s signature move is to end a paragraph with one-word italics:




This is a huge source of amusement for me, someone who understands the appeal of Dan Brown books but thinks his prose is garbage! At any rate, I had to change the parameters to “chapter endings” generally. This is because even though most paragraphs fit beautifully within the 140-character max, even Dan Brown will throw in the occasional four-sentence paragraph at the end of a chapter.

Unfortunately, I wasn’t able to finish getting the bot up and running in Node. The crontab scheduler hasn’t worked for me so far, but when it does I’ll update this post.
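If you’re fighting crontab yourself: the usual gotcha is that cron runs with a minimal environment, so `node` often isn’t on its PATH and you have to spell out full paths. A sketch of an hourly entry (all paths here are placeholders for wherever your binary and script actually live):

```shell
# crontab -e: run the bot at the top of every hour, appending output to a log
# (use `which node` to find the full path to your node binary)
0 * * * * /usr/local/bin/node /home/me/bots/danbrown.js >> /home/me/bots/bot.log 2>&1
```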

Site-Specific Story

Collective Narrative

Assignment: Create a narrative experience centered around location.

For this assignment, the “site-specific” location I chose was the kitchen, where I recorded a kind of cooking show podcast episode. Full disclosure: I can barely cook, but I use the show as a framing device to talk about my abuela.

The podcast is about 40 minutes long, and I intend to update this blog post with a more comprehensive breakdown of what I’m talking about when. But if you’re not interested in the cooking process at all, the main storytelling part begins at 21:45 and continues through to the end.

Piecing It Together: Drawing Objects

Piecing It Together

Assignment: Select an oddly shaped object. Make one drawing of the object as you see it. Then, imagine how you would slice it into 2d pieces in order to recreate it, and draw samples of these slices.

Here is my wonky 3D rendering of a toy truck:

Here are the 2D slices. I split the truck into four different sections–the wheel and axle, the front, the rear, and the staircase divider between the front and the wheel. If I’m correct, I could make a truck out of 18 pieces. This 18-piece count does not include the wheel and axle parts, or whatever I’d need to connect the wheels and axle to the body of the truck.

Making Bots with Tracery and CBDQ

Twitter Bot Workshop

Bot #1: German Word Bot
Inspired by Allison’s tangent on the phrase “basketball net” (the tangent being that there’s no rule in the English language making this the conventional phrase–it could just as easily be “basket ballnet” or “basket ball net” or “basketballnet”). She mentioned German compound nouns, which I adore. So I made a quick bot that spat out English definitions of a few of them.
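These CBDQ bots are just Tracery grammars: JSON with a set of replacement rules. As a rough illustration of how expansion works, here’s a tiny Tracery-style expander in plain JavaScript (the nouns and glosses are made-up stand-ins, not the list from my actual bot):

```javascript
// Minimal Tracery-style expander: replaces each #symbol# with a random
// option from the grammar, recursing until no symbols remain.
const grammar = {
  origin: ['#noun##noun#: a compound meaning "#gloss#"'],
  noun: ["Hand", "Schuh", "Zeug"],          // stand-in German nouns
  gloss: ["glove", "thing for the hand"],   // stand-in definitions
};

function expand(grammar, text = "#origin#") {
  return text.replace(/#(\w+)#/g, (_, sym) => {
    const options = grammar[sym];
    const pick = options[Math.floor(Math.random() * options.length)];
    return expand(grammar, pick); // recurse in case the pick contains symbols
  });
}

console.log(expand(grammar));
```

CBDQ does the same thing server-side: you paste in the JSON rules and it expands `origin` once per scheduled tweet.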


Bot #2: Fun with “Alas”
This time I was inspired by this tumblr floating around Facebook:

The concept is to take every instance of the word “Alas” in Shakespeare’s writings and replace it with “Aw, shit”. I decided to take this a step further and replace “Alas” with “fuck”, “shit”, “ack”, and “Alice” (because I love the idea that everybody’s just pissed off at some poor lady named Alice). I didn’t limit my quotations to Shakespeare because I wanted more variety in my bot’s output. I also learned the syntax for getting the bot to remember the first expletive and cap off the whole quotation with “Fucking [same expletive]”.
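For anyone curious about the remembering trick: Tracery lets you save a symbol mid-expansion with a `[key:#rule#]` action and reuse it later in the same expansion. A stripped-down sketch of the idea (the quotation here is a placeholder, not one from my actual corpus):

```json
{
  "origin": ["#[swear:#expletive#]quote#"],
  "expletive": ["fuck", "shit", "ack", "Alice"],
  "quote": ["#swear.capitalize#, poor Yorick! I knew him well. Fucking #swear#."]
}
```

The `[swear:#expletive#]` action picks an expletive once and pins it, so both `#swear#` references in the quote expand to the same word.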

Bot #3: Bots Against Humanity
Though pretty simple, my favorite bot is a critique of the game “Cards Against Humanity”. I know this is a much-beloved game (and I used to enjoy playing it myself), but after reading a few compelling arguments against CAH (like this one), I realized that the game is actually awful.

The inspiration for this bot comes from some friends who use the “Rando Rule” whenever they play CAH. On every turn they add a random extra white card from the deck (“Rando’s deck”), and give “Rando” the black card if the random card gets chosen. Guess what? Rando has won several times, and usually finishes in the top half. This is Rando, if he were a Twitter Bot:

Help for this Bot came from JSON Against Humanity. Here’s my source code:


Posthumous Portraiture Exhibit

Collective Narrative

Visit the Posthumous Portraiture Exhibit at the American Folk Art Museum

We cannot help but hear them whisper through the years, “remember me”

Reading the exhibit’s description in the main hall, I was immediately made uncomfortable by this claim above. I’m sure the writer had benign intent, but projecting desires onto the dead rankles me. Dead children even more so. How arrogant to presume what these children would want. And frankly, the thought of these children peering at me from some alternate plane, longing, pleading to be remembered, is disturbing. I don’t believe in alternate planes, but I would hope that the people in them be blissfully unconcerned with the contents of museums here on earth.

I was off to a sour start, but this is not to say I didn’t take away anything valuable from the exhibit. In the paintings, much of the imagery was what you’d expect–birds both alive and dead, trees both alive and dead, drooping flora, timepieces. However, the recurring image of a child missing a shoe was a sad, interesting way to show that they were no longer tethered to the earth. A non-recurring image that I found particularly affecting was that of a young boy tugging on a dog’s ear. Many of the portraits appear stiff (though it’s hard to blame the artists when they were literally drawing from corpses), but deciding to depict that slice of life was a strong choice.

Unknown Child Holding Doll and Shoe, Attributed to George G. Hartwell (1815–1901)

My favorite paintings were ones that showed some action taking place (like tugging a dog’s ear or batting a shuttlecock). I also appreciated when they presented artifacts alongside the painting; for example, the curators managed to get ahold of a few of the toys that were actually featured in one of the paintings. There was another portrait that was presented next to a daguerreotype of a woman holding that very portrait in her arms. These portraits weren’t made for a museum or a gallery–they were made for grieving families.

Installation view of the 19th-century posthumous paintings of Mary and Francis Wilcox, with the toys they’re pictured with (photo by Allison Meier)

There was something else on the information plaque that I didn’t mention before, but really brought it home for me:

We presume stoic acceptance [of the families] at a time when infant mortality was one in four [but] we cannot judge the depth of another’s pain from the remove of centuries.

I know I’ve had the misconception that people in the 1800s excelled at enduring these sorts of hardships; that they were inured to feelings of loss. But the fact is that these mothers and fathers grieved plenty. In Claudia Emerson’s book “Secure the Shadow” (for which the exhibit was named), she tells of a mother unable to part with her dead child for nine days. On the ninth day, they took the posthumous photograph. It’s wrong to think that the owners of such photos were steeled against death. To us, it might seem macabre to pose for a photo with your dead child, but it makes a lot more sense if there exist no other photos of the two of you together. But it’s still hard to imagine taking comfort in them. It was heartbreaking to see how a dead child could look so much like a sleeping one.

Charles Willson Peale’s portrait of his wife weeping over their daughter

Hourly Comic

Collective Narrative

Assignment: Every hour, stop and document what you’re doing at exactly that moment. Do this for an entire day.

I’m hosting my birthday party at my place tonight and this is the current state of my apartment:

In the past hour I’ve nudged a few pieces of trash nearer to the trash bin and feebly put some dishes in the sink. I am in serious need of pump-up music, so I listen to the indie classic “Lisztomania”.

Brunch with mom at our usual place, Cowgirl Seahorse. Mom has some self-professed verbal dyslexia, so she usually calls it “Seagirl Cowhorse”, but today she gets it mostly right with “Cowgirl Seawhore”.

I go to the supermarket to pick up beers for the party. Paradox of choice.


The first sound is me mopping a stubborn bike tire track. The second is me chasing a piece of dry spaghetti with my vacuum cleaner.



My party starts in 30 minutes but EVERYTHING LOOKS THE SAME WHYYYY

No one’s arrived yet so my boyfriend entertains me by juggling limes.

9:30 and 10:30pm
For my birthday I’m hosting a “Bad Movie Night”–we screen a movie so awful it’s actually entertaining to watch. We’ve chosen “Garzey’s Wing”, a low-budget anime film. The characters are basically all voiced by the same two people, one of whom sounds like Lisa Kudrow on horse tranquilizers. At this point in the evening I decided to cheat on the assignment and have some of my friends make sketches for me. The 9:30 sketch is by Brian Garvey, who’s trying to capture the anticipation of Garzey’s Wing (we still hadn’t started the movie). The 10:30 sketch is by Lindsey Daniels, who’s trying to capture our utter confusion (like, are they going to Gabajuju? Or is a character named Gabajuju? Or is Gabajuju a weapon?? etc.)

This is another one of my sketches. I don’t want to go into too much detail here, but basically my mom has asked me to store a monstrous chair in my apartment (wider than my couch and almost as long). I’m supposed to hold onto this thing until she makes room for it in her apartment. (I feel like I’m representing my mom in a bad light here and I just want to say that in all other respects she is an awesome lady.) Anyway, here’s the chair, and my friend Eddie sitting atop it.

This sketch is by my friend Rita. The aforementioned Lindsey introduced us to a reality show called “Solitary” where contestants are literally put in solitary confinement. People are disturbed by this and there is a mass exodus from my apartment. Lindsey is apologetic.

I own mugs with feminist messages written in French (a fact unsurprising to anyone who knows me). I don’t really get the full meaning of “Femme de l’être” (roughly, “woman of being”), though. At any rate, by 1:30am everyone’s left and Max and I are drinking tea.

Max has fallen asleep. This is what the apartment looks like post-party.

Final Thoughts:
When Marianne gave us this assignment, I was worried that this exercise in introspection would drive me irreversibly to madness. But I actually liked making little doodles all day long. I don’t think the assignment really changed the course of my day, because I was going to be eating, shopping, cleaning and partying regardless. Furthermore, the assignment was less disruptive than I thought it would be. In the morning I was worried that every time I stopped to document something I would lose all of my cleaning momentum, but actually I think the breaks were somewhat restorative and helped propel me toward my goal. As an aside, I don’t really enjoy taking photographs in my day-to-day life (you’ll note there are no actual photographs from the party), but having my friends create a few sketches gave me nice mementos from the evening that I wouldn’t have otherwise had.


Making a Scene in Unity


I decided to make a scene about a crazy cat lady. My concept was to have her crawl around her apartment on all fours making meowing noises whenever she went inside a cardboard box.

The learning curve was steep. I spent many hours trying to get her out of the goddamn floor:


Eventually I realized that I needed to add humanoid rigging to each individual animation. With the help of a vertical input, I managed to get her crawling in place!

I then added a horizontal input and decided to make her horizontal movements a “zombie crawl” for variety’s sake. Getting the character’s body to move wasn’t difficult, but it looked really wonky because the crawling animations I downloaded from Mixamo only went in one direction (so her arms and legs would claw forwards but her whole body would move sideways). I found some code that allowed me to rotate the character in the direction I wanted, but I ran into a problem when I attached the main camera to the character:

The character was impossible to control from this perspective. I realized a limitation of the rotation code I pulled is that each arrow key is paired with a cardinal direction. In other words, hitting the left arrow key only ever lets the character face West. It doesn’t make the character “turn to the left”.

Triggering a sound event was surprisingly challenging. After a barrage of error messages, I discovered I was using an outdated API that was no longer compatible with Unity 5. But even after using the correct function (GetComponent) I was totally unable to get my character to trigger the sound. Eventually I noticed that I had set the scale height and radius of my character controller to zero (because it was weirdly making my character levitate, as seen in the video above), so there was nothing “colliding” with my collision box. I made the adjustments, and the heavenly meows issued forth. Here is the scene in its current state:

Things I would like to adjust in the future:

  1. Fix character rotation and make the camera angle 3rd person perspective
  2. It would be way funnier if instead of using an actual cat meow for my sound clip, I recorded myself making a bunch of different meowing noises
  3. More boxes!

Final Project: The Water Synth

Featured Posts, Physical Computing

A waterfall that functions as a musical instrument. When the user passes their hand under the waterfall, notes will play. The note will be sustained as long as they keep their hand in the same position. It will be possible to play chords or intervals (multiple notes at once) using both hands.

Our first prototype was as low-fi as possible–we created a waterfall by pouring a bucket of water into a tupperware container with a slit in it. As users put their hand under the waterfall, Jarone played notes on a scale from his iPad.


The main questions we wanted to answer in the first round of user testing were: Is this fun? And is the system intuitive? The answer to the first was that people found it fun and entertaining (though one tester was displeased about getting wet). As for the second, when we didn’t explain that it was a musical instrument, a few people just sat and stared at the waterfall without putting their hands in it. This can be solved in a number of ways (e.g., a title card with instructions, a peg that interrupts the stream so that a note is already playing in the “starting position”, or one of us fooling around with it so the audience can see how it works).

We got really nice feedback overall, with users wanting us to experiment with sensors on the y-axis, include LEDs to light the waterfall, and create individual streams so it’s more obvious where the notes occur.

Laser Testing:
After watching some YouTube videos, Jarone and I were fairly certain we could fire lasers directly through the water stream to get a nice hot spot in the water basin below (the water refracts the light of the laser along its path). However, we didn’t realize that your average laser isn’t powerful enough to travel more than about 6 inches. Since we want our waterfall to be at least 2 feet tall, this presented a problem:

After testing, we realized that we aren’t going to be able to send the laser through the water’s path, so our plan is to plant the laser directly behind it.

System diagram:


MIDI Communication:
We hooked up a MIDI jack and a photocell to an Arduino for a dry test:


We’ve succeeded in getting notes to play when the photocell is covered, but we’re still figuring out how to get one sustained note. Our next steps are to fix the code and do a wet test with a laser, a water basin, and a photocell below it.

Beginning the waterfall:
We got a 400-gallon-per-hour pump that was too powerful for our initial setup, so the waterfall was shooting toward the wall instead of downward. Then when we added a valve to regulate the flow, it completely killed the water pressure, so rather than a sheet we got a sad little stream.

Benedetta thinks the valve was probably rated for a much lower water pressure than the pump puts out. For the time being we’ll use individual streams for each note, because producing a waterfall is harder than we expected. At this point, separate streams also make more conceptual sense: they show the user where each note occurs. A sheet of water gives the impression of a full range of notes you can slide between, instead of integer values that occur at arbitrary points.

Concurrency Test:
We were able to not just get one sustained note, but multiple notes at the same time! Jarone says the next thing we should do is put all the notes into an array to clean up the code.

The Second Prototype:
When putting together our second prototype, we struggled to find the right sensors. We first tried photocells, but their range of sensitivity was too small. Then we tried both infrared and ultrasonic distance sensors. We ended up using one of each in this second prototype (the IR sensor is the finicky one on the left):

The IR sensor was constantly giving out garbage readings, possibly because it was reflecting off the water in weird ways. The behavior of the ultrasonic sensor didn’t seem to be affected by the water. We decided that the ultrasonic sensor was the better, more reliable option.

Getting Ready for Final Class:
No luck pushing the notes into an array, but at least our wires are labeled.

The third prototype:
For our final physical computing class, we really wanted to get four sensors working. Unfortunately the delays that we used to send and receive ultrasonic pings clogged up the entire system. The current state of our code only allows for two sensors. However, we made progress in terms of getting one sensor to play multiple notes, as seen in this video:

We’re still not sure of the best way to “zhush” it up, other than by painting the PVC pipe. We still need to figure out the kind of enclosure we want for our circuit, and bring some dignity to the sad plastic basin. We also need to get the whole system working without using four different Arduinos. But the prototype works!

The Enclosure:
We decided to make a wooden enclosure for the PVC pipe and water basin, and seal it with shellac in order to make it water resistant. Our friend Chester was extremely kind and made a model for us, so all Jarone and I had to do was figure out the measurements and assemble it.

Winter Show Highlights:
Jarone and I were fortunate enough to be selected for ITP’s 2016 Winter Show! We got really amazing feedback. Even though many people needed coaxing to actually roll up their sleeves and get wet, they really enjoyed it once they did. The water synth was a hit with basically every demographic, from toddlers to senior citizens. A few professional musicians stopped by and had fun guessing what scale we used and creating quick compositions.

Source Code:

Animating in After Effects


For this assignment, I decided to do an animated adaptation of Matt Getty’s short story When My Girlfriend Lost the Weight. I wanted to use individual parts of the body as focal points for each scene, and superimpose them onto a wooden manikin. I was interested in contrasting a living, moving body with a blank, hard surface.

One of the things that I appreciate about the story is how as the narrator’s girlfriend experiences body dysmorphia, the story’s structure becomes more surreal and grotesque. I wish I could have done a better job of incorporating this into my own animation–I think it’s better when this kind of imagery sneaks up on the audience, rather than slaps them in the face. Even though this may lose me points for subtlety, I’m glad I took a risk and tried to tell this story.

Stop-Motion Animation


Assignment: Make a 15-30 second stop-motion video.

The advice our professor gave was to think of a problem that can be solved in under thirty seconds. The first thought that came to mind was tangled shoelaces. So our group decided to animate one shoe attempting to untangle another shoe’s laces. Anthropomorphizing shoes seemed like a fun challenge.

We ran wire through the laces in order to manipulate them frame by frame. However, we realized halfway through shooting that actually getting the laces untangled would be a nightmare. So the project veered in an amusing but NSFW direction. Here is the final product: