Physical Computing

SPHERE

For my final P-Comp / ICM project, I want to create a physical, interactive 360 video experience.

I love this weird transitional moment we're in – the industry is racing toward immersive virtual experiences, yet the hardware is clunky, awkward, hard to operate, and generally not sexy.  There is so much we don't know yet about our new storytelling tools, and I want to explore what might be out there.

The idea is to use two, possibly three projectors to map 360 video onto a giant round hanging lantern, where the viewer sticks her head inside while seated on a chair. A tiny camera mounted within detects when the viewer blinks – a “long blink” (eyes closed for a second or more) triggers an edit, either by cutting to a different scene or by flipping the projected 360 video upside down.  There is a physical controller as well, with “play”, “pause”, and “rewind” buttons, reminiscent of VCRs.

There are a lot of components involved, but I’ll be using OpenCV for blink detection, Arduino for the “VCR” controls, and MadMapper for projecting:
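Before wiring up the full OpenCV pipeline, the “long blink” timing can be sketched on its own. This is a minimal sketch of the trigger logic, assuming some hypothetical `eyesClosed` boolean arrives each frame from the (not-yet-built) eye detector; the 1-second threshold comes from the project description above.

```javascript
const LONG_BLINK_MS = 1000; // eyes closed for >= 1s triggers an edit

// Returns an update function to call once per frame with the current
// eye state (from the eye detector) and a timestamp in milliseconds.
// It returns true exactly once per long blink.
function makeBlinkTrigger(longBlinkMs = LONG_BLINK_MS) {
  let closedSince = null; // when the eyes first closed, or null if open
  let fired = false;      // only fire one edit per blink

  return function update(eyesClosed, nowMs) {
    if (!eyesClosed) {
      closedSince = null;
      fired = false;
      return false;
    }
    if (closedSince === null) closedSince = nowMs;
    if (!fired && nowMs - closedSince >= longBlinkMs) {
      fired = true;
      return true; // time to cut to a new scene or flip the video
    }
    return false;
  };
}
```

The “fire once” flag matters: without it, holding your eyes closed would trigger an edit on every frame after the first second.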

I tested out the projection with a smaller lantern and the effect is exactly what I hoped for: the video looks grainy and nostalgic in some way. This is the view from inside the sphere:

http://alpha.editor.p5js.org/mai/sketches/r1GZG_Ikz

https://alpha.editor.p5js.org/mai/sketches/rkdV0swkM


 

P-Comp // Wings!

This week Luna and I paired up for our Serial Communication assignment to make a pair of wings in P5.js that are controlled by a photoresistor:

It was a little tricky to get all the moving parts to work together (especially linking the serial port to our sketch), but Luna drew out the wings, I mapped out their up-and-down motion using a variable in p5, and we tackled mapping the wings’ path to the photoresistor/Arduino together.
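The mapping step we tackled together can be sketched as a pure function, assuming the Arduino sends readings in the standard 0–1023 analog range; the 100–400 pixel range for the wings is a made-up example, and `mapRange` reimplements p5’s built-in `map()` so the sketch runs on its own.

```javascript
// Plain-JS version of p5's map(): linearly rescale a value from
// one range to another.
function mapRange(value, inMin, inMax, outMin, outMax) {
  return outMin + ((value - inMin) / (inMax - inMin)) * (outMax - outMin);
}

// Translate a photoresistor reading (0–1023, assumed) into the wings'
// vertical position: more light -> higher sensor value -> wings rise
// (smaller y in screen coordinates).
function wingY(sensorValue) {
  return mapRange(sensorValue, 0, 1023, 400, 100);
}
```

Inside the actual p5 sketch this would be a one-liner with `map()`, called each time a new serial value arrives.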

I also made a fading circle with the potentiometer 🙂

This feels like a breakthrough in physical computing for me because I’m finally able to incorporate digital content with these sensors, and I’ve never made anything on screen that’s truly interactive like this!

 

week 3: P-Comp // Nitelite // Elevator Musings

This week I learned how to make an analog input using a photoresistor, and mapped it to the brightness of the LED. I inverted the values so that the LED lights up brighter when the room is dark. I couldn’t quite figure out how to make the LED turn completely off (no matter how much light I pointed at the photoresistor, the LED stayed somewhat lit, though dim).
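One likely culprit for the LED never fully turning off: a photoresistor rarely spans the whole 0–1023 ADC range, so mapping from the full range leaves the brightness above zero even in bright light. A sketch of the fix, in JavaScript rather than Arduino code so it stands alone – calibrate to the sensor’s observed range (the 50–900 figures here are invented examples, not measured values) and clamp the result:

```javascript
// Clamp a value into [lo, hi].
function clamp(value, lo, hi) {
  return Math.min(Math.max(value, lo), hi);
}

// Inverted mapping: dark room (low reading) -> full brightness (255),
// bright room (reading at or above sensorMax) -> fully off (0).
// sensorMin/sensorMax are assumed calibration values for the observed
// range of this particular photoresistor.
function ledBrightness(sensorValue, sensorMin = 50, sensorMax = 900) {
  const t = (sensorValue - sensorMin) / (sensorMax - sensorMin);
  return Math.round(clamp((1 - t) * 255, 0, 255));
}
```

On the Arduino side the same idea is `constrain(map(sensorValue, sensorMin, sensorMax, 255, 0), 0, 255)` fed into `analogWrite()`.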

 

OBSERVATION // Interactive Technology in Public:

I chose to observe elevators, as they are interactive vehicles used in nearly all public buildings in New York City (and some private ones, if their tenants are lucky).  Our city is known for its high-rises and “skyscrapers”, which allow an insane number of people to inhabit and work on one tiny island – its elevators expand Manhattan’s real estate vertically, enabling billions of dollars of industry to operate on Wall Street, in Midtown, and beyond, and ITP kids to pursue their dreams on the 4th floor of 721 Broadway.

Elevator riders at Tisch tend to subscribe to a certain degree of etiquette, but like most New Yorkers, are almost always in a rush.  My assumption was that people who seemed to be in a hurry would resort to taking the stairs, but it seemed that once people decided to wait for or call the elevator, they would stubbornly stick with that decision, even if the ride seemed far from arriving.

A recurring difficulty across the board – whether people were waiting for the elevator in the lobby, inside the elevator, or waiting on a higher floor – was that the buttons lit up very dimly, and sometimes not at all. Most people would end up pushing the button multiple times, and new arrivals were hesitant about whether they should push the button again just for good measure, at the risk of seeming impatient or rude. Also, when people were on floors between the lobby and the top floor, it was hard to tell which direction the elevator was traveling, as the indicating arrows are on the sliding doors themselves and aren’t clearly visible. This forced a lot of verbal communication and finger-pointing signals (up or down) to ensure that people were traveling in the right direction. These interactions inevitably tacked a few seconds onto every floor the elevator stopped on; not the most efficient, but they made for some nice eye contact and social connection.

The most efficient riders were those who paid close attention to the status displays above the elevators, which indicate which floor each elevator is on at any given time. This allowed riders to weigh the number of people waiting for a ride against the estimated arrival of the next one – the assumption being that waiting for the following ride = fewer fellow riders = a more direct and efficient ride to the destination floor. These strategic riders were almost always alone, as they were able to focus on the status of the elevators rather than being engaged in conversation (unless they were engaged with their phones). Another interesting distinction among strategic riders was that some chose to also monitor the status of the two elevators across the lobby and run over at the opportune time, whereas some chose to focus on just the two directly in front of them. Those who stuck to one side of the lobby tended to be closer in proximity to their target elevators, or further ahead in the vague semblance of a line, so they had more of an investment in one side of the building.

My takeaway is that the Tisch elevators are truly subpar, and it’s surprising that there have been zero efforts to improve their efficiency since I attended classes as an undergrad here over 10 years ago.  It would be interesting to map out a scenario in which all four elevators are overhauled – whether increasing their size would make things better or worse (increasing their travel speed would certainly improve efficiency), and how much time even simple fixes, such as clearer up/down signifiers, would collectively save the Tisch community.

week 2 (P-Comp): LEDs


This was a challenging week for me because the last time I tried to learn about circuits was in an after-school program in the 2nd grade. When I realized all the kids in the class were boys, my timidness got the best of me and I refused to go back. (yay Ayah Bdeir / littleBits!!) Anyhow, better late than never.

For the exercise, I demonstrated what I learned by lighting up the LED through a key connected to my keychain, so that when I remove the key, the LED turns off to indicate that it’s missing. It seems simple enough, but it’s a big deal for me!

   

week 1 (P-Comp): what is interaction?

I’d loosely define physical interaction as a reciprocal activity that establishes or defines a relationship or engagement between two or more elements in a space.

Breaking down Chris Crawford’s interpretation into the following steps, in my own words:

  • Collecting a perspective
  • Interpreting or processing
  • Generating outcome dependent on input/perspective collected

The degree to which the interactivity is enhanced depends on the quality of the first two steps (collecting and interpreting a perspective) and the speed of the last (producing an outcome).

Good physical interaction is intriguing and stimulating – successful interaction might make someone see/think/hear/interpret/smell/feel/imagine things in a new way, as its feedback provides an additive or transformational layer.

I appreciate the important distinction between user interface and interactivity. Stripping the prefix “inter” from each term leaves “face” and “activity”, which brings to mind Bret Victor’s rant on the limitations of a screen/facade vs. the Chinese proverb, “I hear and I forget; I see and I remember; I do and I understand.”

An example of digital technology that isn’t interactive is a surveillance camera – although the visual outcomes depend on the collection and processing of images, there is no reciprocation. The camera is merely reacting to its surroundings and has no awareness that would let it modify its behavior based on anything it captures.

Readings: