week 3: P-Comp // Nitelite // Elevator Musings

this week I learned how to make an analog input using a photoresistor, and mapped it to the brightness of the LED. I inverted the values so that the LED lights up brighter when the room is dark. I couldn’t quite figure out how to make the LED turn completely off (no matter how much light I pointed at the photoresistor, the LED stayed somewhat lit, though dim).
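for reference, here’s a minimal sketch of the kind of code this exercise involves – the pin numbers and calibration values below are assumptions for illustration, not necessarily what I used. the “never fully off” problem is usually because map() is fed a wider range than the sensor actually produces in the room, so the output never reaches 0; measuring the sensor’s real min/max and clamping with constrain() fixes that:

    // assumed wiring: photoresistor voltage divider on A0, LED on PWM pin 9
    const int sensorPin = A0;
    const int ledPin = 9;

    // measured range of the sensor in this room (placeholder values –
    // if these don't match reality, map() never reaches 0 or 255,
    // which is why the LED may never turn fully off)
    const int sensorLow = 200;
    const int sensorHigh = 900;

    void setup() {
      pinMode(ledPin, OUTPUT);
    }

    void loop() {
      int reading = analogRead(sensorPin);  // 0-1023; higher = brighter room (depends on divider orientation)
      // inverted mapping: bright room -> dim LED, dark room -> bright LED
      int brightness = map(reading, sensorLow, sensorHigh, 255, 0);
      brightness = constrain(brightness, 0, 255);  // clamp readings outside the calibrated range
      analogWrite(ledPin, brightness);
    }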


OBSERVATION // Interactive Technology in Public:

I chose to observe elevators, as they are interactive vehicles used in nearly all public buildings in New York City (and some private ones, if their tenants are lucky). Our city is known for its high-rises and “skyscrapers”, which allow an insane number of people to inhabit and work on one tiny island – its elevators expand Manhattan’s real estate vertically, enabling billions of dollars of industry to operate on Wall Street, in Midtown, and beyond, and ITP kids to pursue their dreams on the 4th floor of 721 Broadway.

Elevator riders at Tisch tend to subscribe to a certain degree of etiquette but, like most New Yorkers, are almost always in a rush. My assumption was that people who seemed to be in a hurry would resort to taking the stairs, but it seemed that once people decided to call and wait for the elevator, they would stubbornly stick to that decision, even if the ride seemed far from arriving.

A recurring difficulty across the board – whether people were waiting for the elevator in the lobby, inside the elevator, or waiting for it on a higher floor – was that the buttons lit up very dimly, and sometimes not at all. Most people would end up pushing the button multiple times, and new arrivals were hesitant about whether they should push the button again just for good measure, at the risk of seeming impatient or rude. Also, when people were on floors between the lobby and the highest floor, it was hard to tell which direction the elevator was traveling, as the indicating arrows are on the inside of the doors that slide open and aren’t clearly visible. This forced a lot of verbal communication and finger-pointing signals (up or down) to ensure that people were traveling in the right direction. These interactions would inevitably tack a few seconds onto every floor the elevator stopped on; not the most efficient, but they made for some nice eye contact and social connection.

The most efficient riders were those who paid close attention to the indicators above the elevators, which show which floor each car is on at any given time. This allowed riders to weigh the number of people waiting for a ride against the estimated arrival of the next car – the assumption being that waiting for the following ride = fewer fellow riders = a more direct and efficient ride to the destination floor. These strategic riders were almost always alone, as they were able to focus on the status of the elevators rather than being engaged in conversation (unless they were engaged with their phones). Another interesting distinction among strategic riders was that some chose to also monitor the status of the other two elevators across the lobby and run over at the opportune moment, whereas some chose to focus on just the two directly in front of them. Those who stuck to one side of the lobby tended to be closer to their target elevators, or further ahead in the vague semblance of a line, so they had more of an investment in one side of the building.

My takeaway is that the Tisch elevators are truly subpar, and it’s surprising that there have been zero efforts to improve their efficiency since I attended classes here as an undergrad over 10 years ago. It would be interesting to map out a scenario in which all four elevators are overhauled: whether increasing their size would make things better or worse (certainly increasing the speed at which they travel would improve efficiency), and how much time even simple solutions, such as clearer up/down signifiers, would collectively save the Tisch community.

week 2 (P-Comp): LEDs


this was a challenging week for me because the last time I tried to learn about circuits was in an after-school program in the 2nd grade. when I realized all the kids in the class were boys, my timidity got the best of me and I refused to go back. (yay Ayah Bdeir / littleBits!!) anyhow, better late than never.

for the exercise, I demonstrated what I’d learned by wiring a key from my keychain into the circuit as a switch, so that when I remove the key, the LED turns off to indicate that it’s missing. seems simple enough, but it’s a big deal for me!
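for posterity, here’s a minimal sketch of how the same idea could be read by a microcontroller instead of a bare circuit – the pin numbers and INPUT_PULLUP wiring are hypothetical, for illustration only, not a record of my actual build:

    // hypothetical Arduino version of the keychain idea:
    // the key bridges a contact wired between pin 2 and ground,
    // and the LED (with a current-limiting resistor) sits on pin 13
    const int keyPin = 2;
    const int ledPin = 13;

    void setup() {
      pinMode(keyPin, INPUT_PULLUP);  // pin reads LOW when the key closes the contact
      pinMode(ledPin, OUTPUT);
    }

    void loop() {
      bool keyPresent = (digitalRead(keyPin) == LOW);
      digitalWrite(ledPin, keyPresent ? HIGH : LOW);  // LED on while the key is in place
    }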


week 1 (P-Comp): what is interaction?

I’d loosely define physical interaction as a reciprocal activity that establishes or defines a relationship or engagement between two or more elements in a space.

Breaking down Chris Crawford’s interpretation into the following steps, in my own words:

  • Collecting a perspective
  • Interpreting or processing
  • Generating an outcome dependent on the input/perspective collected

The degree to which the interactivity is enhanced depends on the quality of the first two steps (collecting and interpreting a perspective) and the speed of the last (producing an outcome).

Good physical interaction is intriguing and stimulating – successful interaction might make someone see/think/hear/interpret/smell/feel/imagine things in a new way, as its feedback provides an additive or transformational layer.

I appreciate the important distinction between user interface and interactivity. breaking the prefix “inter” off of each, leaving “face” and “activity”, brings to mind Bret Victor’s rant on the limitations of a screen/facade vs. the Chinese proverb, “I hear and I forget; I see and I remember; I do and I understand.”

An example of digital technology that isn’t interactive is a surveillance camera – although the visual outcomes depend on the collection and processing of images, there is no reciprocation. the camera merely reacts to its surroundings and isn’t able to modify its behavior based on anything it captures.


week 1 (ICM): omg im coding!

WHAT an exciting weekend this was!

being that I’m 100% new to coding, this was truly a BIG moment for me.

I went ahead and rounded out the vibe by messing with the background values til I achieved the “look” I was going for:

Looking at my perfect circle in hot pink/orange, I realized it would be best to make a cat. so I shrunk the circle into a teeny tiny head and made a cat!

so I got a little carried away with the creation of this and forgot to fully document the process (lesson learned)! anyhow, I initially regretted making her head so tiny because I had to squint to keep working on her, but I was too far into the process to start re-scaling and re-positioning all her elements. I managed to turn this into an opportunity to create an environment for her, so she now has a moon (with craters that are all shaped the same, because those arc()s are real tricky) and a sun to boot!

Generally, I found the web editor really easy to use, and it made everything feel cohesive. I felt safe venturing out to break the example code and experiment. The draw function in particular reminded me a lot of my time using Premiere to adjust scale/position/rotation, etc., by manually entering values. that being said, I didn’t even try to get into rotate(), because apparently it rotates the entire canvas rather than the shape itself as an object. yikes. and I couldn’t quite figure out how to get the arc() I wanted for the tail and the craters, but I made do. hitting the “play” button to see the outcome also took a little getting used to.
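for future me, here’s a minimal p5.js sketch of the trick I skipped – rotate() really does transform the whole canvas, but wrapping it in push()/translate()/pop() lets a shape spin around its own center. the coordinates, sizes, and colors here are made up for illustration, not my actual cat:

    function setup() {
      createCanvas(400, 400);
    }

    function draw() {
      background(255, 105, 60);  // placeholder hot pink/orange vibe

      // rotate() spins the whole coordinate system, so to rotate one shape
      // in place: move the origin to the shape, rotate, draw at (0, 0), restore
      push();                       // save the current coordinate system
      translate(200, 200);          // origin is now at the shape's center
      rotate(radians(frameCount));  // rotate around that new origin
      ellipse(0, 0, 60, 40);        // spins in place instead of around the canvas corner
      pop();                        // restore the coordinate system

      // arc() takes start/stop angles (radians by default):
      // 0 to PI draws the bottom half of a circle – handy for craters or a curled tail
      arc(100, 320, 50, 50, 0, PI);
    }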

***************

I definitely see how computation applies to my background and interest in video editing, sound design / music editing, immersive (virtual/augmented/mixed/extended) realities and platforms, and cats.

As a video editor, I think that the magical human element and decision-making process are essential to putting a meaningful cut together, but “happy accidents” have also been a big part of my creative process. I could see computation helping generate those in a creative but controlled way, especially in less traditionally narrative forms, like fashion commercials or video art. And then there are so many possibilities in generating graphics, animating them, and making them interactive.

Moving beyond video into the immersive sphere, I’m increasingly interested in the relationship between physical and virtual spaces, and in augmenting virtual spaces with physical elements to heighten the experience. I can envision mapping some kind of defunct, tangible keyboard to one that reveals itself in a virtual space and creates beautiful notes that come alive and animate (perhaps multi-user duets or symphonies could be created as well).

Moving further beyond that, when it comes to thinking about computation for an entire immersive platform, my brain implodes, so I’ll have to think about that some other time.

This project, “City Symphonies – Westminster” by Mark McKeague, is really cool!! I feel like these kinds of sound-based projects aren’t always pleasant to listen to, but this one is nice. and I like the execution of the animated notes mimicking the traffic patterns.

And as for cats, I hope my computational drawing above speaks for itself.