Archive

Fall 2012

Group Members: Simone Roth, Kelly Truong, Jeffrey Leung

For this project, we couldn’t decide on a single idea, so we fused aspects of each of our proofs of concept together: hats from mine, falling snow from Simone’s, and a split screen from Kelly’s. Since we are not experts with Processing and its integration with Kinect motion detection, we didn’t want to attempt something too hard for us to handle.


^Please watch in 720p HD

Thus, we created a fun interactive piece that places a hat on the subject’s head. We added a feature where every time the subject claps his or her hands, the hat changes randomly. With 14 different hats stored, the subject can browse through them and strike funny poses accordingly.


Our code uses OpenNI and the Kinect to detect the subject’s head and body. We used arrays to organize and display the hats, and if statements to determine what happens when the subject claps his or her hands together. We also used if statements to determine whether the subject is on the left or right side of the screen, which activates the snowfall. The snowflakes fall at random speeds and locations to make the effect more realistic. Sometimes the subject’s skeleton, once calibrated, is a bit off to the side, and sometimes it’s spot on. We don’t know why the outcome is so inconsistent, but we adjusted the positioning of the hat accordingly so it fits the subject’s head reasonably well.
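A minimal sketch of how this fits together, assuming the SimpleOpenNI wrapper for Processing (one common way to use OpenNI with the Kinect); the hat filenames, the clap-distance threshold, and the pixel offsets below are illustrative stand-ins, not our exact values:

    // Sketch of the hat/clap/snow logic; SimpleOpenNI 1.x API assumed.
    import SimpleOpenNI.*;

    SimpleOpenNI context;
    PImage[] hats = new PImage[14];     // the 14 stored hats
    int currentHat = 0;
    boolean handsWereTogether = false;  // edge-detects a clap

    void setup() {
      size(640, 480);
      context = new SimpleOpenNI(this);
      context.enableDepth();
      context.enableUser();             // skeleton tracking
      for (int i = 0; i < hats.length; i++) {
        hats[i] = loadImage("hat" + i + ".png");  // hypothetical filenames
      }
    }

    void draw() {
      context.update();
      image(context.depthImage(), 0, 0);
      int[] users = context.getUsers();
      for (int i = 0; i < users.length; i++) {
        int id = users[i];
        if (!context.isTrackingSkeleton(id)) continue;

        PVector head = new PVector(), lHand = new PVector(), rHand = new PVector();
        context.getJointPositionSkeleton(id, SimpleOpenNI.SKEL_HEAD, head);
        context.getJointPositionSkeleton(id, SimpleOpenNI.SKEL_LEFT_HAND, lHand);
        context.getJointPositionSkeleton(id, SimpleOpenNI.SKEL_RIGHT_HAND, rHand);

        // Treat hands closer than ~100 mm in world space as a clap,
        // and switch hats only on the frame the clap begins.
        boolean handsTogether = lHand.dist(rHand) < 100;
        if (handsTogether && !handsWereTogether) {
          currentHat = int(random(hats.length));
        }
        handsWereTogether = handsTogether;

        // Project the head joint to screen space and hang the hat on it;
        // the -60/-120 offsets are the kind of manual nudging described above.
        PVector headScreen = new PVector();
        context.convertRealWorldToProjective(head, headScreen);
        image(hats[currentHat], headScreen.x - 60, headScreen.y - 120);

        // Snow is only active while the subject stands on one half of the frame.
        if (headScreen.x < width / 2) {
          // drawSnow();  // flakes respawn with random x positions and fall speeds
        }
      }
    }

    void onNewUser(SimpleOpenNI curContext, int userId) {
      curContext.startTrackingSkeleton(userId);  // auto-calibration in recent versions
    }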


We had the idea to use only winter-themed hats, which would go well with the falling snowflakes, but we found there is only a limited variety of winter hats available. Since we had already created the code for the falling snowflakes, we wanted to keep it, but only activate the snow when the subject is on one side of the screen.

We wanted to do something commercial that would appeal to the masses. We want our idea to be something you could see at a local store or mall, not just at an art gallery. At a grander scale, we would hope to have more than 50 different hats and have the hat alignment flawlessly executed every time.


We had a lot of fun creating this and will most definitely explore the Kinect in Processing further in future projects.


SixthSense is a wearable gestural interface developed by Pranav Mistry that augments the physical world around us with digital information and lets us use natural hand gestures to interact with the information.

The SixthSense prototype comprises a pocket projector, a mirror and a camera contained in a device worn around the user’s neck. Both the projector and the camera are connected to a mobile computing device in the user’s pocket. The projector projects visual information, enabling surfaces, walls and physical objects around us to be used as interfaces, while the camera recognizes and tracks the user’s hand gestures and physical objects using computer-vision-based techniques. The software processes the video stream captured by the camera and tracks the locations of the colored markers (visual tracking fiducials) at the tips of the user’s fingers. The movements and arrangements of these fiducials are interpreted as gestures that act as interaction instructions for the projected application interfaces.
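As a toy illustration of the fiducial-tracking idea (this is not Mistry’s code), a Processing sketch can scan each webcam frame for pixels near a marker color and treat their centroid as one fingertip; the marker color and both thresholds are assumptions:

    import processing.video.*;

    Capture cam;
    color marker = color(255, 0, 0);  // e.g. a red fingertip cap
    float threshold = 60;             // max color distance to count as marker

    void setup() {
      size(640, 480);
      cam = new Capture(this, width, height);
      cam.start();
    }

    void draw() {
      if (!cam.available()) return;
      cam.read();
      image(cam, 0, 0);
      cam.loadPixels();

      // Accumulate the positions of all marker-colored pixels.
      float sumX = 0, sumY = 0;
      int count = 0;
      for (int y = 0; y < cam.height; y++) {
        for (int x = 0; x < cam.width; x++) {
          color c = cam.pixels[y * cam.width + x];
          float d = dist(red(c), green(c), blue(c),
                         red(marker), green(marker), blue(marker));
          if (d < threshold) { sumX += x; sumY += y; count++; }
        }
      }

      // With enough hits, the centroid approximates the fiducial's location.
      if (count > 50) {
        fill(255, 255, 0);
        noStroke();
        ellipse(sumX / count, sumY / count, 20, 20);
      }
    }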

For example, a newspaper can show live video news, or dynamic information can be provided on a regular piece of paper. The gesture of drawing a circle on the user’s wrist projects an analog watch. The current prototype system costs approximately $350 to build. SixthSense bridges the gap between the physical world and digital information, bringing intangible, digital information out into the tangible world and allowing us to interact with it via natural hand gestures.


Machinima is a method of making animated films using real-time software similar to (or taken directly from) video and computer game engines. One of the earliest examples of machinima is Red vs. Blue, a set of related comic science-fiction video series created by Rooster Teeth Productions using the Halo game series. Red vs. Blue tells the story of two groups of soldiers fighting a civil war in a desolate box canyon.

The videos were primarily produced using traditional machinima techniques: synchronizing video footage from a game to pre-recorded dialogue and other audio. Within a multiplayer game session, the players control their characters like puppets, moving them around, firing weapons, and performing other actions as dictated by the script, in synchronization with the episode’s dialogue, which is recorded ahead of time. The “camera” is simply another player, whose first-person perspective is recorded raw to a computer. In late 2009, animator Monty Oum was hired to work on pre-rendered character animations, achieving action scenes and character movements in Red vs. Blue that could not be done using the Halo engine alone. Season eight of Red vs. Blue was the first season of the series to make extensive use of this animation, and the blend soon grew into fully animated sequences with no machinima elements.

The series quickly achieved significant popularity following its internet premiere on April 1, 2003, receiving 20,000 downloads in a single day. Red vs. Blue has been well received within the machinima community as well as among film critics. Praised for its originality, the series has won four awards at film festivals held by the Academy of Machinima Arts & Sciences. It has been credited with bringing new popularity to machinima, helping it gain more mainstream exposure and attracting more people to the art form. Red vs. Blue has been cited as the most successful example of the trend toward serial distribution, a format that allows for gradual improvement in response to viewer feedback and gives viewers a reason to return for future videos. This model played a huge part in the series’ success: people knew Monday nights as Red vs. Blue night, when a new episode was released.

The object that I will be projecting on is my family’s white Mercedes-Benz. I want my piece to showcase the evolution of transportation throughout history. I plan to create a hand-drawn animation of a boy travelling along the different lines and crevices that define the shape of the car. The method of transport evolves as the boy continues his long journey around the car, starting with the earliest forms of conveyance, like crawling and walking, and gradually building towards the technological creations that helped mobilize our race, like bicycles and trains. The different aspects of the car will represent different terrains and obstacles the boy will face. For example, the shape of the wheel is a big hill which he must climb, and the windows represent water he must cross. The animation will close with the boy reaching his final destination at the Mercedes-Benz logo at the very front, ending as the actual car starts and the lights light up. This idea was inspired by my life as a commuter living uptown and studying downtown, and the number of different methods of travel I have experienced over the past years. This piece will be video-documented and shown in class. The image is just a rough sketch and is subject to change upon more research and brainstorming.

~~~APPROACH~~~
Since I am new to Processing and not yet comfortable enough to swim in deep waters, I decided not to attempt anything too complicated for me to handle. I knew I wanted to film the footage straight on, so I didn’t have much opportunity to play around with skewing the proportions of a video to fit a certain space. However, surface mapping really helped me adjust the size of the videos to fit the background of each scene.
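A minimal sketch of this kind of surface mapping, assuming the Keystone library for Processing; scene1.mov and the surface dimensions are hypothetical. The video is resized into an offscreen buffer to fit the scene, then rendered through a corner-pinned surface:

    import deadpixel.keystone.*;
    import processing.video.*;

    Keystone ks;
    CornerPinSurface surface;
    PGraphics offscreen;
    Movie clip;

    void setup() {
      size(800, 600, P3D);
      ks = new Keystone(this);
      surface = ks.createCornerPinSurface(400, 300, 20);  // mappable quad
      offscreen = createGraphics(400, 300, P3D);
      clip = new Movie(this, "scene1.mov");               // hypothetical clip
      clip.loop();
    }

    void movieEvent(Movie m) {
      m.read();
    }

    void draw() {
      // Resize the video into the buffer so it fits the scene's background.
      offscreen.beginDraw();
      offscreen.image(clip, 0, 0, offscreen.width, offscreen.height);
      offscreen.endDraw();

      background(0);
      surface.render(offscreen);  // draw the buffer through the pinned corners
    }

    void keyPressed() {
      if (key == 'c') ks.toggleCalibration();  // drag corners onto the object
      if (key == 's') ks.save();               // remember the calibration
    }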



Karolina Sobecka is a Polish artist who works with animation, design, interactivity, computer games and other media. Her work often engages public space and explores ways in which we interact with the world we create. Sniff is an interactive projection in a storefront window. When motion is detected along the sidewalk in front of the display, a virtual dog appears and responds to the person’s behavior and gestures. The passerby’s movements are tracked by a computer vision system, and the dog behaves differently depending on how he is engaged. As with a real canine, big, swift actions are interpreted as threatening, while slow, gentle actions directed at him are interpreted as friendly. He tracks and remembers the attitude of each viewer and forms a relationship with them over time based on the history of interaction. Depending on the nature of the relationship, he may bark, growl, roll over or even play fetch.

The installation is created with the Unity3d game engine, which renders the dog and changes its behavior based on tracking data. Infrared-sensitive cameras detect the movements of passers-by in front of the display window. Sniff explores the engagement between two different planes of understanding and the relationships that emerge. The experience is familiar yet strange, leading us to re-examine notions we take for granted. The dog’s behavior represents the processes of assessment, evaluation and testing that are performed every time anything new enters our lives.
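As a toy Processing illustration of that assessment logic (Sobecka’s actual installation runs in Unity3d), a running trust score can fall with fast movement and recover with slow movement, with the accumulated score selecting the dog’s behavior; all thresholds here are invented:

    float trust = 0;                   // running relationship score
    PVector lastPos = new PVector();

    void setup() {
      size(640, 480);
      fill(255);
    }

    void draw() {
      background(0);
      // The mouse stands in for the tracked position of a passer-by.
      PVector pos = new PVector(mouseX, mouseY);
      float speed = PVector.dist(pos, lastPos);  // pixels moved this frame
      lastPos = pos;

      if (speed > 30)     trust -= 1.0;   // big swift actions read as threats
      else if (speed > 0) trust += 0.1;   // slow approaches rebuild trust
      trust = constrain(trust, -100, 100);

      String behavior;
      if      (trust < -30) behavior = "growl";
      else if (trust <  30) behavior = "watch warily";
      else                  behavior = "play fetch";
      text(behavior, 20, 20);
    }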


Philippe Blanchard is a Canadian artist, animator, teacher and curator, widely known for his unique fusion of animation, installation, light shows, drawing, painting and printmaking. In Time Tunnel, Blanchard created an experience that transcends time and space: a kinaesthetic experience in a parallel universe, inspired by light shows, rock concert visuals and raves, while fusing popular notions of human prehistory, psychedelia and early-60s Happenings. Nothing is really moving, but it appears to be. Overhead projectors were upgraded and programmed to emit sequenced projections. With eight RGB strobe lights projected at the graphically complex patterns, the designs undulate and appear to come alive. According to Blanchard, the designs are drawn from images of cables and wires; they form seamless pathways, connecting back on and over each other. The lights were designed to detect the tempo of the sound or music and change the speed of the light sequence according to the beat. The overhead projectors, strobe lights, and wall and floor designs all appear to communicate with each other, producing an effect that is otherworldly.

Time Tunnel – An Animated Light Show by Philippe Blanchard from Philippe Blanchard on Vimeo.
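A rough sketch of that beat-driven sequencing, assuming Processing’s Minim library for onset detection; Blanchard’s actual hardware rig is not documented here, so the eight-strobe sequence is reduced to cycling a few colors on each detected beat:

    import ddf.minim.*;
    import ddf.minim.analysis.*;

    Minim minim;
    AudioInput in;
    BeatDetect beat;
    int step = 0;   // which strobe in the sequence is currently lit
    color[] strobes = { color(255, 0, 0), color(0, 255, 0), color(0, 0, 255) };

    void setup() {
      size(640, 480);
      minim = new Minim(this);
      in = minim.getLineIn();
      beat = new BeatDetect();      // energy-based onset detection
    }

    void draw() {
      beat.detect(in.mix);
      if (beat.isOnset()) {
        step = (step + 1) % strobes.length;  // advance the sequence on the beat
      }
      background(strobes[step]);             // flash the current strobe color
    }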

Botanicalls is an innovative system that opens a new channel of communication between nature and humans. Initially developed in 2006 by NYU graduates Rob Faludi, Kate Hartman, and Kati London, Botanicalls allows plants in need of water to phone their owners and ask for exactly what they need. Vice versa, owners can phone their plants to check their status, moisture levels, temperature, and botanical characteristics. This enhances the connection between people and plants, in an effort to promote successful inter-species understanding and remind us of the importance of natural life in this digital age. Updated to accommodate ever-changing modern technologies, Botanicalls now also allows plants to text, email or tweet their needs online.

Sensor probes placed deep in the soil measure the amount of moisture present. The readings are sent to a microcontroller built into the unit, which translates the data into information that can be sent over the internet through an embedded Ethernet connection. The information is then delivered to the owner’s phone, email or Twitter account. These visual and aural reminders help people who are unsure of their ability to effectively care for growing plants. Through this innovative technology, a plant need never die prematurely again.
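A hedged Processing mock-up of that flow (the real Botanicalls unit runs this kind of loop on its microcontroller); the thresholds and the readMoisture/notifyOwner stubs are illustrative assumptions, not the project’s code:

    int DRY_THRESHOLD   = 300;        // below this reading, ask for water
    int HAPPY_THRESHOLD = 500;        // above this, say thanks after a watering
    boolean askedForWater = false;    // latch so the plant doesn't nag

    void setup() {
      frameRate(1);                   // poll the "probe" once per second
    }

    void draw() {
      int moisture = readMoisture();
      if (moisture < DRY_THRESHOLD && !askedForWater) {
        notifyOwner("URGENT! Water me, please.");
        askedForWater = true;
      } else if (moisture > HAPPY_THRESHOLD && askedForWater) {
        notifyOwner("Thank you for watering me!");
        askedForWater = false;
      }
    }

    // Stand-in for the soil probe (a 0-1023 analog reading on real hardware).
    int readMoisture() {
      return int(random(0, 1024));
    }

    // Stand-in for the call, text, email or tweet the real unit sends.
    void notifyOwner(String msg) {
      println(msg);
    }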