Dick Van Dyke is many things to many people: husband to Laura Petrie, wisecracking doctor-detective, typecast father figure. For me, he will always be Bert, the one-man band, London street urchin, and jack-of-all-trades in Disney's Mary Poppins.

[youtube]http://www.youtube.com/watch?v=KKTknLD9eWw&feature=youtube_gdata_player[/youtube]

For my final ICM project, I want to electronically recreate the magic and whimsy of the one-man band. I will not use advanced technology like the Kinect; instead, I will use a basic DV camera to acquire an image and send it to my laptop, and a projector to display the processed* image and allow the subject to interact with the code.

Equipment:
one camera with adjustable exposure and focal length
a tripod
2 stands and a pole
green curtain
projector
one light source
*one laptop running Processing
speaker

Using a fixed camera on a tripod, I will acquire an image of a participant in front of a well-lit green screen. With proper distance, I hope to capture the full body of the subject. The subject will see themselves on screen with button and interface overlays created by Processing. I also hope to key out the green color to add elements of absurdity; with PImage, Processing allows for the introduction of alpha channels and chroma keying.

Without any cues, I hope the user will start to move on screen. There will be hot spots on the screen that trigger audio and visual events, built with the Minim (audio) and OpenCV (blob detection) libraries. I hope to complicate the action by creating a novel conditional system that produces unique sounds based on combinations of movements in the hot spots. That could create a larger range of instruments and visual events. I am interested in seeing how, without any instruction, people take to the interface and start creating music with their bodies. This is purposely crude: I do not want an exact, move-for-move sensitive system of sound creation. That may allow the user to shape how the instrument will sound and the music they can create.
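The chroma-key step described above boils down to simple pixel math. Here is a minimal sketch of that logic in plain Java (the language Processing is built on), operating on 32-bit ARGB values like those in a PImage's pixels[] array; the green-dominance margin and sample colors are illustrative guesses, not calibrated values from the project.

```java
public class ChromaKey {
    // Make a pixel fully transparent when green clearly dominates
    // red and blue, as it does on a well-lit green screen.
    static int keyOut(int argb, int margin) {
        int r = (argb >> 16) & 0xFF;
        int g = (argb >> 8) & 0xFF;
        int b = argb & 0xFF;
        if (g > r + margin && g > b + margin) {
            return argb & 0x00FFFFFF; // zero the alpha channel
        }
        return argb; // subject pixel: keep as-is
    }

    public static void main(String[] args) {
        int screenGreen = 0xFF20C030; // keyed out (alpha becomes 0)
        int skinTone    = 0xFFC08060; // kept opaque
        System.out.println(Integer.toHexString(keyOut(screenGreen, 40))); // 20c030
        System.out.println(Integer.toHexString(keyOut(skinTone, 40)));    // ffc08060
    }
}
```

In a real sketch, this test would run inside draw() over every entry of the PImage's pixels[] array, with updatePixels() called afterward.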
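The hot-spot combination idea could be structured roughly like this: screen regions become active when blob centroids (such as those OpenCV's blob detection reports) land inside them, and the *set* of active regions, not any single one, selects the sound. This is a sketch in plain Java; the hotspot names, coordinates, and sound labels are placeholders, and in the actual sketch each label would trigger a Minim sample instead of being returned as a string.

```java
import java.util.*;

public class HotspotBand {
    // A screen region that counts as "active" when motion lands inside it.
    static class Hotspot {
        final String name; final int x, y, w, h;
        Hotspot(String name, int x, int y, int w, int h) {
            this.name = name; this.x = x; this.y = y; this.w = w; this.h = h;
        }
        boolean contains(int px, int py) {
            return px >= x && px < x + w && py >= y && py < y + h;
        }
    }

    // Single hotspots and combinations of hotspots map to different
    // sound cues; the combination is its own instrument.
    static final Map<Set<String>, String> SOUNDS = Map.of(
        Set.of("left"), "drum",
        Set.of("right"), "cymbal",
        Set.of("left", "right"), "accordion");

    // Given blob centroids as {x, y} pairs, find the active hotspots
    // and return the sound that combination triggers.
    static String trigger(List<Hotspot> spots, List<int[]> blobs) {
        Set<String> active = new TreeSet<>();
        for (Hotspot s : spots)
            for (int[] b : blobs)
                if (s.contains(b[0], b[1])) active.add(s.name);
        return SOUNDS.getOrDefault(active, "silence");
    }

    public static void main(String[] args) {
        List<Hotspot> spots = List.of(
            new Hotspot("left", 0, 0, 100, 200),
            new Hotspot("right", 220, 0, 100, 200));
        // one hand in each hotspot: the combination sound fires
        System.out.println(trigger(spots,
            List.of(new int[]{50, 50}, new int[]{250, 100}))); // accordion
    }
}
```

Because unmapped combinations fall through to silence, the system stays deliberately crude: the user discovers which combinations do something rather than being told.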

Use of button control for Boyd the Game

I will be using the button-control layout I coded for the midterm to create different environments for the one-man band artist and other visual events. It will allow me on-the-fly control of the image and of what the viewer sees. Both the user (by their gestures) and I (by mouse) will be able to access the menu, but the user can only do it by performing a combination move. I can also now use keyboard button control to activate the events, coding for simple keyboard presses.

I have two reasons for wanting to create this piece. One, I want to see how users, with very little instruction, play within a system when they control the media being created. Two, I want to use this opportunity to show the hidden aspects of code. On the outside, the one-man band can seem like an interactive music machine, but it can also be a device that records your movements for later analysis. I am very curious about this separation between public face and internal motives that goes on with popular games and platforms like FarmVille and Facebook. At the end of each session, the user will be graded with a fake score about some kind of personal trait: sexy, smart, clever, cool, strong, etc. I would like to know if the user makes the mental leap that their actions within the game may have unintended consequences and interpretations of their experience.