KINECT sensor for tracking people and manipulating shadows

hey folks!

the following is a short description of a student project, and I'd like to ask you folks for some hints on how to start and technically implement it. I'd be very happy if you do. THANKS :)

we are two students from germany and came up with a nice idea in a cooperation project between information scientists and architects.

SHADOW is what it’s all about.
we want to imitate and manipulate the shadows of people. to do so, we have a microsoft KINECT, which is supposed to track the people passing by.
a projector throws their computed shadows on the wall. the shadows can then be slower, smaller, time-shifted… whatever…

furthermore, we'd like to implement some additional objects people can interact with.
in detail, this means we'd like to put additional shadows on the wall which, unlike the people's shadows, have no real counterpart.

so what we basically have:
INPUT -> kinect signal
OUTPUT -> people’s shadows (from kinect input)
OUTPUT -> other shadows (from graphics: png, gif, eps, flash, … and videos: flash, mov, avi, …)

today i'll get the KINECT, and here's a list of questions that came up while thinking about the implementation:

  • how do I make the kinect input look nice and smooth?
    because object borders are quite rough, as far as I know.
    I will need to run some image processing on the input data.
  • which format does the kinect input actually have? video?
  • to manipulate the shadows, e.g. make them slower or time-shifted, I'll need to do image/video processing again.
    maybe I'll need to store the input data for a while and output the shadows a little later, so that they are not in real time.
  • how do I include additional objects, e.g. flash animations?
    to include other shadow objects in my output, I’ll need the possibility to handle and combine multiple input sources to one output stream.
  • what's the state of the art for interaction functionality?
    for interaction I’ll need collision detection and all those fancy techniques…
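whatever toolkit it ends up being, the "combine multiple input sources into one output stream" question boils down to per-pixel compositing. a minimal Python sketch of the idea (the frame size, the 0/255 mask convention, and the darkest-pixel rule are illustration assumptions, nothing toolkit-specific):

```python
# Sketch: combine several mask/graphic layers into one output frame.
# Each layer is a 2D list of brightness values (0 = black shadow, 255 = background).
# Taking the per-pixel minimum keeps the darkest layer, so every shadow stays visible.

def composite(layers, width, height):
    """Merge layers into one frame by keeping the darkest pixel of each layer."""
    out = [[255] * width for _ in range(height)]
    for layer in layers:
        for y in range(height):
            for x in range(width):
                out[y][x] = min(out[y][x], layer[y][x])
    return out

person = [[255, 0], [0, 255]]   # shadow from the kinect silhouette
drum   = [[255, 255], [0, 0]]   # extra shadow object loaded from a png
frame = composite([person, drum], 2, 2)
```

the same rule extends to any number of layers, so people's shadows and the counterpart-less extra shadows end up in one stream.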

any suggestions for LIBRARIES, FRAMEWORKS, PLUGINS or DRIVERS are very welcome!
this would help me most, because it gets me started, and time is short, unfortunately!

as candy for you folks, I promise to post a movie of the making-of and the exhibition it’s going to be set up for.

I hope you have something for us and thanks for sharing!

felix from dresden

hey, i'm working on something similar. if you have the right lighting you don't really need a kinect, but if you already ordered one i'm sure you won't regret it, it's a great piece of hardware for many projects

to use the kinect:

regarding your project, i guess you want it to be "casual", so people don't need to do the "pose". i would recommend getting used to the Pipet node and the Contour node, and, well, that's all you need to create the interactions

regarding the quality, i don't see a very good "workaround" for that. i guess a blur would be nice, plus a lot of contrast over the blurred video
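the blur-plus-contrast trick can be sketched outside vvvv too. a minimal Python version on one row of mask pixels, assuming a box blur followed by a hard threshold (the kernel radius and threshold are made-up values):

```python
# Sketch: smooth a rough silhouette edge by blurring, then re-hardening it
# with a threshold (a strong "contrast" step). Values are 0 (shadow) or 255.

def box_blur(row, radius=1):
    """Average each pixel with its neighbours (clamped at the edges)."""
    out = []
    for i in range(len(row)):
        lo, hi = max(0, i - radius), min(len(row), i + radius + 1)
        out.append(sum(row[lo:hi]) / (hi - lo))
    return out

def harden(row, threshold=128):
    """Push every pixel back to pure black or white."""
    return [0 if v < threshold else 255 for v in row]

ragged = [255, 255, 0, 255, 0, 0, 0, 255, 255]  # noisy edge pixels
smooth = harden(box_blur(ragged))
```

the blur averages lone noisy pixels away, and the threshold restores a crisp edge, which is exactly what "a lot of contrast over the blurred video" does.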

the kinect will give you a texture. if you remove the background (already integrated in the kinect node) you will get each person in a different color, so the pipet node can be used to evaluate a single pixel as a trigger.

you need to get familiar with vvvv. i think i only know 2% of this wonderful toolkit, and with time, well, i think there is no limit to what you can do.

you can add a delay to the video using a queue node. check the attachment, and good luck! (4.6 kB)

in the last release, which is great,
we miss the render background option!

there is one pin called background; if you turn it off, it leaves only the people without the background, and with render id you can choose to have them in different colors or in grayscale.

thanks for the valuable hints, vjc4!
i'll hang in there and try to get things running with v4!

the MS kinect SDK came out, so another player is in the game.

right now I'm still trying to figure out the best way to realize my project idea. because time is short, I don't want to end up going down the wrong path.
my other thoughts are:

1.) openFrameworks + Processing => an OF project with the ofxKinect addon (libfreenect drivers) and OSC (Open Sound Control) to communicate with Processing, i.e. to pass the kinect signal over to Processing. OSC is available as an addon for both OF and Processing.

2.) MS kinect SDK => too heavyweight in my opinion. I'll need results soon and I'm not too experienced in C++.

for now, it’s vvvv’s turn. so far :)

the kinect is running, and I can see myself in depth ;)

can I set a certain depth range, so that only objects at a distance of 1 to 2 metres get tracked?
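without claiming anything about the exact vvvv pins, the depth clipping itself is just a per-pixel range check on the depth map. a small Python sketch, assuming the depth values are in millimetres (as many kinect drivers report them):

```python
# Sketch: keep only pixels whose depth falls between 1 m and 2 m.
# Depth values are assumed to be in millimetres (an assumption about the driver).

NEAR, FAR = 1000, 2000  # tracking window in mm (1 to 2 metres)

def depth_mask(depth_row):
    """1 where a pixel lies inside the window, 0 elsewhere."""
    return [1 if NEAR <= d <= FAR else 0 for d in depth_row]

mask = depth_mask([800, 1200, 1999, 2500, 0])
```

anything nearer, farther, or invalid (0) drops out of the mask, so only people in the 1–2 m band cast a shadow.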

I also need objects to interact with, e.g. a black drum included from a png that makes a sound when a person's arm hits it. actually, it doesn't really have to be the arm; since I don't want to get too complicated and use skeletons, it might be enough to play a sound when some part of my silhouette hits the drum graphic.
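the drum idea works without skeleton tracking exactly as described: test whether any silhouette pixel falls inside the drum's area. a Python sketch of that overlap test (the rectangle coordinates and frame size are invented for illustration):

```python
# Sketch: trigger a sound when the person's silhouette overlaps the drum png.
# silhouette is a 2D list (1 = person pixel); the drum is a rectangle in
# frame coordinates (made-up values for illustration).

DRUM = (1, 3, 0, 1)  # x0, x1, y0, y1 of the drum's bounding box

def drum_hit(silhouette):
    """True if any silhouette pixel lies inside the drum rectangle."""
    x0, x1, y0, y1 = DRUM
    return any(silhouette[y][x]
               for y in range(y0, y1 + 1)
               for x in range(x0, x1 + 1))

empty = [[0, 0, 0, 0], [0, 0, 0, 0]]
arm   = [[0, 0, 1, 0], [0, 0, 1, 0]]  # arm reaching into the drum area
```

whenever `drum_hit` flips from False to True, play the sample; no skeleton needed.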

I'm still watching the video tutorials. I'm not too experienced yet, hence my questions.

thanks :)

remove the background.
Pipet node + HSL split (luma value) + Change = that's your trigger when something moves over that pixel
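that recipe (sample one pixel, reduce it to luma, fire on change) maps to a few lines of plain Python. a sketch, using the standard Rec. 601 luma weights and a guessed change threshold:

```python
# Sketch: a one-pixel trigger in the spirit of Pipet + HSL split + Change.
# Sample one pixel per frame, reduce it to a luma value, and fire
# whenever the luma jumps by more than a threshold.

THRESHOLD = 30          # minimum luma change to count as "something moved"
_last = [None]          # luma of the watched pixel in the previous frame

def luma(r, g, b):
    """Rec. 601 luma from an RGB pixel."""
    return 0.299 * r + 0.587 * g + 0.114 * b

def pixel_trigger(rgb):
    """True when the watched pixel's luma changed enough since the last frame."""
    current = luma(*rgb)
    fired = _last[0] is not None and abs(current - _last[0]) > THRESHOLD
    _last[0] = current
    return fired

events = [pixel_trigger(p) for p in [(0, 0, 0), (0, 0, 0), (200, 200, 200)]]
```

placing one such watched pixel inside each interactive object gives a cheap per-object trigger.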

hey vjc4!

i didn't find a way to remove the background with the kinect node as you mentioned. I just copied the nodes from the HeightField example to get a picture without the background, using the Terrain node from Terrain.fx.
it draws the person in black, just like I need :)

now the problem is that it only draws the person up to a distance of about 1 metre, nothing beyond that point.

but the texture straight from the kinect node shows more, of course. I can't figure out how to solve this or where to change Terrain.fx.

if you still feel like helping this noob, thanks a lot!

if you need to do something with the background, you can use this
