Motion detector


I am trying to make a project with motion detection. I am not a programmer, so until now I have used a “ready-made” solution found in the last post here: ((forum:freeframe-vc9-c Now I have a 64-bit system, but the motio.dll from there only works with the 32-bit version of vvvv. It would also be important to localize the coordinates of the motion a bit more precisely: instead of the 8x8 grid in the given example, 16x16, 32x32 or even more. Can somebody point me in the right direction? Thanks!

Hey Anegroo

some more information on your project will help forum members give you some options with regard to motion tracking.

What motion are you tracking? What should the interaction do? Are there any hardware constraints such as lighting, expense, speed of motion, etc?

Hi gaz, thanks for your answer.

I put the link above because I thought it would help you understand what I want.

The actual project is a simple interactive projection screen. Visitors move in front of the screen, and their motion is captured from above by a webcam (with the IR filter removed). The different coordinates of detected motion start different events: something appears on the screen, or a sound starts. So it isn’t too complex, but I don’t know where to find a simple solution like the mentioned motio.dll. I would like to contact the author of the dll to ask him to convert it to 64 bit and add more resolution, but he is no longer active on the vvvv forum.

I tried the Contour node, but it is difficult to achieve results similar to the dll, and the memory usage is also many times higher.

Try the Asus Xtion RGBD cam and OpenNI to track your visitors: use the user node to obtain the centroid/center of mass of each person, and use those coordinates to trigger your events. It works in roughly a 4x3 meter area, so it depends on how big an area you want to track. Good luck
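To illustrate the centroid/center-of-mass idea in plain Python (not vvvv; the OpenNI user node already hands you this per person — the `blob` point list here is a made-up example):

```python
# Sketch: centroid (center of mass) of a tracked blob, i.e. the kind of
# per-person value the OpenNI user node outputs. `points` is a
# hypothetical list of (x, y) pixel positions belonging to one person.
def centroid(points):
    n = len(points)
    cx = sum(p[0] for p in points) / n
    cy = sum(p[1] for p in points) / n
    return (cx, cy)

# Example blob: three pixels of one tracked person
blob = [(10, 10), (12, 10), (11, 14)]
cx, cy = centroid(blob)
print(cx, cy)  # -> 11.0 and roughly 11.33
```

That single (x, y) pair per visitor is then all you need to compare against your trigger regions.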

yea, kinect seems easiest
with a webcam you can do a Pipet approach


Thank you zeos and antokhio. I tried the watchdog patch linked by antokhio. The webcam patch seems very close to what I want; the motion detection part works fine. But how can I get the coordinates of the changing tiles from the Pipet output? I would like to trigger different events based on the spatial position of the detected motion…

Use Trautner !

Hi io,

Thanks for your suggestion, but could you tell me how I can get the coordinates of the changing pixels from Trautner? I think that is not possible; this info is missing from its output. I tried the Contour node, which has outputs with the coordinates of the changing parts, but it uses a lot of memory, and I can’t get it to work directly for my purpose. Anything else?

Trautner needs an input mask as a BMP file. On that BMP you can paint up to 256 different areas, represented by shades of grey (beware of antialiasing).
Trautner will tell you how many pixels are changing in each of those areas; you probably just need to find the area where the most pixels are changing and get the position of that area in the spread. Have a look at the Sort and CDR nodes.
For example you could import a screenshot from your camera and paint the mask on top of it, or create a grid of areas from within vvvv itself.
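As a sketch of the mask arithmetic (plain Python, not a Trautner tool): a 16x16 grid gives exactly 256 cells, so each cell can take one of the 256 grey shades, and a grey value read back from Trautner’s spread maps straight to a grid position. Row-major ordering is an assumption here:

```python
# Sketch: map a 16x16 grid of Trautner areas onto the 256 available
# grey shades (one flat shade per cell, no antialiasing), and back.
# Assumes row-major ordering: grey = row * 16 + col.
COLS, ROWS = 16, 16

def cell_to_grey(col, row):
    return row * COLS + col           # 0..255

def grey_to_cell(grey):
    return (grey % COLS, grey // COLS)  # (col, row)

print(cell_to_grey(3, 2))   # -> 35
print(grey_to_cell(35))     # -> (3, 2)
```

So the index of the most-active slice in Trautner’s output spread (found e.g. via Sort) directly encodes which grid cell is moving.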

hi anegroo,

try out a tracking system for Kinect I made a while ago. It uses Kinect, so it is resistant to changing light conditions. It is basically a presence detector for persons/objects at certain locations, but you can easily turn it into a motion detector.

it’s easy to make grids, although 32x32 might get CPU-intensive.


let us know how that went! :)

hi id144,

Thanks for your patch, it is really great. I started playing with it and replaced the Kinect texture with the image captured by my webcam. The image appears at the top of the 3D space, and the particles are present too, but I don’t understand yet how the tracking engine works. I found the video of the working installation on the web, and I understand how it interacted with people. I want something similar, but with some visual interactivity too. It’s interesting that it was presented in Bucharest, because I also live in Romania :).

You need a Kinect to use id144’s patch; if you have a camera, either use Trautner or, as antokhio mentioned, Pipet.

of spreads and slices
read this also

OK, but I still don’t understand how to get the coordinates of the changing pixels. For example, in the attached patch Trautner does not output a slice of values from which I could get the coordinates. With Pipet I get the slice with the changing pixels, but I don’t know how to transform the slice index into coordinates. And as you can see, the position of the changing pixels doesn’t match the activated buttons (I know that Pipet handles the min-max values differently than the rest of the nodes). I am not an expert in vvvv… My concrete question is: how could I trigger, for example, 10 different events by defining 10 areas in this 16x16 grid?
I would really appreciate any help.

coordinates… (47.8 kB)
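The slice-index-to-coordinates step can be sketched in plain Python (not vvvv; the 16x16 sampling grid and the row-major slice order are assumptions about how the Pipet spread is laid out):

```python
# Sketch: take the index of the most-changed Pipet slice and turn it
# into grid coordinates, then into vvvv-style coordinates (-1..1, y up).
# Assumes a 16x16 sampling grid with row-major slice order.
COLS, ROWS = 16, 16

def slice_to_cell(i):
    return (i % COLS, i // COLS)       # (col, row), row 0 = top

def cell_to_vvvv(col, row):
    x = -1 + (col + 0.5) * 2 / COLS    # cell centre, -1..1
    y = 1 - (row + 0.5) * 2 / ROWS     # y flipped so top = +1
    return (x, y)

# Pretend per-slice change values, with slice 37 changing the most
diffs = [0.0] * (COLS * ROWS)
diffs[37] = 0.9
i = max(range(len(diffs)), key=lambda k: diffs[k])
col, row = slice_to_cell(i)
print(col, row)                  # -> 5 2
print(cell_to_vvvv(col, row))    # -> (-0.3125, 0.6875)
```

For the 10-areas question, the same idea applies: map each of the 256 cells (or the resulting x/y) to one of 10 regions and fire the event for whichever region the most-changed cell falls into.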

try this one

Coordinates…[Pipet or Trautner] (1).v4p (30.2 kB)

Yes, this is what I was searching for. Now I only have to make the triggering part. Thank you very much!