Kinect 2 Tracking to Control Camera View

How would you use Kinect 2 data to control the camera to create the illusion of a 3D view? Similar to the classic Johnny Lee Wii example (weird link since I’m too new to post direct links - youtube / watch?v=Jd3-eiid-Uw?t=165) that has been replicated in a lot of other software.

Using the “3D basics & building interaction” workshop from NODE17, I learned the basics of using the Kinect2 node to identify joints and display coordinate data, but I wasn’t able to find a way to use that data to control the camera view.

I would also welcome any other good vvvv + Kinect tutorials. In the meantime I’ll be looking at all of the other NODE17 materials.

Thanks!

It’s basically two steps: match the vvvv/Kinect space with the physical world space, then use PerspectiveLookAtRect connected to the head position to generate your camera for the renderer. Of course, the details will take some time…
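For illustration only, here is a rough numpy sketch of what that first step means conceptually (vvvv is node-based, so this is not vvvv code): take the head joint the Kinect2 node outputs in the sensor’s camera space (metres) and transform it into a screen-centred space using the measured offset and tilt of the Kinect relative to your screen. All values below are hypothetical measurements of one particular setup.

```python
# A minimal sketch (plain Python/numpy, outside vvvv) of step 1: bringing the
# Kinect's head-joint position into the same space as the virtual scene.
# The offset and tilt below are hypothetical -- measure your own setup, and
# depending on your handedness conventions you may also need to flip an axis.
import numpy as np

def rot_x(deg):
    """Rotation matrix about the X axis, e.g. for a pitched-down sensor."""
    a = np.radians(deg)
    return np.array([[1, 0, 0],
                     [0, np.cos(a), -np.sin(a)],
                     [0, np.sin(a),  np.cos(a)]])

# Measured by hand: Kinect sits 0.25 m above the screen centre and is
# tilted 10 degrees downward (hypothetical numbers).
kinect_offset = np.array([0.0, 0.25, 0.0])   # metres, relative to screen centre
kinect_tilt   = rot_x(-10.0)                 # sensor pitch

def head_in_screen_space(head_kinect):
    """Convert a head joint from Kinect camera space (metres) into the
    screen-centred world space used by the virtual scene."""
    return kinect_tilt @ np.asarray(head_kinect, dtype=float) + kinect_offset

# Example: head roughly 1.8 m in front of the sensor, at sensor height.
print(head_in_screen_space([0.0, 0.0, 1.8]))
```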


Thanks for the reply!

Ah, if it were easy everyone would be doing it. I have a decent understanding of how the Kinect 2 gathers and outputs information, but I’m lost on the steps you’ve provided. I’m a beginner with only Wes’ tutorials and a few other long videos under my belt. Could you point me to any relevant learning materials?

Check this


Wow, thank you. Not sure how I missed this; I searched every form of head tracking, face tracking, etc.

Hi all,

Please note that @mediadog created a patch based on @u7angel’s “Track your Head like Johnny Lee” patch, using the Kinect. It uses the old Kinect v1 node, it’s not connected to anything, and I haven’t had time to try it with the Kinect2 node, but here it is: https://vvvv.org/contribution/track-your-head-wkinect.

Yes, it’s actually quite simple: just use PerspectiveLookAtRect, with the camera position input being the tracked head position, and the screen size and position (relative to the Kinect) for the rectangle. That’s all there is to it.

Oh, and build your environment at the same scale and relative to the Kinect as well.

I had to fool around a bit with the Near and Far settings, namely making the near relative to where the camera is.
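To make the maths behind this concrete, here is a rough numpy sketch (outside vvvv) of the standard generalized off-axis projection that a “look at rect” style camera boils down to: build view and projection matrices from the tracked eye position and the physical screen rectangle. This is not the exact internals of the PerspectiveLookAtRect node, just the usual textbook construction; the screen dimensions and head position are hypothetical.

```python
# A minimal sketch (numpy, outside vvvv) of an off-axis / "look at rect"
# projection: view + projection matrices from a tracked eye position and the
# corners of the physical screen, all in the same metric space.
import numpy as np

def look_at_rect(eye, pa, pb, pc, near=0.05, far=100.0):
    """eye: tracked head position; pa, pb, pc: lower-left, lower-right and
    upper-left corners of the screen rectangle."""
    eye, pa, pb, pc = (np.asarray(v, dtype=float) for v in (eye, pa, pb, pc))
    vr = (pb - pa) / np.linalg.norm(pb - pa)         # screen right axis
    vu = (pc - pa) / np.linalg.norm(pc - pa)         # screen up axis
    vn = np.cross(vr, vu); vn /= np.linalg.norm(vn)  # screen normal (towards eye)

    va, vb, vc = pa - eye, pb - eye, pc - eye
    d = -np.dot(va, vn)                              # eye-to-screen distance
    # Note: near is only the clipping plane; as mentioned above it can help
    # to tie it to d (e.g. near = 0.1 * d) so clipping follows the viewer.
    l = np.dot(vr, va) * near / d                    # off-axis frustum extents
    r = np.dot(vr, vb) * near / d
    b = np.dot(vu, va) * near / d
    t = np.dot(vu, vc) * near / d

    proj = np.array([[2*near/(r-l), 0,            (r+l)/(r-l),            0],
                     [0,            2*near/(t-b), (t+b)/(t-b),            0],
                     [0,            0,           -(far+near)/(far-near), -2*far*near/(far-near)],
                     [0,            0,           -1,                      0]])

    view = np.eye(4)
    view[:3, :3] = np.stack([vr, vu, vn])            # rotate screen basis onto axes
    view[:3, 3] = view[:3, :3] @ -eye                # then move the eye to the origin
    return view, proj

# Hypothetical 4 m x 2.5 m projection screen centred at the origin,
# head tracked 2 m in front of it and slightly off to the side.
view, proj = look_at_rect(eye=[0.4, 0.1, 2.0],
                          pa=[-2.0, -1.25, 0.0],
                          pb=[ 2.0, -1.25, 0.0],
                          pc=[-2.0,  1.25, 0.0])
```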

This works for large projection screens, and by doing this with more than one screen you can make a CAVE.

Hey @u7angel and @mediadog,

Thanks so much for the original patch and the updated Kinect patch, especially the notations for beginners. I was able to get the patch running with the Kinect v2. Really interesting to look at and explore.

I switched the accepted solution on my post to the one linking to mediadog’s updated patch, since it directly applies to Kinect v2. For anyone else just starting with vvvv and Kinect, I strongly suggest watching NODE17 “3d Basics & Building Interaction” P1.


@vvvvProj You’re welcome! And one more fun note: objects do not have to be “behind” the screen in 3D space for this to work - you can position objects in front of the screen/look-at rectangle (-Z space) and they will still be properly rendered. If you do this by rendering both eye views with a 3D projector or monitor, you can put objects out in front of you. I used this in a piece where you wave your hands and trails of bubbles come out of them; pretty fun effect.
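For the stereo variant, one way to think about it (again just a hedged numpy sketch, not vvvv code) is to derive two eye positions from the tracked head by offsetting half the interpupillary distance along the screen’s right axis, then build one look-at-rect camera per eye against the same screen rectangle. The IPD value and the assumption that the screen’s right axis is world +X are mine, not from the patch.

```python
# A minimal sketch of deriving left/right eye positions for stereo rendering
# from a single tracked head position. IPD and the screen-right axis are
# hypothetical assumptions for this example.
import numpy as np

IPD = 0.064                                   # ~64 mm, a typical adult eye spacing
screen_right = np.array([1.0, 0.0, 0.0])      # right axis of the screen rectangle

def eye_positions(head):
    head = np.asarray(head, dtype=float)
    offset = screen_right * IPD / 2.0
    return head - offset, head + offset       # left eye, right eye

left_eye, right_eye = eye_positions([0.4, 0.1, 2.0])
# Render the scene twice, once per eye position, and feed the two images to
# the 3D projector/monitor; objects in front of the screen plane will then
# appear to float out towards the viewer.
```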

Damn, lots of stuff to try out. Thanks!
