Kinect and Moving Heads


I am planning to do a project with Moving Heads and a Kinect v2. Anybody here on the forum with experience in this field? … I would have a few questions if you don’t mind :)


are you interested in the kinect pointcloud analysis? or more in the dmx side?

(vl) courtesy of hayden 2018

if you don’t mind, please keep dialogue on the forum


Hi @velcrome,

thanks for your comment. The project is finished now and I’ll report my findings here over the course of the next few days. Sorry for the late reply; there was very little time and I decided to just try things out myself.

Back when I wrote this forum post I was not able to articulate all the questions I had in mind. In retrospect, we had to deal with the following technical challenges:

  1. How to control a Moving Head via DMX
  2. Are all Moving Heads controlled the same way
  3. What types of Moving Heads exist
  4. How to control more than one Moving Head with only one DMX interface
  5. How to calibrate several Moving Heads to each other
  6. How well the virtual 3D simulation translates to a real-world scenario
  7. How to deal with the pan/tilt and inertia restrictions of a Moving Head
  8. How to deal with a Moving Head’s inertia and sound synchronisation
  9. What are good ways to “sequence” the Moving Heads - no, we didn’t use the timeliner node
  10. How well the Kinect v2 works with haze (we were using a haze machine)
  11. … I probably forgot something
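For anyone landing here with the same questions (especially 1 and 4): a Moving Head is just a block of consecutive channels starting at its DMX start address, and several heads share one interface by occupying different addresses within the same 512-channel universe. A minimal Python sketch of that idea (the helper names are hypothetical, not code from this project):

```python
def make_universe():
    """One DMX universe: 512 channels, each holding a value 0-255."""
    return bytearray(512)

def patch_fixture(universe, start_address, values):
    """Write a fixture's channel values at its start address (1-based,
    as shown on the fixture's own display). Several heads share one
    interface by each getting a different start address in the universe."""
    universe[start_address - 1 : start_address - 1 + len(values)] = bytes(values)

uni = make_universe()
patch_fixture(uni, 1, [255, 128, 0, 64])  # head A: channels 1-4
patch_fixture(uni, 5, [10, 20, 30, 40])   # head B: channels 5-8
```

Which channel means what (pan, tilt, dimmer, gobo, …) differs per fixture model and mode, which is exactly why question 2 matters.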

I will try to answer these questions here over the course of next week.



By the way, the project was done in vvvv, no vl inside yet ;)

@domj this looks really promising, and if it had been available to me when we started I would probably have used vl for the whole thing!

Awesome Project!! A blog post on how you solved these issues etc would be great!
Really like these behind-the-scenes WIP & workflow posts!
BIG +1

@gegenlicht … yes, that’s what I have planned. Just need a few days to recover ;)


@m9dfukc Thanks, I’m glad to hear it could be of use. However, at that time it was in the womb; now it’s still wearing diapers, but walking :D

But it could already help with a number of your points, albeit with some slight caveats atm.

For 1-4: In bHiVE (working title) you can quickly add Fixtures (or possibly other devices) in a similar way to adding nodes in v4 (double click, select), although in 3d space. Then you can set common values such as Channel straight from this view.

Each fixture then has its own patch, which can be edited on the fly, assigned from a bank, etc. The nodes in this patch fill various named fields in an internal, extensible datatype: common fields such as Colors, Rotation, Gobo, etc. The nodes can also read the fixture’s setup and global context and change their output depending on that. This means you could assign the same patch to all fixtures, say some 3D point-follow functionality, and each light would set its Rotation value based on its own location.
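The point-follow idea above can be sketched in a few lines of Python (a hypothetical illustration, not bHiVE’s actual datatype or API): the same "patch" function runs for every fixture, but the result differs because it reads the fixture’s own location.

```python
import math
from dataclasses import dataclass, field

@dataclass
class Fixture:
    """Hypothetical fixture: a position plus a dict of named output fields."""
    position: tuple                       # (x, y, z) in world space
    fields: dict = field(default_factory=dict)

def point_follow(fixture, target):
    """Fill the fixture's Rotation field so it aims at a shared 3D target.

    Pan is the angle around the vertical (y) axis, tilt the elevation.
    The same patch yields a different Rotation per fixture, since the
    result depends on the fixture's own location."""
    dx = target[0] - fixture.position[0]
    dy = target[1] - fixture.position[1]
    dz = target[2] - fixture.position[2]
    pan = math.degrees(math.atan2(dx, dz))
    tilt = math.degrees(math.atan2(dy, math.hypot(dx, dz)))
    fixture.fields["Rotation"] = (pan, tilt)
    return fixture.fields["Rotation"]

# Two fixtures running the same "patch" aim at the same point differently:
left = Fixture(position=(-2.0, 3.0, 0.0))
right = Fixture(position=(2.0, 3.0, 0.0))
target = (0.0, 0.0, 4.0)
point_follow(left, target)
point_follow(right, target)
```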

See this demo from a while back that had that sort of thing running, the cones represent moving heads (although no momentum or limits are taken into account in the visualisation).

Then the output of the patch is fed into an Output layer, for instance ArtNet, which translates the internal datatype into DMX values and puts them on the right channels based on the fixture’s setup. It’s up to this layer to deal with mapping XYZ rotation into pan/tilt values (16-bit, of course), matching colors if only a color wheel is available, etc. At the moment each fixture model has to be implemented as a VL patch. This might be good for some very complex outputs, but eventually I would like to include a fixture editor and GDTF support.
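The 16-bit pan/tilt mapping in such an output layer boils down to scaling an angle across the head’s mechanical range and splitting the result into a coarse and a fine byte on two consecutive channels. A hedged sketch (function name and ranges are made up for illustration):

```python
def angle_to_dmx16(angle, min_angle, max_angle):
    """Map an angle within the fixture's mechanical range to a 16-bit
    DMX value, split into coarse and fine bytes (two channels)."""
    # clamp to the head's mechanical limits
    angle = max(min_angle, min(max_angle, angle))
    value = round((angle - min_angle) / (max_angle - min_angle) * 65535)
    return value >> 8, value & 0xFF  # (coarse byte, fine byte)

# e.g. a head with a 540-degree pan range, aimed at its centre position
pan_coarse, pan_fine = angle_to_dmx16(0.0, -270.0, 270.0)
```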

For 5: For now the targeting is based solely on the set locations, but I used some bilinear-interpolation-based calibration to calibrate a few moving heads together on a previous project, and I would like to eventually build calibration grids into the software; I just need to figure out how to integrate them well into the UI.
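For readers unfamiliar with that technique: you measure the pan/tilt values that actually hit each corner of a grid cell, then blend between those four measurements for any target inside the cell. A minimal sketch of the interpolation step (hypothetical names, not the previous project’s code):

```python
def bilinear(x, y, corners):
    """Interpolate measured (pan, tilt) pairs across one grid cell.

    `corners` holds the calibrated values at the four corners
    (c00, c10, c01, c11); x and y are the target's normalised [0,1]
    position inside the cell."""
    c00, c10, c01, c11 = corners
    def lerp(a, b, t):
        return tuple(av + (bv - av) * t for av, bv in zip(a, b))
    bottom = lerp(c00, c10, x)   # blend along the lower edge
    top = lerp(c01, c11, x)      # blend along the upper edge
    return lerp(bottom, top, y)  # blend between the two edges
```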

For 9: It’s possible and fairly straightforward to automate the assigning of patches to fixtures from a Director patch, based on controller input, time, audio analysis, etc. Randomness is possible. There’s also transition support: for now only Fade is implemented (basically a lerp between all defined values), but it’s pretty simple to implement more interesting transitions, one-offs, etc.
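The “lerp between all defined values” idea is simple enough to sketch in Python (field names are hypothetical; this is the concept, not bHiVE’s implementation):

```python
def fade(old_fields, new_fields, t):
    """Blend two fixture states at time t in [0, 1].

    Numeric fields defined in both states are linearly interpolated;
    anything else (or fields only present in the target) snaps to the
    new value immediately."""
    out = {}
    for name, new in new_fields.items():
        old = old_fields.get(name)
        if isinstance(new, (int, float)) and isinstance(old, (int, float)):
            out[name] = old + (new - old) * t
        else:
            out[name] = new
    return out

# a quarter of the way through a fade-in; the Gobo change snaps
state = fade({"Dimmer": 0.0}, {"Dimmer": 1.0, "Gobo": 3}, 0.25)
```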

I’m trying to build the whole system to be quite extensible, but I’m also fighting some VL quirks that make it a bit more complicated than I’d like. Hopefully it’ll all get figured out eventually.

There’s certainly more to say about it, but I’ve probably already gone a bit too deep in this thread; hope I’m not hijacking it too much.

Before too long I’ll try to release a video documenting the current state and make a separate post for it, but if you’re (or anyone else is) interested in something now, hit me up on Matrix (same username). I also plan to get a preview release out ~June.

Lastly, I’d like to say that I’ve seen very few Kinect installations as visually clean and polished as this one - great work! And congratulations on pushing through!

Also looking forward to hearing how you solved the challenges.


glad good old dmx worked out for all of us, and auto-follow moving heads is a thing now. btw thanks for granting the grand record for “Briefest Solution” ever on these forums, hehe

will watch out for vids, wip or contrib from both of you for sure


@m9dfukc Hi, posted some more info about the software in the WIP section