NUItrack port (urgent!)

[Edit: title of post changed from “Intel RealSense SDK port” to “NUItrack port” based on obsolescence of features pointed out by @motzi below]

Hi vvvolks,

For an upcoming project, we are urgently looking for a port of the Intel RealSense SDK to vvvv/VL.

We are working on a couple of installations involving body & face tracking, and while good ol’ Kinects would surely do the job, the tiny size of Intel’s cameras would help a lot on the integration side of things. This would also be a good opportunity to have a more conveniently available and supported hardware/software combo for 3D tracking (until Microsoft’s Azure Kinect shows up and changes the game, possibly?).

I have not had the time/opportunity to dive into VL just yet, so maybe importing/wrapping the SDK .dll is not that big of a mountain to climb, but we’d rather delegate that to people more fluent in this workflow so we can focus on the other tasks involved.

I have seen threads and posts with first attempts at it, mostly focused on the depth image, but we need the higher level functions as well, especially the face tracking and body tracking.
I’ve found (unfortunately outdated) links from @microdee pointing to a “simplified pipeline” draft, so maybe this has even grown into something since then?
I even found this contribution on GitHub (straight from Japan!) that claims RGB, depth, hand/face tracking, speech recognition, etc., but I can’t get it to work (I’ve sent an email to the dev),
and I’ve seen users like @Aurel suggest a community joint effort a couple of times to make this proper port happen.

So let’s do it?
We are willing to fund this (in part or in whole, depending on the price tag) and would release it back to the community, as this should make life easier for a lot of us.
Of course anyone willing to jump in and participate in the effort is always welcome!

This is pretty urgent, as the project is about to get confirmed in the next few days, and production will have to get on solid tracks (including work-in-progress reports) almost right away.
The event is scheduled for the end of June, but functional prototypes must be delivered to the client in 2 to 3 weeks tops (May 27th); so while we can work on the content and other tasks in the meantime, we’d need the RealSense side of things within 2 weeks for proper integration…
In case we can’t make it in time with RealSense, we’ll fall back to Kinects.

Up for it? Hit me up!
Already done it? Hit me up!
Want to hit me up? Hit me up!
>> contact/at/ExperiensS/dot/com

Sidenote: another route would be to approach it NUItrack-style, like @schlonzo and @robotanton have been playing with here, which would have the advantage of being sensor-agnostic and might reduce the hassle of hardware-dependent efforts.
(btw, any performance/feature thoughts or experience regarding Nuitrack vs Intel’s SDK?)

And finally here are a bunch of Intel resources for your reading pleasure:

Dev documentation portal

official .NET wrapper

C# cookbook

Face and head tracking using the Intel RealSense SDK

Face Tracking tutorial

afaik Intel dropped the hand- and face-tracking capabilities with the version 2 release of the SDK. The last links and the PDF are therefore outdated; you’ll just get the RGB and depth images from the camera…

Looking at the meshed depth-cam feed, I can see why: it comes with 10 cm of noise across the board.

This cam is probably great for registering presence and quantifying it on a per-person scale, but it is far from the fidelity of the latest Kinect.

Seems to me Intel did a good job inventing a cool cam for obstacle avoidance, but it will not be able to “see” a dance in case the obstacle is actually human.


Oh wow, that’s a bad piece of news!
Indeed, the discontinuation of the “Perceptual Computing” side is confirmed by Intel here :(

Yeah, when digging through the internets there are a lot of comparisons of RealSense vs Kinect, and out of the box there’s no debate indeed… This creepy one is a good example, haha.
Though you often read haters hating on how RealSense is meant to be “developer-oriented” and that you just have to do the work the Kinect SDK is doing for you.
Intel even provides a post-processing good practice paper.
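For what it’s worth, the kind of post-processing usually recommended for depth streams boils down to something like this — a minimal numpy sketch of temporal smoothing with dropout handling, as an illustration only (not Intel’s actual pipeline, and the function name and alpha value are my own):

```python
import numpy as np

def smooth_depth(prev, curr, alpha=0.4):
    """Exponential temporal smoothing of a depth frame.

    Pixels reported as 0 (invalid) in the current frame are
    carried over from the previous smoothed frame instead of
    being blended, which fills short dropouts.
    """
    curr = curr.astype(np.float32)
    valid = curr > 0
    out = prev.copy()
    # Blend only where the new frame has a valid measurement.
    out[valid] = alpha * curr[valid] + (1 - alpha) * prev[valid]
    return out

# Toy 2x2 depth frames in millimetres, with one dropout (0):
prev = np.array([[1000.0, 1000.0], [1000.0, 1000.0]], dtype=np.float32)
curr = np.array([[1100.0, 0.0], [900.0, 1000.0]], dtype=np.float32)
print(smooth_depth(prev, curr, alpha=0.5))
```

Of course this only averages out jitter around a stable value; it won’t fix systematic distortion.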

So maybe this removal of the tracking features is an emphasis on how this is dev hardware, providing raw streams only that have to be cleaned up and processed by middleware or the app itself…
I really don’t know if the hardware itself is that bad compared with the Kinect; the fact that it combines hybrid stereo/IR tracking sounds like the best of both worlds (on paper…).

To me it almost sounds like the best argument for going the NUItrack route?
Even Orbbec provides body tracking but no face tracking, for instance.
A NUItrack port would allow consistent features and patching across all supported hardware.
(NUItrack seems to give pretty good skeletal results even on a dirty RealSense depth stream)

NUItrack Pro’s licence is quite affordable on top of that.
They announced new licensing at $39/year that is sensor- and OS-independent.
And the free version seems to only have a time limit, so it’s still easy to prototype with.

So anyone’s up for a NUItrack port?
(@robotanton, @schlonzo, @neuston, did you go any further on this since this thread?)
Changing the title of the topic to NUItrack to reflect the new turn.


We just experimented with the Orbbec and RealSense.
ATM I can only recommend Kinect v2 for production. There are still a lot of sensors out there.
The noise of the RealSense is really bad. Not hating, just sayin’.

Did you try further post-processing though? Not hating, just asking ;P

And did you get NUItrack running for all those Kinect/Orbbec/RealSense experiments,
or did you stick to Kinect and its dedicated nodes just based on the depth-image quality?


The noise of the RealSense is not glitching around a center, but rather creates different curves depending on the viewing angle, so I guess there is no point in filtering the signal. Intel themselves tried and have a built-in option for filtering; not very successfully.

And we never tried out NUItrack, I just stumbled upon it. So sorry, can’t help you there.

I’d just buy some used Kinect2s and use the proven vvvv nodes. Everything else is apparently lots of work and has worse functionality.

We are grabbing some more Kinect2s as plan B for sure, but we’d still like to jump on this opportunity to go the NUItrack route for future-proofing :)

Anyone up for the (paid) VL port is still welcome;
otherwise we’ll probably whip up a quick OSC or ZMQ server from their C++ examples.
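In case anyone wants to go that interim route: an OSC packet is simple enough to hand-roll — NUL-padded address and type-tag strings, then big-endian float32s — so the C++ example app only needs a tiny encoder plus a UDP send. A Python sketch of the wire format, just to show the idea (the /skeleton/0/head address pattern and the port are made up, not a NUItrack convention):

```python
import socket
import struct

def osc_pad(s: bytes) -> bytes:
    """Pad an OSC string with NULs to the next multiple of 4 bytes."""
    return s + b"\x00" * (4 - len(s) % 4)

def osc_message(address: str, *floats: float) -> bytes:
    """Build an OSC message carrying only float32 arguments."""
    msg = osc_pad(address.encode("ascii"))
    # Type-tag string: a comma followed by one 'f' per float argument.
    msg += osc_pad(("," + "f" * len(floats)).encode("ascii"))
    for f in floats:
        msg += struct.pack(">f", f)  # big-endian float32
    return msg

# e.g. one joint position (x, y, z in metres) per frame:
packet = osc_message("/skeleton/0/head", 0.1, 1.5, 2.0)
# socket.socket(socket.AF_INET, socket.SOCK_DGRAM).sendto(packet, ("127.0.0.1", 4444))
```

On the vvvv side the stock OSC nodes should then pick the values up directly.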

Thank you all again for your feedback!