Kinect provides no depth image

Hej hej,

I’m trying to get started again with vvvv and Kinect and still messing around. The RGB image and skeleton tracking still work, but when I open the help patch for the Kinect depth node, vvvv responds with a black or white render window. Troubleshooting with the documentation and forum has brought no results for me. At the moment I have no idea what the problem could be.

I tried to reinstall the latest Kinect SDK and Windows runtime, but nothing changed. vvvv is beta31.2 x86 and my system is a Core i7, 8 GB RAM, mobility GeForce 750M and Win 8.1. Is it possible that the Kinect has a problem with USB 3.0?

Edit: I saw the latest post from bbentley81, but it brings no solution for me.



if skeleton tracking works, that means that depth is working (as it depends on it). for more see my answer here:

yes, I have also tried both an HSCB node and a Levels node and I am still not seeing anything remotely like what is coming out of the MS SDK.

Freako, are you seeing the red IR light turn on when you enable depth? I am seeing mine. That means the camera is on.

I was also worried it could be my graphics card. mine is a Radeon, but you have a GeForce and are having the same problem, so at least we can rule that out.


tried this. The only result: if Levels is enabled, the renderer shows a white screen. I messed around with the settings and got nothing but a white screen. If I disable Levels, the renderer shows me a really dark surrounding, as you can see in the attached image. I also no longer have control of the angle motor. If I change the value, the Kinect only responds with a short stutter.

this is exactly what it is supposed to look like. a Levels node with the right settings can help you improve contrast if you need to see it better, but for further processing in a shader you don’t need that.

the image shows a valid depth image. what do you expect to get from the depth node… maybe you have a wrong idea about how the depth image should look?
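For anyone puzzled about what the Levels node conceptually does to the dark depth image: it is a linear remap of the raw depth range into something viewable. A rough Python sketch of that idea follows; the near/far values in millimetres are assumptions for illustration, not the actual vvvv node internals:

```python
# Sketch of a Levels-style linear remap of raw Kinect depth values (mm)
# into a 0..255 display range. in_min/in_max are assumed clipping
# distances; pixels outside the range are clamped.

def remap_depth(raw, in_min=400, in_max=4000):
    out = []
    for d in raw:
        if d <= in_min:
            out.append(0)           # nearer than in_min -> black
        elif d >= in_max:
            out.append(255)         # farther than in_max -> white
        else:
            # linear interpolation between the two clip distances
            out.append(int((d - in_min) * 255 / (in_max - in_min)))
    return out

print(remap_depth([0, 400, 2200, 4000, 8000]))  # -> [0, 0, 127, 255, 255]
```

Setting the input range too narrow is what pushes everything to white, which matches the all-white renderer described above.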

Ok, I am getting this same image, BUT when I enable the Levels node everything just goes white. I seem to remember a while ago I had to add a ChangeFormat node in there somewhere.

please advise??

ok, so I have managed to get something. it doesn’t really look like what I had before, but it’s something.

Will the unfiltered depth texture still work with Pipet?

kinectDepth.v4p (17.8 kB)

Will the unfiltered depth texture still work with Pipet?

Yes, it should (and it will be a bit more precise than with the old 8-bit depth image).

is that why the image is so dark now? it went from an 8-bit to a 32-bit image. if that is the case, is there something like an exposure node that will allow me to clamp the output to a new range? I feel like this solution is a little jammy.

not sure what 8-bit depth sebl is talking about. the depth has always been 16-bit iirc and has always looked like that. only the OpenNI depth has an option to switch to “viewable”, which internally scales up the values. the same thing you can do using the Levels node as mentioned above.
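Whatever the exact bit depth, the reason a raw depth texture looks near-black is the same: depth values in millimetres only occupy a small slice of a 16-bit texture’s value range. A minimal Python sketch of what a “viewable” scaling presumably does (the 4000 mm maximum range is an assumption, not the documented node behaviour):

```python
# Kinect depth values are roughly 400..4000 mm, which is only a few
# percent of the full 16-bit range (0..65535) -> displayed raw, they
# look almost black. A "viewable" switch scales them up.

def viewable(depth_mm, max_range=4000):
    # stretch the assumed working range over the full 16-bit range,
    # clamping anything beyond max_range
    return min(65535, depth_mm * 65535 // max_range)

raw = 2000                      # a point 2 m from the sensor
print(raw / 65535)              # raw brightness: ~0.03 -> near black
print(viewable(raw) / 65535)    # scaled brightness: ~0.5 -> mid grey
```

That also answers the Pipet question: the unscaled texture keeps the original millimetre values, which is exactly what you want for measurement, just not for looking at.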

Hmm, I guess I will have to work with what I have for now. I do miss the visually delicious version where people’s silhouettes would gradually fade to black the further they got away from the sensor. On a purely visual level it was quite pleasing to watch.


QuickDepth.v4p (4.6 kB)

I don’t know why, but the angle motor started working again. At the moment I’m playing around with the depth nodes and skeleton tracking and trying to figure out how I can lay a particle system or some primitives (e.g. cubes, quads, spheres) over the tracked persons. Furthermore, these primitives should change in shape and color with music and movement. The rest of the room should be “removed” or presented as statically as possible in some form. I don’t know if this is possible with the depth mode alone, but I hope to figure it out in the next days.


I am also doing something similar to what you are talking about, although right now I am just using the black-and-white image as a texture that gets modulated by music and sound.


some time ago I found a node / patch that only outputs the moving content, but it looks a bit messy. Unfortunately, I haven’t gotten around to testing whether it delivers the desired results. On the weekend I’ll have a look at whether the node / patch works that way and whether it’s possible to track some primitives with it.
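In case it helps, extracting “only the moving content” is essentially frame differencing: keep the pixels whose depth changed between two frames, blank the rest. A tiny Python sketch under made-up values (the actual patch presumably does this per-pixel on textures):

```python
# Frame differencing on two depth "frames" (flat lists of mm values).
# Pixels whose depth changed by more than a threshold are kept,
# static background is zeroed out. Threshold is an assumption.

def moving_mask(prev, curr, threshold=50):
    return [c if abs(c - p) > threshold else 0
            for p, c in zip(prev, curr)]

prev_frame = [1000, 1000, 2000, 3000]
curr_frame = [1000, 1200, 2010, 3000]    # second pixel moved ~20 cm
print(moving_mask(prev_frame, curr_frame))  # -> [0, 1200, 0, 0]
```

The “messy” look is typical of this approach: sensor noise near the threshold flickers in and out, which is usually tamed with a higher threshold or some temporal smoothing.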