Anyone know how to get the actual depth data from the kinect? All the plugins I’ve looked at here do a histogram and scale the data in each frame (AGC) so the pixel values have lost their absolute meanings.
Although the OpenNISkeleton source can be modified to not do the scaling, that code no longer works for me, and Kinect_OpenNI_Libary_1.0.zip contains no source.
Any ideas? Thanks!
From what I heard, the AGC comes from OpenNI and there is no switch to turn it off. One solution is to use the first Kinect plugin by vux, with its bugs and limits, but with a correct depth image.
Try to get in contact with phlegma or hierro for the plugin sources.
Until now there has been no way to switch the AGC on/off in the OpenNI library.
At least in the first plugin, the AGC is being done in the plugin. I was able to get it to work (it had a hard-coded path to the config file), and removed the AGC (histogram-based data scaling), and am now getting nice constant 11-bit depth data back (in millimeters).
But now the issue is how to handle the 11-bit data; I’m loading the 8 LSBs into the B channel, and the remaining 3 MSBs into the G, and adding them together outside the plugin using Pipet. I tried returning a 16bpp greyscale texture, only to find vvvv seems to always treat textures as 32bpp ARGB.
Is this true about vvvv and textures? I’m developing some other effects to do processing of the depth data, and would like to standardize on the data format so they all play together nicely. At a minimum I will probably shuffle the data so the 8 MSBs are in the B and the 3 LSBs are in the G; this makes the B channel directly usable as an image, with the G then available for extra resolution.
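For anyone following along, here’s a rough sketch of that repacking in NumPy (this is illustrative only, not the plugin’s actual code; function names are made up): the 8 MSBs of the 11-bit depth value go in B so it’s directly viewable, and the 3 LSBs go in G.

```python
import numpy as np

def pack_depth11(depth):
    """Split an 11-bit depth value across two 8-bit channels:
    B gets the 8 MSBs (viewable as a greyscale image), G gets the 3 LSBs."""
    depth = depth.astype(np.uint16) & 0x7FF   # keep 11 significant bits
    b = (depth >> 3).astype(np.uint8)         # 8 MSBs -> B channel
    g = (depth & 0x07).astype(np.uint8)       # 3 LSBs -> G channel
    return g, b

def unpack_depth11(g, b):
    """Recombine the two channels back into the original depth value."""
    return (b.astype(np.uint16) << 3) | (g.astype(np.uint16) & 0x07)
```

If the data really has 13 significant bits (see the correction below in the thread), the shift would be 5 instead of 3, with the bottom 5 bits in G.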
I’ll post the modified plugin as soon as i get it cleaned up a bit more today.
Correction, as the depth data being returned is already scaled by OpenNI into MM, there are 13 significant bits, not 11.
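Quick arithmetic on why it’s 13 bits: 11 bits only reach 2048 mm (about 2 m), while depths out to roughly 8 m in millimetres need 13 bits (2^13 = 8192). The 8 m figure is my rough assumption for the upper end of the sensor’s range:

```python
# 11 bits top out at ~2 m when values are in millimetres,
# so an assumed ~8 m working range needs 13 bits.
max_range_mm = 8000
bits_needed = max_range_mm.bit_length()
print(bits_needed)   # 13
```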
I think the plugin scales it to 8 bit, if I’m not mistaken… maybe try reproducing the technique for more bits?
I would like to see this happen as I too hate the histogram :P
OK, I posted something in contributions that does this in a sort of hacky way. But it appears to work - and my top three priorities for any project are (in order): 1) it works, 2) it’s maintainable, and 3) it’s elegant. 2 out of 3 ain’t bad!
I’m working on a couple of effects to process the data in this format and will post those soon; one to convert it to a familiar but non-AGC depth image, and another to slice the data by depth. I’m sure other fun stuff can be done with creative use of pixel and vertex shaders.
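To give an idea of what the depth-slicing effect does, here’s the core idea sketched in NumPy (the real version is a pixel shader; this function name is made up for illustration): keep only pixels whose depth falls inside a chosen band.

```python
import numpy as np

def slice_depth(depth_mm, near_mm, far_mm):
    """Mask of pixels whose depth lies in [near_mm, far_mm)."""
    return (depth_mm >= near_mm) & (depth_mm < far_mm)

# e.g. isolate everything between 0.5 m and 1.5 m:
# mask = slice_depth(depth, 500, 1500)
```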
Sure wish we could work with 16bpp greyscale textures…
mediadog, are you able to get the distance in mm from the Kinect to any point in the image? How accurate is it near the end of the working range (about 4 meters)? I understood it’s not a linear scale and the resolution decreases toward the end of the working distance. Is this right?
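That matches the usual triangulation model: depth comes from disparity, so a fixed disparity step maps to a depth step that grows with the square of the distance. Here’s a back-of-the-envelope check; the focal length, baseline, and disparity step are ballpark figures from published Kinect calibrations, not measured values:

```python
# Rough model of Kinect depth quantization: dz ~ z^2 / (f * b) * dd
f_px = 580.0        # IR camera focal length in pixels (assumed)
b_mm = 75.0         # projector-camera baseline in mm (assumed)
dd_px = 1.0 / 8.0   # disparity quantization step in pixels (assumed)

def depth_step(z_mm):
    """Approximate depth quantization step at distance z_mm."""
    return z_mm * z_mm / (f_px * b_mm) * dd_px

dz_1m = depth_step(1000.0)   # a few mm near 1 m
dz_4m = depth_step(4000.0)   # several cm near 4 m
```

So with these assumed numbers the step size is about 16x coarser at 4 m than at 1 m, which is why accuracy falls off near the end of the range.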