As far as I know, the DX11 Kinect node outputs its texture in a specific format (12-bit-something, if I remember right), so you might lose data unless you process it correctly.
Yeah, I ran into the same thing when I wrote a DX9 shader for this. Your sampler state causes the depth values to be interpolated, so pixels at edges next to black (no-data) areas get averaged down to lower values, which makes them look closer than they are. Change it to this:
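The original snippet isn't shown here, but in a DX9-style effect the fix boils down to switching the sampler from linear to point filtering. Roughly like this (the sampler and texture names are placeholders, not from the original patch):

```hlsl
sampler DepthSamp = sampler_state
{
    Texture = (DepthTex);  // your Kinect depth texture
    // POINT filtering returns the nearest texel instead of blending
    // neighbours, so valid depths never get averaged with black
    // (no-data) pixels along the edges
    MinFilter = POINT;
    MagFilter = POINT;
    MipFilter = NONE;
    AddressU  = CLAMP;
    AddressV  = CLAMP;
};
```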
Hmmm, your distances don't look right. If you want meters, take the depth pixel and do this:
float depth = pixel * 65.535; // normalized 16-bit sample: * 65535 gives millimetres, so * 65.535 gives meters
The rest of the formula you are using looks unfamiliar as well, since you are not taking the horizontal and vertical fields of view into account. Here's more or less how I do it after converting the depth pixel to meters:
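The actual code isn't included above, but the idea is a standard pinhole-style reprojection: offset the texture coordinate so the image centre is the origin, then scale by the tangent of half the field of view and by the depth. A rough sketch, assuming Kinect v1 depth FOVs of about 58.5° horizontal and 46.6° vertical (the function name and exact FOV values are my assumptions — check your device's specs):

```hlsl
#define FOV_H radians(58.5) // approximate horizontal depth FOV
#define FOV_V radians(46.6) // approximate vertical depth FOV

// uv: texture coordinate in [0,1]; depth: meters (pixel * 65.535)
float3 DepthToCamera(float2 uv, float depth)
{
    // move the origin to the image centre, then scale by the
    // half-extent of the view frustum at this depth
    float x = (uv.x - 0.5) * 2.0 * tan(FOV_H * 0.5) * depth;
    float y = (0.5 - uv.y) * 2.0 * tan(FOV_V * 0.5) * depth; // flipped: texture v grows downward
    return float3(x, y, depth);
}
```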
Hi, could you please explain briefly how this patch is meant to be used?
I am trying to get a point cloud going with the MS Kinect nodes and the new DX11 nodes, but I haven't fully understood what you are trying to accomplish with this patch.