December 22, 2014, 1:13pm
Firstly, thanks tmp for the amazing contribution.
I am trying to get it to work with the Kinect 2 and, as recommended, I started with the pointcloud examples.
I am using the drivers from:
The ones built by noobusdeer (thanks btw!) and the latest Kinect drivers.
First issue: there are two RGB-depth Kinect2 nodes:
RGBDepth and DepthRGB
If I connect the Depth, the RGB, and the RGBDepth (or the DepthRGB) to the pointcloud, I can see nothing being generated.
If I connect the Depth, the RGB, and the DepthRGB (without the DepthRGB being connected on the Kinect Runtime), I can see the pointcloud.
My question is: what does the RGBDepth do, and am I missing something in the pointcloud, since it is essentially not using it?
December 24, 2014, 1:04am
First of all, you should use the RGBDepth node (by sebl).
It delivers a texture that contains the offsets from the depth image to the RGB image (the texture coordinates in your depth frame are not the same as in your RGB image, because two different cameras are used for depth and RGB).
I cannot help you at the moment because I have no Kinect2 right now, but did you make sure that the Enable Color pin on your Kinect2 node is set to 1?
Yes, use sebl's node and, if I recall correctly, set Raw Data to 0. Then you have a UV map you can use to correctly sample the RGB.
//@help: template for texture fx
Texture2D texture2d : PREVIOUS;
Texture2D uvTex;

SamplerState linearSampler : IMMUTABLE
{
    Filter = MIN_MAG_MIP_LINEAR;
    AddressU = Wrap;
    AddressV = Wrap;
};

struct psInput
{
    float4 p : SV_Position;
    float2 uv : TEXCOORD0;
};

float4 PS(psInput input) : SV_Target
{
    // look up the RGB texture coordinate for this depth pixel,
    // then use it to sample the color texture
    float4 tUV = uvTex.Sample(linearSampler, input.uv);
    float4 c = texture2d.Sample(linearSampler, tUV.xy);
    return c;
}
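For clarity, the per-pixel remap the shader performs can be sketched on the CPU with NumPy. This is only an illustration of the idea, not the Kinect SDK or vvvv API; the array names, shapes, and nearest-neighbor lookup are assumptions.

```python
import numpy as np

def remap_rgb_to_depth(rgb, uvmap):
    """For every depth pixel, fetch the color at the UV coordinate
    stored in the offset map (nearest-neighbor, illustrative only).

    rgb:   H x W x 3 color image
    uvmap: h x w x 2 normalized (0..1) coordinates into the rgb image
    """
    H, W = rgb.shape[:2]
    # convert normalized UV coordinates to integer pixel indices
    x = np.clip((uvmap[..., 0] * (W - 1)).round().astype(int), 0, W - 1)
    y = np.clip((uvmap[..., 1] * (H - 1)).round().astype(int), 0, H - 1)
    # gather: one color value per depth pixel
    return rgb[y, x]
```

This is exactly what the pixel shader does per fragment: sample `uvTex` to get where to look in the color image, then sample the color texture there.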