Firstly, thanks tmp for the amazing contribution.
I am trying to get it to work with kinect 2 and as recommended I started with the pointcloud examples.
I am using the drivers from: kinect2-nodes
The ones built by noobusdeer (thanks btw!) together with the latest Kinect drivers.
First issue: there are two RGB-depth Kinect2 nodes:
RGBDepth and DepthRGB
If I connect the depth, the RGB and the RGBDepth (or the DepthRGB) to the pointcloud, nothing is generated.
If I connect the depth, the RGB and the DepthRGB (without the RGBDepth being connected to the Kinect Runtime), I can see the pointcloud.
My question is: what does the RGBDepth node do, and am I missing something in the pointcloud, since it is essentially not using it?
first of all, you should use the RGBDepth node (by sebl).
it delivers a texture that contains the per-pixel offsets from the depth image to the RGB image (the texture coordinates in your depth frame are not the same as in your RGB image, because two different cameras are used for depth & RGB)
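to illustrate the idea (not the actual node's implementation): a minimal numpy sketch of how such an offset texture could be used to fetch the matching RGB color for each depth pixel. the function name, the shapes, and the assumption that the offsets are integer pixel deltas are all hypothetical; the real node may well store normalized UV offsets and sample them on the GPU instead.

```python
import numpy as np

def colorize_depth(depth, rgb, offsets):
    """Sample `rgb` for each depth pixel using a per-pixel (dx, dy) offset map.

    depth   : (H, W) depth values
    rgb     : (Hc, Wc, 3) color image from the second camera
    offsets : (H, W, 2) integer offsets from depth coords to rgb coords
              (hypothetical layout, for illustration only)
    """
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # target coordinates in the RGB image, clamped to its bounds
    cx = np.clip(xs + offsets[..., 0], 0, rgb.shape[1] - 1)
    cy = np.clip(ys + offsets[..., 1], 0, rgb.shape[0] - 1)
    # advanced indexing: one RGB triple per depth pixel
    return rgb[cy, cx]

# tiny example: a 2x2 depth frame, a 2x4 rgb frame, every depth pixel
# shifted 2 pixels to the right in the rgb image
depth = np.ones((2, 2))
rgb = np.arange(2 * 4 * 3).reshape(2, 4, 3)
offsets = np.zeros((2, 2, 2), dtype=int)
offsets[..., 0] = 2
colors = colorize_depth(depth, rgb, offsets)
print(colors.shape)  # (2, 2, 3)
```

the point is only that without this lookup, depth pixel (x, y) would be colored with RGB pixel (x, y), which is wrong because the two cameras do not share an optical center.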
i cannot help you much at the moment, because i have no Kinect2 at hand right now. but did you make sure that the Enable Color pin on your Kinect2 node is set to 1?