Hello.
I need to stream Kinect 2 depth data from one PC to another over Wi-Fi, where it will be processed by installation software (probably done in vvvv).
One of the goals is the smallest possible latency.
I tried to find a solution yesterday, and the problem seems to be harder than I expected.
Can anyone advise me on this problem? What should I try first?
This looks like a possible solution (openFrameworks):
The problem with streaming the depth image is that lossy compression will severely compromise the depth data. On the sending side I convert the depth image to a point cloud using a shader, using bounding boxes in the process to reduce the data to just the regions of interest, then send the data using simple sockets. I also convert the XYZ data from floats to integer millimeters to reduce the data going over the wire, and convert it back to floats in the vvvv receiver plugin.
This won’t work, of course, if you really want the depth image rather than point-cloud data, so it depends on what you’re doing on the receive side.
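For illustration, here is a rough C++ sketch of the packing step described above, assuming the point cloud has already been read back as float meters; the Point3f and Box types and the millimeter scale factor are my own stand-ins, not the actual plugin code. It just drops points outside a bounding box and stores each coordinate as a signed 16-bit millimeter value before the buffer goes out over the socket:

```cpp
// Minimal sketch of the "pack XYZ as integer millimeters" idea, assuming the
// point cloud is already available as float meters (e.g. from a shader readback).
// Not the poster's actual code; Point3f, Box and the ROI values are hypothetical.
#include <cstdint>
#include <vector>

struct Point3f { float x, y, z; };   // meters, camera space

// Axis-aligned bounding box used to keep only the region of interest.
struct Box { float minX, maxX, minY, maxY, minZ, maxZ; };

// Filter points to the ROI and pack each coordinate as a signed 16-bit
// millimeter value: 6 bytes per point instead of 12, halving the payload.
std::vector<uint8_t> packPointCloud(const std::vector<Point3f>& cloud, const Box& roi)
{
    std::vector<uint8_t> payload;
    payload.reserve(cloud.size() * 6);
    for (const Point3f& p : cloud) {
        if (p.x < roi.minX || p.x > roi.maxX ||
            p.y < roi.minY || p.y > roi.maxY ||
            p.z < roi.minZ || p.z > roi.maxZ)
            continue;                                // outside the ROI, drop it
        int16_t mm[3] = {
            static_cast<int16_t>(p.x * 1000.0f),     // meters -> millimeters
            static_cast<int16_t>(p.y * 1000.0f),
            static_cast<int16_t>(p.z * 1000.0f)
        };
        const uint8_t* bytes = reinterpret_cast<const uint8_t*>(mm);
        payload.insert(payload.end(), bytes, bytes + sizeof(mm));
    }
    return payload;   // hand this buffer to whatever socket send you use
}

// On the receive side (e.g. the vvvv plugin) the inverse is just mm / 1000.0f.
```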
What do you mean by point cloud? Do you mean you threshold it and output only the “white” pixels into a buffer, so you only need to transfer part of the data?
Here are two patches showing streaming over TCP and locally via Shared Memory. These are girlpowers from the upcoming release.
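In case it helps to see what the TCP variant has to do on the wire, here is a bare-bones C++ sketch (assuming plain POSIX sockets; the girlpower patches themselves use vvvv's own TCP nodes, so this only illustrates the framing idea). A raw Kinect 2 depth frame is 512 × 424 pixels at 16 bit ≈ 434 KB, i.e. roughly 13 MB/s at 30 fps, which is why the ROI and millimeter-packing tricks above matter over Wi-Fi:

```cpp
// Illustrative length-prefixed frame streaming over TCP, assuming POSIX
// sockets. Hypothetical helper, not the girlpower patch implementation.
#include <arpa/inet.h>
#include <cstdint>
#include <sys/socket.h>
#include <vector>

// Send one depth frame, prefixed with its byte length so the receiver can
// split the continuous TCP byte stream back into frames.
bool sendDepthFrame(int sock, const std::vector<uint16_t>& depth)
{
    uint32_t byteLen = htonl(static_cast<uint32_t>(depth.size() * sizeof(uint16_t)));
    if (send(sock, &byteLen, sizeof(byteLen), 0) != static_cast<ssize_t>(sizeof(byteLen)))
        return false;
    const char* data = reinterpret_cast<const char*>(depth.data());
    size_t remaining = depth.size() * sizeof(uint16_t);
    while (remaining > 0) {                       // send() may write partially
        ssize_t n = send(sock, data, remaining, 0);
        if (n <= 0) return false;
        data += n;
        remaining -= static_cast<size_t>(n);
    }
    return true;
}
```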
As far as I know, the AsRaw (EX9.Texture) and the DynamicTexture (EX9.Texture Raw) will be ~30 and ~10 times faster, respectively. For now please use the current alpha.