I am a multimedia dev who is just taking his first steps with vvvv. I’ve got an idea for streaming 3D pointclouds over the network, using the DX11-pointcloud pack.
I read through some of the messages from intolight, the authors of the DX11 pack, who claim to have published, or to be working on, networking nodes that would allow sending and receiving pointcloud data. Although I found the repository, I can't seem to find the nodes.
What I would like to do is basically read out the depth + RGB data from a Kinect, “pack” it somehow (eventually adding LZ4 compression to minimize the data volume), send it over the wire, then receive, unpack, and display it.
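In rough Python, the pipeline I have in mind looks like this (zlib standing in for LZ4 here, just to illustrate the compress/decompress step — the real thing would use LZ4 bindings with a similar compress/decompress pair):

```python
import zlib  # stand-in for LZ4; lz4 bindings expose a similar compress()/decompress() pair

# A toy depth frame: 16-bit zeros at the Kinect2 depth resolution (512x424)
depth_frame = bytes(2 * 512 * 424)
wire = zlib.compress(depth_frame)              # "pack + compress" before sending
assert zlib.decompress(wire) == depth_frame    # "receive + unpack" on the other side
print(len(depth_frame), "->", len(wire))       # highly repetitive frames shrink dramatically
```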
I am using the DX11 and DX11-pointcloud packs. It is however unclear to me how to simply visualize the two or three streams from the Kinect2:
the RGB stream (in full HD resolution, 1920x1080)
the depth stream (16-bit depth data, usually displayed as a grayscale bitmap)
the combined RGB + depth 3D pointcloud (it has no defined datatype yet, as every point would carry 24-bit color + 16-bit depth data, thus yielding a 40-bit-wide data structure)
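To make the 40-bit layout concrete, a per-point record could be packed like this (illustrative Python, not vvvv code):

```python
def pack_point(r, g, b, depth):
    # 24-bit color in the high bits, 16-bit depth in the low bits -> 40 bits total
    return (r << 32) | (g << 24) | (b << 16) | depth

def unpack_point(v):
    return (v >> 32) & 0xFF, (v >> 24) & 0xFF, (v >> 16) & 0xFF, v & 0xFFFF

v = pack_point(10, 20, 30, 5000)
assert v.bit_length() <= 40                      # fits the 40-bit budget
assert unpack_point(v) == (10, 20, 30, 5000)     # lossless roundtrip
```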
My questions are:
how do I simply visualize the different streams coming from the camera without having to set up a renderer? Can't I just visualize the RGB video data as a texture bitmap somehow?
is there any compressor node available (similar to LZ4) that can compress pointcloud data in near-realtime? I am aware of an MJPEG node but have yet to look into it. I am basically looking for ANYTHING that can crunch the data down for me.
can one create executables (.EXE files) from vvvv patches, and if so, how? I am creating this setup as a prototype and would eventually like to distribute it to interested parties.
my goal is to merge multiple pointclouds (e.g. two Kinect2s, each running on a different machine) on one machine that then serves the merged pointcloud to interested parties in realtime over the network.
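For the merging goal, my rough mental model is that once both cameras are calibrated into a shared world coordinate system, merging is just concatenating point records. A toy sketch (illustrative Python; the translation offsets stand in for a real extrinsic calibration matrix):

```python
def to_world(points, offset):
    """Apply a per-camera translation (a stand-in for full extrinsic calibration)."""
    ox, oy, oz = offset
    return [(x + ox, y + oy, z + oz, r, g, b) for (x, y, z, r, g, b) in points]

cloud_a = to_world([(0.0, 0.0, 1.2, 255, 0, 0)], (0.0, 0.0, 0.0))  # from machine A
cloud_b = to_world([(0.0, 0.0, 1.2, 0, 255, 0)], (0.5, 0.0, 0.0))  # from machine B
merged = cloud_a + cloud_b      # aligned clouds merge by simple concatenation
assert len(merged) == 2
assert merged[1][0] == 0.5      # machine B's point landed at its world-space x
```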
Well, to make it work properly I suspect you need to write an extension to the Kinect plugin in C# that deals with packing all the data directly from the Kinect API, bypassing the vvvv part which sends it straight to the GPU…
[quote=“Glaze, post:1, topic:14764”]…
and they claim that they have published or were working on some networking nodes that would allow for sending and receiving pointcloud data? Although I found the repository, I can’t seem to find them.
[/quote]
not sure if it is still working, but here you can find stubs:
It is simple, really: read back the points you are interested in from the GPU, compress the data with Snappy or something else if necessary, and transmit. On the other side, recreate the pointcloud or merge it with the recipient's.
As you can imagine, the more points you can omit on the client (like background points), the more efficient the process becomes; it quickly outperforms transmitting the entire RGB/depth textures.
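As a toy illustration of that filtering step (Python; depth values in millimetres, as the Kinect2 reports them, with the threshold values just assumptions for a region of interest):

```python
# Drop "background" points outside a depth window before transmitting.
NEAR, FAR = 500, 2000   # keep points between 0.5 m and 2 m (assumed region of interest)

depths = [0, 450, 800, 1500, 3000, 7500]        # 0 means "no reading"
kept = [d for d in depths if NEAR <= d <= FAR]  # only these get compressed and sent
assert kept == [800, 1500]
```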
Thanks to everyone for answering, this was really helpful. I will check out the different projects.
The network modules from the dx11-pointcloud project were what I was originally looking for, but I will also have a look at the ZeroMQ project.
Does anyone know by any chance how to configure the Kinect in such a way that it separates the points belonging to a human from the rest of the pointcloud? I know how to do this in C#, but I am not aware of a way to do it in vvvv using the Kinect2 node from DX11-pointcloud…
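For reference, in the official Kinect2 SDK this is, as far as I understand, what the BodyIndexFrame gives you: one byte per depth pixel, where 0–5 identify tracked bodies and 255 means "no body". Whether the DX11-pointcloud node exposes that mask I don't know, but the filtering itself would look like this (illustrative Python, toy 1x6 frame):

```python
NO_BODY = 255  # Kinect2 body index convention: 0-5 = tracked body, 255 = none

body_index = [255, 0, 0, 255, 1, 255]           # one byte per depth pixel
depth      = [900, 910, 905, 4000, 1200, 3500]  # matching depth values (mm)

# Keep only depth pixels that belong to a tracked person
person_depths = [d for d, b in zip(depth, body_index) if b != NO_BODY]
assert person_depths == [910, 905, 1200]
```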
Also, how do you write custom C# code for a new node in vvvv?
I am very well aware that this is basic stuff for vvvv, I am just struggling a bit to get my head around it. I want to use vvvv for quick prototyping and feasibility testing.
Also, if the short answer to being able to create an executable file from a vvvv patch is no, I presume there is also a longer one? Is it possible to create a vvvv-based client application and distribute it?