Hey guys,
So, I have a pretty specific need but am not sure about the best way to tackle the problem. Maybe you have some pointers in mind.
I am building a 3D persistence-of-vision display and want to live-display a person on it. Think Star Wars holo-communicator.
I'll use a Kinect to scan the person and want to "sample" the resulting point cloud in 3D space. So imagine a 64x64x64 cube with the point cloud inside it. I want to find out (in real time) where a Kinect point overlaps with a sample point of the cube, which then corresponds to a lit pixel in 3D space on the display.
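A minimal sketch of that sampling step (assuming the Kinect points are already transformed relative to the sampling cube; `voxelize`, `GRID` and the cube parameters are just illustrative names, not from any existing patch):

```python
import numpy as np

GRID = 64  # 64x64x64 sample cube

def voxelize(points, cube_min, cube_size):
    """Mark every cube cell that contains at least one Kinect point.

    points    : (N, 3) float array of Kinect points in world space
    cube_min  : (3,) corner of the sampling cube in world space
    cube_size : edge length of the cube in world units
    """
    # map world coordinates into 0..GRID-1 cell indices
    idx = np.floor((points - cube_min) / cube_size * GRID).astype(int)

    # keep only points that actually fall inside the cube
    inside = np.all((idx >= 0) & (idx < GRID), axis=1)
    idx = idx[inside]

    occupancy = np.zeros((GRID, GRID, GRID), dtype=bool)
    occupancy[idx[:, 0], idx[:, 1], idx[:, 2]] = True
    return occupancy
```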
The output is a little tricky. I am constrained for bandwidth, so I want to pack 8 pixels into one byte (making the display monochrome) and send those bytes via TCP to my display controller.
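And a rough sketch of the packing/sending side, assuming one bit per voxel and a hypothetical controller address; the voxel-to-bit ordering of course has to match whatever your display controller expects:

```python
import socket
import numpy as np

HOST, PORT = "192.168.1.50", 9000   # hypothetical display controller address

def send_frame(occupancy, sock):
    """Pack the 64x64x64 bool grid into 64*64*64/8 = 32768 bytes and send it."""
    bits = occupancy.ravel()     # flatten in C order; pick whatever order your display needs
    packed = np.packbits(bits)   # 8 voxels per byte, MSB first
    sock.sendall(packed.tobytes())

# usage:
# with socket.create_connection((HOST, PORT)) as sock:
#     send_frame(voxelize(points, cube_min, cube_size), sock)
```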
So… what do you think is the best way to get from a Kinect input to this very peculiar byte sequence as output? Is a compute shader the way to go? Maybe instance noodles? Or is there a simpler, node-based way I am missing?
For https://www.youtube.com/watch?v=Jgr6_tVoRrM we had similar problems to tackle.
The idea back then was that all 3D content is just a set of basic 3D elements: lines, planes and maybe spheres. But as I recall, most of it was modeled as lines. Thick lines, that is (i.e. cylinders). So basically all scenes were just animated lines. In your case you could take the skeleton and translate each bone into a line.
Now what you would like to have is a way of sampling that 3D scene and getting a bool telling you whether you are inside or outside. For that you'd calculate the distance to each line (segment), and if it is smaller than a certain value you are "inside" the line. So basically what we are talking about is a signed distance field. Back then we did it with a shader + readback.
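For reference, the per-sample test for one thick line boils down to a point-to-segment distance compared against the line's radius. A minimal CPU-side sketch of that idea (the shader version is the same math per voxel; names are illustrative, not from the original patch):

```python
import numpy as np

def inside_thick_line(p, a, b, radius):
    """True if sample point p lies inside the capsule around segment a-b.

    p, a, b : (3,) float arrays
    radius  : thickness of the line (capsule radius)
    """
    ab = b - a
    # parameter of the closest point on the segment, clamped to [0, 1]
    t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    closest = a + t * ab
    return np.linalg.norm(p - closest) <= radius

def sample_scene(p, segments, radius):
    """A voxel is lit if it lies inside any bone segment of the skeleton."""
    return any(inside_thick_line(p, a, b, radius) for a, b in segments)
```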