So, I have a pretty specific need but am not sure about the best way to tackle the problem. Maybe you have some pointers in mind.
I am building a 3D persistence-of-vision display and want to live-display a person on it. Think Star Wars holo-communicator.
I'll use a Kinect to scan the person and want to "sample" their point cloud in 3D space. So imagine a 64×64×64 cube containing the point cloud. I want to find out (in real time) where a Kinect point overlaps with a sample point of the cube, which then corresponds to a lit pixel in 3D space on the display.
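Just to make the idea concrete, here's roughly what I mean, sketched in NumPy terms (the cube bounds and the `voxelize` name are made up, and in practice this would of course need to happen on the GPU or in whatever node environment I end up using):

```python
import numpy as np

def voxelize(points, grid=64, lo=-1.0, hi=1.0):
    """Map an (N, 3) array of Kinect points in [lo, hi) metres
    into a grid^3 boolean occupancy cube: True = lit pixel."""
    occ = np.zeros((grid, grid, grid), dtype=bool)
    # which cell each point falls into
    idx = np.floor((points - lo) / (hi - lo) * grid).astype(int)
    # drop points outside the cube
    inside = np.all((idx >= 0) & (idx < grid), axis=1)
    idx = idx[inside]
    occ[idx[:, 0], idx[:, 1], idx[:, 2]] = True
    return occ
```

So a point at the origin would light the voxel in the middle of the cube, and everything outside the 2 m box is simply dropped.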
The output is a little tricky. I am constrained on bandwidth, so I want to pack 8 pixels into one byte (making the display monochrome) and send those bytes via TCP to my display controller.
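For the packing step, something like this is what I have in mind (MSB-first bit order is just my assumption; the controller might want it the other way around):

```python
import numpy as np

def pack_frame(occ):
    """Flatten a boolean voxel cube and pack 8 voxels per byte,
    most significant bit first. 64^3 voxels -> 32768 bytes."""
    return np.packbits(occ.reshape(-1)).tobytes()
```

That payload would then go out over the TCP socket once per frame.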
Soo… what do you think is the best way to go from a Kinect input to this very peculiar byte sequence as output? Is a compute shader the way? Maybe instance noodles? Or is there a simpler, node-based way I am missing?
Thanks so much!