For context: http://www.grasshopper3d.com/video/kinect-grasshopper. That is what I would like to do, but with mighty vvvv instead of Processing. She is sending the data via UDP too (a long comma-separated string, I guess), but if there is a way to do it using fewer resources, I would prefer that.
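For anyone wondering what "a long comma-separated string over UDP" looks like in practice, here is a minimal sketch in Python. The host, port, and number formatting are assumptions for illustration, not anything the video prescribes; a Grasshopper UDP receiver would split the payload on commas and regroup the values in threes.

```python
# Hypothetical sketch: send point coordinates over UDP as one long
# comma-separated string, as in the Kinect->Grasshopper demo.
# Host/port and 3-decimal formatting are illustrative assumptions.
import socket

def send_points(points, host="127.0.0.1", port=6789):
    # points: iterable of (x, y, z) tuples
    payload = ",".join(f"{c:.3f}" for p in points for c in p)
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(payload.encode("ascii"), (host, port))
    sock.close()
    return payload

# Two points become one flat string: "x1,y1,z1,x2,y2,z2"
msg = send_points([(0, 0, 1.25), (0.5, 0, 1.10)])
```

One UDP datagram per frame keeps things simple, but the packet size grows with the point count, which is one reason to thin the cloud before sending.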
@antokhio I could reduce the number of points in Grasshopper.
All I need from vvvv is a bunch of points in space and a good way to send their coordinates to Grasshopper. In Grasshopper there is a tool that analyses the brightness of a still picture. Isn’t there something similar in vvvv, where the brighter the pixel, the higher the value? With that I could just put a matrix of points “over” the depth texture and evaluate the pixel under each point for its brightness. That value would be the Z coordinate of the point. This would result in a very simple point cloud. <— strike that!
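The brightness-to-height idea sketched above is easy to show outside vvvv. Below, a plain Python 2D list stands in for the depth texture; the per-pixel lookup is what Pipet (EX9.Texture) would do in a vvvv patch. The function name and grid layout are illustrative assumptions.

```python
# Hypothetical sketch: lay a grid of points "over" a grayscale depth
# image and use each sampled pixel's brightness as the Z coordinate.
def brightness_point_cloud(image, grid_x, grid_y, z_scale=1.0):
    # image: 2D list of brightness values 0..255 (rows of pixels)
    h, w = len(image), len(image[0])
    points = []
    for gy in range(grid_y):
        for gx in range(grid_x):
            # map grid position to a pixel coordinate
            px = int(gx * (w - 1) / max(grid_x - 1, 1))
            py = int(gy * (h - 1) / max(grid_y - 1, 1))
            z = image[py][px] / 255.0 * z_scale  # brighter pixel -> higher Z
            points.append((gx, gy, z))
    return points

# 2x2 test image: dark top row, bright bottom row
cloud = brightness_point_cloud([[0, 0], [255, 255]], 2, 2)
```

The result is exactly the "very simple point cloud" described: a regular XY grid whose Z values come straight from pixel brightness.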
That’s what Pipet (EX9.Texture) is for.

Thanks bjoern, I think I can take it from here. I will post my vvvv and Grasshopper solution when it’s done.
@andresc4 Hi, I’m very much an amateur with vvvv. I downloaded your “pointcloudbasec4.v4p”, but when I open it in vvvv I just get this (please look at the picture). How can I see the view from my Kinect v2 in the renderer?
My second question: how can I export the renderer output as a sequence of .obj files, to later import into a 3D application?
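On the .obj-sequence question: I don't know of a stock vvvv node that does this directly (a Writer patch or plugin would be needed), but the Wavefront format itself is trivial for raw point data, just one `v x y z` line per point and one numbered file per frame. A hedged sketch of that writer, with an assumed output folder and filename pattern:

```python
# Hypothetical sketch: dump a sequence of point clouds as numbered
# Wavefront .obj files (vertices only, no faces), one file per frame.
import os

def write_obj_sequence(frames, out_dir="obj_seq"):
    # frames: list of point clouds, each a list of (x, y, z) tuples
    os.makedirs(out_dir, exist_ok=True)
    paths = []
    for i, points in enumerate(frames):
        path = os.path.join(out_dir, f"frame_{i:04d}.obj")
        with open(path, "w") as f:
            for x, y, z in points:
                f.write(f"v {x} {y} {z}\n")  # one vertex per line
        paths.append(path)
    return paths

# two single-point frames -> obj_seq/frame_0000.obj, obj_seq/frame_0001.obj
paths = write_obj_sequence([[(0, 0, 0)], [(0, 0, 1)]])
```

Most 3D packages can import such a vertex-only .obj as a point cloud; if the target software needs faces, the frames would have to be meshed first.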