@robotanton , the thing is we need the RealSense in vvvv beta and don’t care much about how it is implemented in VL. It would be nice to have an easy way to test the thing instead of this 4-step instruction including having to compile things and making references somewhere (which doesn’t work).
Great work @robotanton ! Works out of the box and as expected. At least the vvvv Gamma part. :)
The range for the D435 and D415 is specified at ~11m. Well… I didn’t expect good results at that distance, but my cam (D435) already seems to be limited at a depth of 3.7m. I tried different settings and lighting situations, but nothing really changed. What did you experience, @robotanton or @u7angel? What was your maximum z-depth with the RealSense?
Any chance to get a “raw” depth image corresponding to the Kinect depth image, with floats in the R channel giving the distance in mm?
I searched for it and tried several node combinations and channel settings for the depth stream inside the “Config” settings of the RealSense node, but no success. The “Depth (Raw)” node also doesn’t seem to work as expected.
Good news though: we managed to add the PointCloud output as a pos-buffer, which works out of the box with the DX11.Particlepack. So, fast homebrew GPU pointcloud visualizer: ✔
But it would be nice to also approach this texture-based, as described above. Any ideas?
I just received my D435i, super tidy little camera! I have loaded up the demo patch, but I’m not sure how to get the data out into V4 to manipulate it, be that the pointcloud, or how to pull the video feed out into a DX11 renderer. Pretty exciting stuff!