We’ve started to work on Intel RealSense depth camera support for VL.

The nodeset will give you:

  • Color Image
  • Colorized Depth Image
  • Point Cloud (incl. filters provided by SDK)
  • Depth Pipet
  • Full control over RGB and Depth sensors
  • Full info about the Device, Streams, Sensors and Intrinsics

If you have any ideas, comments or wishes, please let us know.

The development is sponsored by wirmachenbunt.
How cool is that! Thank you very much.

GitHub repo with instructions on how to use the package (in alpha or gamma):



  • Camera runs async
  • Added Motion (IMU) data for D435i cams
  • Added GetIntrinsics node
  • If you’ve missed it: the package is available via nuget.org (check the Github page for instructions)




Hooray! So looking forward to this. How to test? And will this include the IMU data for D435i?

Hello guest,

IMU data should be easy to get as soon as we have a stable pipeline.
But we don’t have a D435i to test its IMU, so I’d ask you to test it once we reach that point.


Some progress:


And… now for the brave:

Please test and report.
It’s still a work in progress.
(IMU data, multiple devices, multithreading, etc. are not touched yet.)

@robotanton, the thing is we need the RealSense in vvvv beta and don’t care much about how it is implemented in VL. It would be nice to have an easy way to test the thing instead of this 4-step instruction including having to compile things and make references somewhere (which doesn’t work).

Can you please provide a zip with a working project?

Hi @u7angel,

please check the Github repo again.
Now we have a ready-to-install nuget tested with the current alpha and gamma.

Hope it works for you.


Hi, I have some difficulty installing. It always says the same thing:

@Aurel what exactly did you try to get this error?

I open any girlpower VL patch after having installed as explained on GitHub:

nuget install VL.Devices.RealSense -prerelease

Then I try to use the RealSense device, selecting it from the VL nugets. Here comes the error.


Hi @Aurel,

it looks like you’re using the beta.
Would you please try it with the current alpha instead:


Thanks Anton,
I tried with Alpha 64 & 32 but there is no menu in VL to go to the command line.
When I go to User…\Documents\vvvv\beta-preview\Packages> and run nuget install VL.Devices.RealSense -prerelease

nuget is not recognized…? Is this normal?
In User…\Documents\vvvv\
there is only beta-preview…

I don’t see how to do it.

Hi @Aurel, the CommandLine menu in alpha was moved to the quad menu:


Hope that helps.

Ok, it works. Thanks for your help.

Great work @robotanton! Works out of the box and as expected. At least the vvvv Gamma part. :)

The range for the D435 and D415 is specified at ~11 m. Well… I didn’t expect good results at that distance, but my cam (D435) seems to be limited already at a depth of 3.7 m. I tried different settings and lighting situations but nothing really changed. What did you experience, @robotanton or @u7angel, what was your maximum z-depth with the RealSense?

Just found the threshold for the depth-clipping. So: Everything is awesome!


Any chance to get a “raw” depth image corresponding to the Kinect depth image, with floats in the R channel for the distance in mm?

Searched for it and tried several node combinations and channel settings for the depth stream inside the “Config” settings of the RealSense node. But no success. Also the “Depth (Raw)” node seems not to work as expected.
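In the meantime, the conversion itself is simple: the SDK delivers raw depth frames as 16-bit unsigned integers in depth units, and multiplying by the sensor’s depth scale (queryable from the SDK; commonly 0.001 m per unit on D400-series cameras, but device-specific) gives metric distance. A minimal sketch in Python/NumPy; the function name and the default scale are my assumptions:

```python
import numpy as np

# Raw RealSense depth frames are 16-bit unsigned integers in "depth units".
# The depth scale below is the common D400-series default (0.001 m per unit);
# query the actual value from the device rather than hard-coding it.
def depth_units_to_mm(raw_depth, depth_scale_m=0.001):
    """Convert a raw 16-bit depth frame to float32 millimetres."""
    return raw_depth.astype(np.float32) * (depth_scale_m * 1000.0)

raw = np.array([[0, 1000], [3700, 65535]], dtype=np.uint16)
mm = depth_units_to_mm(raw)  # e.g. 3700 units -> 3700 mm at the default scale
```

The same multiply could live in a pixel shader to produce the float R-channel texture asked about above.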

Good news though: we managed to add the PointCloud output as a pos-buffer, which works out of the box with DX11.Particlepack. So, a fast home-brewed GPU point-cloud visualizer: ✔
But it would be nice to also approach this texture-based, as described above. Any ideas?
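For the texture-based route: going from a depth pixel to a 3D position is just pinhole back-projection with the stream intrinsics (focal lengths and principal point, which the GetIntrinsics node exposes), so it could also run in a shader. A sketch of the math in Python, ignoring lens distortion (the function name is mine):

```python
import numpy as np

def deproject(u, v, depth_m, fx, fy, ppx, ppy):
    """Back-project pixel (u, v) with metric depth to a camera-space point,
    assuming an undistorted pinhole model (fx/fy: focal lengths in pixels,
    ppx/ppy: principal point)."""
    x = (u - ppx) / fx * depth_m
    y = (v - ppy) / fy * depth_m
    return np.array([x, y, depth_m], dtype=np.float32)

# A pixel at the principal point lands on the optical axis:
p = deproject(320, 240, 1.0, fx=600.0, fy=600.0, ppx=320.0, ppy=240.0)
# p -> [0.0, 0.0, 1.0]
```

Applied per texel of the depth texture, this yields the same positions as the pos-buffer route.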


@timpernagel Could you please provide an example or just explain the way to get a GPU pos-buffer in V4 with Particlepack?

@Aurel I will look into the RealSense project by the end of the week and will post the steps here.

I just received my D435i, super tidy little camera! I have loaded up the demo patch, but I’m not sure how to get the data out into V4 to manipulate it, be that the point cloud, or how to pull the video feed out into a DX11 renderer. Pretty exciting stuff!