We’ve started working on
Intel RealSense Depth Cameras support for VL.
The nodeset will give you:
Colorized Depth Image
Point Cloud (incl. filters provided by SDK)
Full control over RGB and Depth sensors
Full info about the Device, Streams, Sensors and Intrinsics
If you have any ideas, comments or wishes, please let us know.
The development is sponsored by wirmachenbunt.
How cool is that! Thank you very much.
Github repo with instructions on how to use the package (in alpha or gamma):
Camera runs async
Added Motion (IMU) data for D435i cams
Added GetIntrinsics node
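As an aside on what the intrinsics are good for: with focal lengths and the principal point you can deproject a depth pixel into a 3D point yourself, which is essentially what the point cloud node does per pixel. A minimal sketch in Python (not VL) of the standard pinhole deprojection; the intrinsic values in the example are made up, not read from a device:

```python
# Pinhole deprojection: map a depth pixel (u, v) plus its depth in meters
# to a 3D point. Intrinsic values below are illustrative only.
def deproject(u, v, depth_m, fx, fy, ppx, ppy):
    """Return (X, Y, Z) in meters for pixel (u, v) at depth depth_m."""
    x = (u - ppx) / fx * depth_m
    y = (v - ppy) / fy * depth_m
    return (x, y, depth_m)

# Example: principal point at the center of a 640x480 frame
point = deproject(u=320, v=240, depth_m=1.0,
                  fx=600.0, fy=600.0, ppx=320.0, ppy=240.0)
print(point)  # a pixel at the principal point maps to (0.0, 0.0, 1.0)
```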
If you’ve missed it: the package is available via
nuget.org (check the Github page for instructions)
Thanks to vl’s new (super cool) .NET libraries feature, I’m trying to use the librealsense library. The lib seems to work great with vl: I have access to the streams and to a few options and functions. My issue is that I can’t pass the depth stream to vvvv in its original “Z16” format. For now I have to use the “colorized” depth texture instead, which arrives in vvvv as “R8G8B8A8_UNorm”. I guess this colorized depth texture is designed for visualisation and not to transmit the full depth precision …
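For context on why the raw Z16 stream matters: Z16 stores depth as 16-bit integers multiplied by a device depth scale (typically 1 mm per unit on the D400 series, though the exact scale is device-specific), while an 8-bit colorized channel can only distinguish 256 levels. A rough numpy sketch of what gets lost; the depth scale and the 0–4 m colorization range here are assumptions for illustration:

```python
import numpy as np

DEPTH_SCALE = 0.001  # meters per Z16 unit; typical for D400, device-specific

# Fake Z16 frame: two pixels 1 mm apart in depth
z16 = np.array([[1500, 1501]], dtype=np.uint16)
meters = z16.astype(np.float32) * DEPTH_SCALE
print(meters)  # the 1 mm step survives in the float depth values

# Colorizing to 8 bits over an assumed 0-4 m range quantizes
# depth into steps of ~15.7 mm
colorized = np.clip(meters / 4.0 * 255.0, 0, 255).astype(np.uint8)
print(colorized)  # both pixels land on the same 8-bit value; the step is lost
```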
looking at the meshed depth cam feed I can see why: it comes with ~10 cm of noise across the board
this cam is probably great for registering presence, and quantifying it on a per-person scale, but it is far from the fidelity of the late Kinect
seems to me Intel did a good job inventing a cool cam for obstacle avoidance, but it won’t be able to “see” a dance, in case the obstacle is actually a human