Azure Kinect Pointcloud node Memory Leak

The point cloud node is allocating a lot of memory. In 5 minutes, RAM usage for the pointcloud howto went from 1.8 GB to 4.3–4.8 GB, occasionally going down but generally trending upwards. We have an installation that currently uses this node, and we found that halfway through the day it would use 12 GB and then crash the application.

This applies to NuGet version 1.4.1; 1.4.3 did not work.

Curious what did not work about the 1.4.3 release (which was updated by me). Which vvvv version are you using? (The last version was tested with 5.2 on several computers without issue.)

In this release, you will find an alternative that uses the PointCloudImage16 data type from the Azure Kinect SDK and a CS (compute shader) to use it.

Hi @motzi , I can’t access the PC at the moment but when I can, I’ll update this message with more detail and some images.

When running the pointcloud/skia3d example:

  • The device info didn’t show.
  • I had to disconnect and reconnect the device physically.
  • It then only showed the initial snapshot of the scene as a pointcloud and didn’t update.

Trying the Pointcloud/Stride demo:

  • Same problem with disconnecting. This time I decided to reduce the number of points to maybe help with the data throughput, but
  • the pointcloud didn’t update after the first frame, while the texture did.
  • The texture had 0.5–1 second of lag.

My setup

  • Kinect using data and power over a USB C-C cable
  • USB C-C 7 m fibre-optic cable
  • AMD-based PC, connecting through the front hub (the only one available)

So unfortunately I didn’t try the 16-bit stuff, because of the other issues, which meant I couldn’t completely trust it to work okay on startup. In your test, do you see the RAM climbing if you have over 20,000 points?

There’s a comment on the PointCloud node that outputs a Spread (which is used in the Skia example):


I actually never used this apart from quick testing, and after reading this helppatch comment (Overview over available nodes) I probably wouldn’t rely on it either.
I’m not at a computer with an AK right now, so unfortunately I cannot test.

However, I remember that when using those image-based nodes (like PointCloudImage16) I would see a sawtooth-like shape in RAM usage, filling up and clearing again, but not when using the DepthImage/ColorToDepth image nodes in combination with image-to-texture (as shown in the Visualize the Depth pointcloud in Stride nodes). Also, I don’t remember crashes because of this. This build-up in RAM is most likely due to the way images are released in .NET, where you don’t really have control over the GC, which frees memory whenever it likes.
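That GC-driven sawtooth happens because memory piles up until the collector runs, instead of being freed when an image goes out of scope. The deterministic alternative is .NET’s IDisposable/`using` pattern; here is a small Python sketch of the same idea using a context manager (the class and its names are illustrative, not from the SDK):

```python
class NativeImage:
    """Toy stand-in for an image backed by a native buffer."""

    def __init__(self, nbytes: int):
        self._buf = bytearray(nbytes)  # pretend this is native memory
        self.released = False

    def release(self):
        """Deterministic cleanup, analogous to .NET's Dispose()."""
        self._buf = None
        self.released = True

    # context-manager protocol, analogous to a C# `using` block
    def __enter__(self):
        return self

    def __exit__(self, *exc):
        self.release()


# The buffer is released exactly at the end of the block, instead of
# "whenever the GC likes" -- which is what produces the sawtooth.
with NativeImage(1024) as img:
    pass  # ... use the image ...

print(img.released)  # True
```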

Regarding the startup-problems:
You’re saying those issues are only there in 1.4.3 but not in 1.4.1? I know that there are some startup-helpers in place (already from the original code I extended) but I don’t remember touching those. So I wonder what would cause the issues here.
I know that the data connection is quite sensitive to the USB configuration (chipset and cables), and I’ve used it successfully with those long Firenex uLink USB cables going up to 16 m. Did you try the Kinect on your computer with a short cable, and did you experience the same problems?

Also regarding connection stuff:
I tend to always try the Kinects with the AzureKinectViewer tool first. If I get them to run stably there over a few minutes, they usually work in vvvv as well.

Correct, 1.4.1 connects mostly without issue, but 1.4.2 is a handful.
I’ll take a look again to see if PointCloudImage16 helps.

No Output:
Output, but no preview:

Also, would I need to write a CS to calculate the hitpoints? I don’t think I have time to figure that out.

Edit: I’m using an AMD GPU, and maybe that affects the CS side of things.

Regarding your screenshots above:
You are setting the format for the Color Image (RGB camera) there, which is why only ColorXXX formats (like ColorYUV2, …) make sense here. Setting this to a format that is only valid for the depth camera apparently breaks things.

So, if only the depth in certain areas is necessary for your application, it should suffice to just use a Pipet on the depth texture (no need for the PointCloud nodes here).

Here’s a little patch that does this and also applies some primitive filtering by sampling multiple points around the actual point of interest and removes outliers.

Kinect_Depth_Sampling.vl (45.2 KB)
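For reference, the sampling-and-outlier-removal idea in that patch could look roughly like this in Python/NumPy. This is a sketch only; the function name, the neighbourhood radius and the MAD threshold are my assumptions, not taken from the .vl file:

```python
from typing import Optional

import numpy as np


def sample_depth(depth_mm: np.ndarray, x: int, y: int,
                 radius: int = 3, mad_factor: float = 2.5) -> Optional[float]:
    """Robustly sample depth (in metres) around pixel (x, y).

    depth_mm: 2D uint16 depth image in millimetres (0 = no measurement).
    Samples a (2*radius+1)^2 neighbourhood, drops invalid pixels and
    outliers (median-absolute-deviation test), returns the median.
    """
    h, w = depth_mm.shape
    x0, x1 = max(0, x - radius), min(w, x + radius + 1)
    y0, y1 = max(0, y - radius), min(h, y + radius + 1)
    patch = depth_mm[y0:y1, x0:x1].astype(np.float64).ravel()

    valid = patch[patch > 0]  # 0 means the sensor saw nothing there
    if valid.size == 0:
        return None  # the whole neighbourhood is a hole

    med = np.median(valid)
    mad = np.median(np.abs(valid - med))
    if mad > 0:  # with mad == 0 the kept values all agree anyway
        valid = valid[np.abs(valid - med) <= mad_factor * mad]

    return float(np.median(valid)) / 1000.0  # mm -> m
```

Sampling a small neighbourhood instead of a single pixel makes the reading survive the flickering zero-depth holes that time-of-flight sensors produce at object edges.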

hope this helps.

(I faintly remember that for Kinect2 there were some shaders that would attempt to remove empty areas with a similar filtering goal in mind. No idea where they are to be found, though…)


There is FillHoles (and the corresponding shaders) by @microdee. This was originally on our “to port list” for the addons I think, don’t remember why we skipped it.

Can you please be more specific? Which howto exactly demonstrates the memory leak going up to 12 GB and crashing?