Intel RealSense + skeletons, anyone?

Has anyone tried the RealSense cameras with skeleton tracking, particularly in low light? It requires a separate SDK, but I was wondering if anyone had details on performance (and also CPU load). I’m planning on using NUCs …

You can check Cubemos performance with the example application on the 30-day free trial.
Why are you considering RealSense + Cubemos over other solutions like Orbbec or Nuitrack, if I may ask? (Those are also already available for VL.)

I was wondering what the options were, to be honest. What’s the latest with the others? I thought Orbbec had low-light issues too. I need skeletons more than depth, but preferably in IR.

I can test Astra Body Tracking in low/zero light in the coming days for you; I have one at home.
I also tried the Cubemos demo with the trial on an Intel i7-6700: all cores were averaging 30-50%, the app running at about 40% total CPU usage, plus ~20% GPU usage on a 1080 Ti … but it was only running at roughly 20 fps, while being very solid. I didn’t look further into it as it didn’t seem to fit the scenario I’m facing.

Not exactly sure how much Cubemos benefits from the depth of the D435 I have (or whether it’s using it at all?). I wasn’t sure if it maybe just does everything in RGB like OpenPose. I don’t have a regular webcam to test with right now …
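If anyone wants to check, one rough way would be to enable only the colour stream of the D435 and hand that image to the tracker, so depth can’t be involved at all. A minimal sketch using Intel’s pyrealsense2 Python wrapper (the resolution and framerate are just example values):

```python
# Rough test: enable only the colour stream of the D435 (no depth at all) and feed
# that image to the tracker. pyrealsense2 is Intel's Python wrapper for librealsense;
# 640x480 @ 30 fps is just an example configuration.
import numpy as np
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)  # colour only
pipeline.start(config)
try:
    frames = pipeline.wait_for_frames()
    color = np.asanyarray(frames.get_color_frame().get_data())
    # 'color' is now a plain BGR image -- feed it to whatever 2D tracker you want to test
    print(color.shape)
finally:
    pipeline.stop()
```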

As Nuitrack body tracking doesn’t rely on the actual body-tracking implementation of the hardware being used but does its own thing instead, the quality of skeletal tracking in low light might also differ between Nuitrack and the Astra SDK on the same hardware.

Is Kinect still the best solution?

I don’t know; that’s not what I said, and I didn’t compare it to Kinect … I don’t want to use discontinued hardware.

You could also check out MediaPipe Pose. GPU acceleration on desktop only works on Linux or in the browser (JS), though. Also no idea how it fares with black-and-white (IR) images. Here’s an in-browser demo:
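For anyone who’d rather poke at it outside the browser, here’s a minimal sketch of the Python flavour (assuming the mediapipe and opencv-python packages; webcam index 0 is just an example, and I haven’t tried it on IR images):

```python
# Minimal sketch of MediaPipe Pose from Python (runs on CPU on desktop).
# Assumes the mediapipe and opencv-python packages are installed.
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose
cap = cv2.VideoCapture(0)  # example webcam index
with mp_pose.Pose(model_complexity=1, min_detection_confidence=0.5) as pose:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB, OpenCV delivers BGR
        results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.pose_landmarks:
            # normalised (0..1) image coordinates per landmark, e.g. the nose:
            nose = results.pose_landmarks.landmark[mp_pose.PoseLandmark.NOSE]
            print(round(nose.x, 3), round(nose.y, 3), round(nose.visibility, 2))
cap.release()
```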


Quite a bit off and a bit slower (15 fps) compared to Cubemos, but nice to know, thanks.

I’m looking at a few RGB-only solutions (wrnch, DeepMotion, OpenPose.NET and hand-cranked examples) and I’ll report back here.

Note that most promotional footage shows best-case results.

What I’ve seen so far is that they work pretty well if you stand at a specific distance from the camera and move parallel to it (along the x plane), but they produce pretty mixed results with movements towards or away from the camera. Secondly, you’ll see a lot of jiggle, and this is down to how the neural nets work. I’m not sure about 3D yet, but I think that in 2D the joint position estimates are made on a 64x64 grid*. A joint that falls between two points on the grid ‘activates’ both, and the strength of those activations determines which point the joint is pulled closest to. So it depends on how accurate you need to be.
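To make the grid idea concrete, here’s a toy sketch of how a 2D joint could be decoded from one 64x64 heatmap. The neighbour-weighting and the numbers are just my own illustration, not how any particular SDK does it:

```python
# Toy sketch of heatmap-based joint decoding (my own illustration, not a specific SDK):
# the net outputs a coarse grid of activations per joint, and the estimate is pulled
# towards the strongest cells -- which is also where the frame-to-frame jiggle comes from.
import numpy as np

def decode_joint(heatmap, image_w, image_h):
    """heatmap: a 2D grid of activations for one joint, e.g. 64x64."""
    grid_h, grid_w = heatmap.shape
    y, x = np.unravel_index(np.argmax(heatmap), heatmap.shape)  # strongest cell
    # weight the strongest cell against its direct neighbours, so a joint that falls
    # between two cells 'activates' both and ends up somewhere in between
    xs, ys, ws = [], [], []
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            ny, nx = y + dy, x + dx
            if 0 <= ny < grid_h and 0 <= nx < grid_w:
                xs.append(nx); ys.append(ny); ws.append(heatmap[ny, nx])
    w = np.asarray(ws, dtype=float)
    w /= max(w.sum(), 1e-9)
    gx, gy = float(np.dot(w, xs)), float(np.dot(w, ys))
    # back to pixels: a 64-cell grid over a 640 px image means ~10 px per cell,
    # which bounds how precise any single estimate can be
    return gx / grid_w * image_w, gy / grid_h * image_h

# e.g. decode_joint(np.random.rand(64, 64), 640, 480) -> approximate (x, y) in pixels
```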

Going back to RGBD though, the skeleton tracking on the Zed 2 looks pretty compelling.

*From my own small experiment

Do you have a Zed? What is the low-light performance like? I need to track outdoors at night, so at best street lighting as the only illumination. Ideally I’d like IR skeleton tracking, which seems to have been sacrificed at the altar of ML, unless MS patented it, I guess.

Just got my networking patch working and tested it on a LattePanda v1 (Atom x5-Z8300 processor). Sending just the skeleton data, I get an average frame rate of 50 fps and a minimum (odd spikes) of 20-ish; streaming the depth texture as well, it averages 25 fps with minimums of 10. On a LattePanda Alpha it was streaming texture and depth at about 50. This is what I find frustrating with these new cameras: the minimum specs for skeleton tracking on the Intel RealSense are a new i5 processor and a GPU. I call for luddites everywhere to smash the AI! Just wish they’d kept making Kinects, or bundled a RISC processor in the Azure!
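For a sense of scale, here’s a back-of-the-envelope sketch of why skeleton-only streaming is so much lighter than also streaming the depth texture. The joint count, packing and address are made-up assumptions, not my actual patch:

```python
# Back-of-the-envelope: streaming just the skeleton vs. the depth texture.
# The joint count, packing and address below are made-up assumptions.
import socket
import struct

JOINTS = 18                                   # one skeleton, 18 joints
PACKET = struct.Struct("<" + "fff" * JOINTS)  # x, y, z per joint = 216 bytes per frame

def send_skeleton(sock, addr, joints):
    """joints: iterable of (x, y, z) tuples, sent as one small UDP datagram."""
    flat = [c for joint in joints for c in joint]
    sock.sendto(PACKET.pack(*flat), addr)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_skeleton(sock, ("127.0.0.1", 9000), [(0.0, 0.0, 0.0)] * JOINTS)

# For comparison, a single 640x480 16-bit depth frame is 614,400 bytes -- roughly
# 2,800x the skeleton packet -- which is why the Atom keeps up when it only has
# to push skeletons.
```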

