VL.Devices.Kinect2

@catweasel interesting point, both of those features are packed into a single node (each) in DX11, but maybe splitting them into a few more nodes, allowing more configuration and control, would be a good idea. Do you have any suggestions in particular?

Quick teaser of the current state:

[video: Kinect2Teaser_official]

You can already follow the instructions in the github repository and try it out yourself.

Nuget package coming soon.


Great work Randall!


Soo, we are trying to figure out how to handle (if at all) the Frame Index output pin found on all stream nodes (RGB, Depth, IR, etc.) in DX11. As far as I have managed to understand, it is mainly there for synchronization purposes.

Has anyone ever needed/used this pin? Is it important for us to include it in the new version or not? Please post your thoughts on this.
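For context, a hedged sketch of how such an index relates to the official C# SDK: as far as I can tell the SDK exposes no frame index at all, only a per-frame RelativeTime timestamp, so a Frame Index pin is presumably just a counter incremented on each FrameArrived event, with RelativeTime being the value actually meant for cross-stream synchronization.

```csharp
// Minimal sketch, assuming the official Microsoft.Kinect (Kinect for
// Windows v2) SDK; not the actual DX11 node code.
using System;
using Microsoft.Kinect;

class FrameIndexSketch
{
    static void Main()
    {
        var sensor = KinectSensor.GetDefault();
        sensor.Open();

        long frameIndex = 0; // emulated "Frame Index"
        var reader = sensor.ColorFrameSource.OpenReader();
        reader.FrameArrived += (s, e) =>
        {
            using (var frame = e.FrameReference.AcquireFrame())
            {
                if (frame == null) return;
                frameIndex++;
                // RelativeTime is the SDK's own timestamp, comparable
                // across the Color/Depth/IR streams for synchronization.
                TimeSpan t = frame.RelativeTime;
            }
        };

        Console.ReadLine();
        reader.Dispose();
        sensor.Close();
    }
}
```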

More updates (a bit on the devvvv side):

After struggling for a few days with bugs related to the lifecycle of the KinectSensor and its various streams, as reported here, and thanks to the help of @Elias, we decided to change the design of the nodeset so that it better handles the creation and disposal of the different objects involved.

To sum it up quickly: instead of having one Kinect2 node as a source and a consumer node per stream (RGB, Depth, etc.), the new proposal has just a single Kinect2 node with an output for each of the existing streams.

This is an example of how the teaser a few messages above would look in the new version:

So far in our tests everything seems to behave as expected with this new approach, and you can try it yourself by checking out the observable-based branch in the project’s github repository.
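For the curious, the general pattern on the C# side looks roughly like this (a minimal sketch using System.Reactive, not the actual VL.Devices.Kinect2 code): the sensor is the single source, and each stream is exposed as an IObservable whose subscription count drives the opening and disposal of the underlying reader.

```csharp
// Sketch only: one shared sensor, streams as ref-counted observables.
using System;
using System.Reactive.Linq;
using Microsoft.Kinect;

static class KinectStreams
{
    public static IObservable<DepthFrameArrivedEventArgs> DepthFrames(KinectSensor sensor)
    {
        return Observable.Create<DepthFrameArrivedEventArgs>(observer =>
        {
            // The first subscriber opens the reader...
            var reader = sensor.DepthFrameSource.OpenReader();
            EventHandler<DepthFrameArrivedEventArgs> handler = (s, e) => observer.OnNext(e);
            reader.FrameArrived += handler;
            // ...and this teardown runs when the subscription is disposed.
            return () =>
            {
                reader.FrameArrived -= handler;
                reader.Dispose();
            };
        })
        .Publish()
        .RefCount(); // share a single reader among all subscribers
    }
}
```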

It would be very valuable to hear any opinions regarding this change before we fully commit to it, so please join the discussion and share your thoughts.

Cheers!

After some discussion on the model proposed in my previous post, the decision was to stick with our original plan: per-stream specific nodes have returned and are now available in both standard and reactive versions. The same is true for Skeleton.

This is what the Overview help patch looks like as of version 0.1.13-alpha:

And here is the current nodeset:

[image: current nodeset]

Go ahead and give it a spin; you can install via nuget following the instructions here.

We have added help files with an overview, a Skeleton usage example, and a Skia Depth PointCloud example as well. Make sure to check the examples out at:

[Documents]\vvvv\gamma-preview\Packages\VL.Devices.Kinect2\help\Kinect2\General\

Cheers!

And here is a teaser of the PointCloud demo:

[video: PointCloudTeaser]

Big up to @tonfilm for all the help putting this demo together!


hey @ravazquez

this is awesome (and a great learning resource)!
there’s one thing I miss though: the quaternion output for the “head” joint, which is always zero. the dx11 kinect-nodes have the same output, but the quaternion for the head can be retrieved via the “Face” node.

did you consider porting the face functionality as well? head orientation would be nice to have, to know in which direction people are looking.

Hi @motzi, glad you are enjoying it!

I did not know that the head quaternion is always 0; I wonder why that is. Will add it to my list.

In any case, Face functionality will be implemented in the near future, but I have other things at the top of my list at the moment. I will try to get this done sooner now that I know someone needs it :)

In the meantime, thanks a lot for reporting and stay tuned.


With the pointcloud viewer, Perf meter is giving me 8fps, or 20 with the tab closed. Is that normal?

@catweasel are you on the latest version?

This is all CPU, so your machine might have something to do with it, but do play with the scale factor to improve performance at the cost of detail. Also avoid any IOBoxes showing data, as this will put the patch in debug mode (which is why closing the tab helps).

Lastly, I am currently working on a PointCloud-specific node which promises to be easier to use and hopefully more performant.

It is scaled by 0.42 as it is in the patch; it’s the repeat region that is sapping the CPU (100,000 ticks). I guess the issue is looping through all those points.

@catweasel I just published a draft version of the new PointCloud node and help patch. Scaling and mapping features are still WIP, but performance should be better compared to the previous solution. Please test version 0.1.16-alpha and report.
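Under the hood the point cloud boils down to very little code; here is a hedged sketch in terms of the official SDK (not the actual node’s implementation). Preallocating both buffers once is what keeps the per-frame work cheap:

```csharp
// Sketch, assuming the official Microsoft.Kinect SDK.
using Microsoft.Kinect;

class DepthPointCloud
{
    readonly CoordinateMapper mapper;
    readonly ushort[] depthData;
    readonly CameraSpacePoint[] points; // XYZ in meters, camera space

    public DepthPointCloud(KinectSensor sensor)
    {
        mapper = sensor.CoordinateMapper;
        var desc = sensor.DepthFrameSource.FrameDescription; // 512 x 424
        depthData = new ushort[desc.LengthInPixels];
        points = new CameraSpacePoint[desc.LengthInPixels];
    }

    // Called once per depth frame; no allocations happen in here.
    public CameraSpacePoint[] Update(DepthFrame frame)
    {
        frame.CopyFrameDataToArray(depthData);
        // A single SDK call maps every depth pixel to a 3D point.
        mapper.MapDepthFrameToCameraSpace(depthData, points);
        return points;
    }
}
```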

hi, is it possible to have the “grabbing” gesture available as well?
i tried to use the distance between the handtip and the thumb joint, but it does not seem to have the same reliability as the grabbing from their SDK.

Are those values available by any chance?

Hi @clmns,

I am not sure if you are talking about the Gesture recognition section of the API or about the Hand tracking section, but the short answer is: no, these are not yet implemented; they will come in the near future.

Of course, you and anyone else are more than welcome to contribute to the repository/library in the meantime to get it done faster.

Cheers.

hi @ravazquez,

I am talking about the hand state, which is not really a gesture, so I guess it is part of the Hand tracking API.

in VVVV, and I think in their SDK, there are 5 states that can be distinguished for each hand:

- open
- closed
- lasso (index finger only)
- unknown
- not tracked

although the data is super bad, it would still be nice to have it in VL as well…
I will check your code and their C# code where it is used, no guarantees though :>

greetings,
clemens

Quick update: (and heads up for @motzi and @clmns!)

Version 0.1.31-alpha now includes support for the Face API (no Face HD just yet).


A help patch showing how to use this feature was also included at:

VL.Devices.Kinect2\help\Kinect2\General\HowTo Track Faces and Extract Features.vl

In order for this to work you need to copy the NuiDatabase directory that ships with the Kinect SDK from

C:\Program Files (x86)\Microsoft SDKs\Windows\v8.0\ExtensionSDKs\Microsoft.Kinect.Face\2.0\Redist\CommonConfiguration\x64

to

...\Documents\vvvv\[your-gamma-directory]\Packages\Microsoft.Kinect.Face.x64.2.0.1410.19000\lib\net45\

so that it sits right next to Microsoft.Kinect.Face.dll like so:

[image: NuiDatabase directory next to Microsoft.Kinect.Face.dll]

This will probably be reworked in the future so that the copying is automated.

Everything should work out of the box after that. If you see the RGB image but no face detection, try to move so that your full upper body fits in the frame (at least for initial detection).
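For reference, the SDK side of this looks roughly as follows (a sketch using the official Microsoft.Kinect.Face types, not the actual node code). Note the RotationOrientation feature: this is where the head orientation @motzi asked about comes from, since the skeleton’s head joint carries no rotation of its own.

```csharp
// Sketch, assuming the official Microsoft.Kinect and
// Microsoft.Kinect.Face assemblies.
using Microsoft.Kinect;
using Microsoft.Kinect.Face;

class FaceSketch
{
    readonly FaceFrameSource faceSource;
    readonly FaceFrameReader faceReader;

    public FaceSketch(KinectSensor sensor)
    {
        faceSource = new FaceFrameSource(sensor, 0,
            FaceFrameFeatures.BoundingBoxInColorSpace |
            FaceFrameFeatures.RotationOrientation);
        faceReader = faceSource.OpenReader();
        faceReader.FrameArrived += (s, e) =>
        {
            using (var frame = e.FrameReference.AcquireFrame())
            {
                var result = frame?.FaceFrameResult;
                if (result == null) return;
                var headRotation = result.FaceRotationQuaternion; // Vector4
            }
        };
    }

    // The face tracker follows one body: feed it a tracked body's id.
    public void Track(Body body) => faceSource.TrackingId = body.TrackingId;
}
```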

Teaser:

[image: face tracking teaser]

Also, Version 0.1.32-alpha adds support for basic Hand state recognition.


Recognized states are Opened, Closed, Lasso, Not Tracked and Unknown.

Tracking certainty information is also provided.
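This data comes more or less straight from the SDK’s Body type; a minimal sketch (official Microsoft.Kinect types, not the actual node code):

```csharp
using Microsoft.Kinect;

static class HandStateSketch
{
    public static void Report(Body body)
    {
        if (body == null || !body.IsTracked) return;

        // HandState: Open, Closed, Lasso, NotTracked or Unknown.
        HandState right = body.HandRightState;
        // TrackingConfidence: Low or High.
        TrackingConfidence confidence = body.HandRightConfidence;

        if (right == HandState.Closed && confidence == TrackingConfidence.High)
        {
            // treat as a reliable "grab"
        }
    }
}
```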

New help patch showcasing the feature can be found at:

VL.Devices.Kinect2\help\Kinect2\General\HowTo Work with Hand data.vl

Huge thanks to Elias for all the patience and help!

As usual, please test and report!

Cheers!


Update regarding the Face nodes: the newest versions of the nuget now ship with the NuiDatabase directory, meaning you no longer need any extra steps for Face recognition to work.

A simple `nuget install VL.Devices.Kinect2 -prerelease` should leave you with a working environment out of the box.

If you tested Face functionality with previous versions, it is recommended to clean up your vvvv packages, removing any Kinect Face directories and/or NuiDatabase directories.

Cheers.

Quick update:

The nodeset now includes FaceHD support (still prone to changes in the near future).

Also, the Skeleton node’s Joint data is now a data type of its own; for an example of how to work with it, see the “Work with Skeleton Data” help patch.
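For those wondering what that Joint data wraps, here is a rough sketch in terms of the official SDK types (the VL type itself may differ):

```csharp
using Microsoft.Kinect;

static class JointSketch
{
    public static void ReadHead(Body body)
    {
        // Position and tracking state per joint.
        Joint head = body.Joints[JointType.Head];
        if (head.TrackingState == TrackingState.Tracked)
        {
            CameraSpacePoint position = head.Position; // meters, camera space
        }

        // Leaf joints such as Head report a zero quaternion here, which
        // is why the Face nodes are the place to get head rotation.
        JointOrientation orientation = body.JointOrientations[JointType.Head];
        var q = orientation.Orientation; // Vector4 quaternion
    }
}
```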

Cheers.


Another little update (version 0.1.44, which requires a recent VL >= 2019.1-0321): internals have been simplified thanks to the new resource nodes, and the point cloud is now allocation free. Check the help patch and see Skia rendering >50K points without any hiccups (finally).
