Hello, I am developing a project using Kinect2 and its gesture detection node. Everything works great. However, it can only detect the gestures of a single user. Besides, if another player walks in front of the Kinect camera, the system selects either player A or player B to be the one who can interact with the application. The selection logic is unknown; the detected player seems to be chosen at random. It does not appear to be related to distance from the Kinect, nor to the order of appearance.
What is the logic for selecting the detected player? And can the Kinect2 gesture node detect more than one player?
Refer to the Kinect2 node: [OBSOLETE, see DX11 Pack] Kinect2 Nodes
Try the Prepose node, which accepts a Tracking ID (get it from the Skeleton node's User Index output; that lets you select which user's gestures will be tracked: closest to the camera, most active, most or least recent). Connect GestureStatus to the Prepose node.
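To make the selection strategies concrete, here is a minimal sketch in plain Python (not vvvv) of how "closest to the camera" and "most recent" selection could pick a tracking ID from per-user skeleton data. The dictionary fields (`tracking_id`, `z`, `first_seen`) are illustrative assumptions, not the actual node output format.

```python
# Illustrative user-selection strategies; skeleton records are hypothetical
# stand-ins for what the Skeleton node's User Index output would provide.

def select_closest(skeletons):
    """Pick the tracking ID of the valid user closest to the camera (smallest z)."""
    tracked = [s for s in skeletons if s["tracking_id"] > 0]
    if not tracked:
        return 0  # no valid user in front of the sensor
    return min(tracked, key=lambda s: s["z"])["tracking_id"]

def select_most_recent(skeletons):
    """Pick the valid user who appeared most recently (largest first-seen time)."""
    tracked = [s for s in skeletons if s["tracking_id"] > 0]
    if not tracked:
        return 0
    return max(tracked, key=lambda s: s["first_seen"])["tracking_id"]

# Two example users: 12 is nearer, 11 walked in later.
users = [
    {"tracking_id": 11, "z": 2.4, "first_seen": 15.5},
    {"tracking_id": 12, "z": 1.1, "first_seen": 10.0},
]
```

Whichever strategy you pick, the point is that the choice is deterministic and under your control, instead of whatever the gesture node happens to latch onto.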
The Gesture node will track the last tracked ID found by the skeleton tracker. Edit and recompile the node if you need different behavior: replace the line where the found ID is assigned with a value from a new input pin accepting the desired skeleton ID. Side note: tracked IDs can be pretty inconsistent; with many people in front of the sensor, a user's ID may change after an occlusion. Maybe you can reintroduce consistency by identifying people by their bone length ratios.
// Walk the skeletons in the last frame and latch onto the last valid tracking ID
for (int i = 0; i < this.lastframe.Length; i++)
{
    found = this.lastframe[i].TrackingId;
    if (found > 0)
        this.vgbFrameSource.TrackingId = found;
}
There are dozens of useful videos for developers using Kinect; I remember one that explains different design approaches to tracked-user selection.
Finally, let me recommend Kinect Studio, either as an application or as nodes. While creating interactions for many users, it's almost inevitable that you record example sessions instead of having six people in front of the camera all the time.
Sorry for the delayed response. I had put this Kinect project down for a while and only gave it another try this week. I followed your instructions but encountered problems:
(1) I downloaded the dx11-vvvv solution, modified it, and compiled a new version of KinectGestureNode.cs. It compiled without problems. However, vvvv cannot load my new node. Is there a step I have missed?
(2) What should I feed into the "pose file" pin of the Prepose node? What is it, and what data does it expect? I read the code in KinectPreposeGestureNode.cs but could not figure it out.
@circuitb that is an interesting paper, surprisingly complex. I guess it can be a deep rabbit hole.
I would like to know whether Kinect skeleton tracking already uses bone lengths to keep track of user IDs, and if so, why it is not very successful at keeping the ID consistent.
Calculating bone lengths from skeleton data is straightforward; classification could be done with the machine learning pack. Maybe even a simple sum of deviations from average bone lengths would be enough to keep the ID consistent.
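A minimal sketch of that "sum of deviations" idea, in plain Python rather than a vvvv plugin: keep one average bone-length profile per known person, then re-identify a new skeleton by the smallest total deviation. The function names, bone list, and the deviation threshold are all illustrative assumptions.

```python
import math

def bone_lengths(joints, bones):
    """Compute each bone's length from 3D joint positions.
    joints: dict of joint name -> (x, y, z); bones: list of (joint_a, joint_b)."""
    return [math.dist(joints[a], joints[b]) for a, b in bones]

def identify(profile_db, lengths, threshold=0.15):
    """Return the known person whose average bone-length profile deviates
    least from the measured lengths, or None if even the best match
    exceeds the threshold (i.e. this is probably a new person)."""
    best_name, best_dev = None, float("inf")
    for name, avg in profile_db.items():
        dev = sum(abs(l - a) for l, a in zip(lengths, avg))
        if dev < best_dev:
            best_name, best_dev = name, dev
    return best_name if best_dev <= threshold else None

# Example profiles: two forearm/upper-arm averages per person, in meters.
profiles = {
    "alice": [0.30, 0.25],
    "bob":   [0.35, 0.29],
}
```

In practice you would update each stored profile with a running average as more frames come in, so noisy single-frame measurements don't dominate; and ratios between bones (rather than absolute lengths) would be more robust to depth-measurement scale error.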