Kinect 2, crowd problems

hi all,

as far as I know, the kinect v2 can track up to 2 skeletons and 6 people. I have this problem where people engage with my installation, then move to the back, and new users are not recognised. It seems the kinect keeps tracking the people in the back until they leave the frame, so I guess this is a hardware limitation.

Could I check the pointcloud for “people” in front of the sensor and then reset the sensor?

cheers

hi @schlonzo, i wonder if what you describe is really the case - that the people in front are not detected. the kinect v2 should track up to six skeletons. does your patch select skeletons explicitly?

a general way to minimize this issue is to dedicate an area for interaction, but often people like to stay in a group and interact as one.

you could maybe try beaming strong IR spots onto areas where you do not want people to be detected - though this could get expensive, as the kinect v2 has robust filtering of ambient light.

resetting is an interesting idea. i do not have data to test this, but maybe resetting the sensor could just mean disabling the depth stream for a short moment. in the past i had similar situations, mainly because i used only one skeleton for interaction and the logic to select the correct skeleton just was not there - i used to swipe my hand in front of the sensor to cover the depth camera for a short while, which usually helped. maybe you can build a mechanical shutter controlled by a servo if nothing else helps.

wow, ok this ir spot idea is interesting. I will give it a try when I have some time.

using the pointcloud or depthmap to do some kind of “background subtraction”, and resetting the sensor when something big appears inside the interaction area, should work.
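A minimal sketch of that idea in Python/NumPy (frame shape, thresholds and the reset criterion are all assumptions, not Kinect API calls - it only shows the decision logic on raw depth frames):

```python
import numpy as np

# Sketch of the "background subtraction" reset logic on depth frames.
# Values are depth in mm; 0 means "no reading" (invalid pixel).

def should_reset(background, frame, interaction_depth_mm=2500,
                 diff_mm=300, min_blob_pixels=5000):
    """True when something big appears inside the interaction area.

    background: depth frame captured while the space was empty
    frame:      current depth frame
    """
    bg = background.astype(np.int32)   # avoid uint16 wrap-around
    fr = frame.astype(np.int32)
    valid = (fr > 0) & (bg > 0)
    # pixels significantly closer than the empty-room background...
    changed = valid & (bg - fr > diff_mm)
    # ...and inside the interaction area in front of the sensor
    inside = changed & (fr < interaction_depth_mm)
    return int(inside.sum()) > min_blob_pixels

# toy frames: empty room at 4000 mm, then a "person" at 1500 mm
empty = np.full((424, 512), 4000, dtype=np.uint16)  # kinect v2 depth resolution
person = empty.copy()
person[100:300, 200:320] = 1500                     # 200 x 120 px blob

print(should_reset(empty, empty))    # False - nothing changed
print(should_reset(empty, person))   # True  - trigger the reset
```

Keeping a few background frames and averaging them would make this less sensitive to depth noise at the far end of the range.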

Hi schlonzo, do you really need skeletons for your interaction? If you could avoid skeleton tracking and use only depth, you would have a bit more freedom in selecting front or rear areas.

I need smile detection and distance.
I’m considering switching to a normal camera.
Are there any good optical smile detectors out there? I tried CV.Image with some haar cascades, but the results were rather messy…

Kinect HDFace is quite robust, and it gives you the state of the user’s face, but it probably does not work over larger distances.
There is a face expression detection sample in the machine learning pack; it works with x86 only and uses FreeFrame. With a high-res camera and appropriate optics, this may give you the results you need. With FreeFrame it will not run in realtime, so you may want to run it in a separate instance.

i recently did a website with the affectiva sdk for emotion recognition.
they also have an sdk for unity (which should be rather easy to port, or you could build a unity app that sends the face info to vvvv).
smile detection works quite well in the web version; i haven’t tested unity.
it does however require a license for commercial applications, afaik

edit: actually, it’s free for open source projects and for companies with less than 1 million in revenue
http://developer.affectiva.com/

@soriak Affectiva looks good. Compared to Kinect HDFace it’s less robust, especially when the face is not frontal; on the other hand it provides plenty of information and the detection runs in realtime even on a FullHD stream.

Just compiled the C# sample; sharing the detection results with VVVV should not be difficult.
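One hypothetical way to get such results into vvvv without writing a plugin is plain UDP, which a UDP (Network Server) node can receive. The host, port and message format below are assumptions - match whatever the patch expects:

```python
import socket

# Sketch: push detection results to a vvvv patch over UDP.
# Address and message format are assumptions, not a fixed protocol.
VVVV_ADDR = ("127.0.0.1", 4444)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def send_detection(smiling, distance_m):
    # simple space-separated ASCII, easy to split in the patch
    msg = f"{int(smiling)} {distance_m:.2f}"
    sock.sendto(msg.encode("ascii"), VVVV_ADDR)

send_detection(True, 1.85)   # sends "1 1.85"
```

Since UDP is fire-and-forget, the sender keeps running even when the patch is not listening yet, which is convenient during development.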

I’ve also been prototyping VVVV plugins for the Vision API from Microsoft, aka Project Oxford. It’s interesting, but cloud-based - slow and potentially expensive for realtime interactive projects.

This topic was automatically closed 365 days after the last reply. New replies are no longer allowed.