Tracking people in a large space

So what's the latest in tracking people? It's a large area, so no Kinect; it's night time, so it needs to be IR-based. Is there an ML solution? It's been a while since I last did tracking for anything more than hit areas!

Hey,
definitely this:


You can track up to 15 people,
but you will need dual 1080 Tis for 18 fps at full resolution :)

I've managed to lower the settings and track 6-8 people at 12 fps with a 1070.
Lighting conditions for the camera are important and can impact both the tracking and the performance!


Ah, those are more skeleton tracking. I'm looking for just the positions of people in a space, kind of like CCTV: centroids of people.

Never tried it, but there is this:
http://openptrack.org/

with a related thread here: UDP connection to OpenPTrack server
(note, though, that the patch I posted there will not work with recent alphas)
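On the receiving side it's just UDP plus JSON parsing, so a quick test outside vvvv is easy. A minimal Python sketch; the port and the "tracks"/"id"/"x"/"y" field names are assumptions, so check the actual packets your OpenPTrack server emits:

```python
# Minimal listener for OpenPTrack-style JSON tracks over UDP.
# The port and message layout below are assumptions; inspect a real
# packet from your server and adjust the field names to match.
import json
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 21234))  # assumed port

while True:
    data, _addr = sock.recvfrom(65535)
    msg = json.loads(data.decode("utf-8"))
    for track in msg.get("tracks", []):
        # one entry per tracked person; x/y as ground-plane coordinates
        print(track.get("id"), track.get("x"), track.get("y"))
```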

I think you need to use 3D sensors for that. I need to cover a large space, approx. 20m x 20m, and there's not much scope for multiple cameras. From what I can find, OpenCV looks like the main way. I just thought society's paranoia would have come up with a better tracker by now; where's my CSI, dammit!

The “about” page says “person detection from RGB/infrared/depth images”.

Is this related?
Someone asked the same thing a couple of days ago on Facebook:

Ah, no, separate enquiry! I'll have a look on Crackbook… (ah, it's not in the vvvv group page)
Looks like the image pack has a DetectPedestrian node, but it crashes (hangs) vvvv as soon as I connect it or open the help patch.
OK, I've got it running now. The missing images in the help patch made it crash, but even once I gave it a valid input it took many minutes to initialise. Testing continues…
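For reference, pedestrian-detection nodes of that era usually wrap OpenCV's stock HOG + linear-SVM people detector. A rough Python sketch of the same idea, with the webcam source and parameters as placeholders:

```python
# OpenCV's built-in HOG people detector: boxes around pedestrians,
# from which you can take centroids. Robust-ish but not fast.
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

cap = cv2.VideoCapture(0)  # placeholder input; use your IR camera feed
while True:
    ok, frame = cap.read()
    if not ok:
        break
    boxes, _weights = hog.detectMultiScale(frame, winStride=(8, 8))
    for (x, y, w, h) in boxes:
        cx, cy = x + w // 2, y + h // 2  # centroid of each detection box
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.circle(frame, (cx, cy), 4, (0, 0, 255), -1)
    cv2.imshow("pedestrians", frame)
    if cv2.waitKey(1) == 27:  # Esc quits
        break
cap.release()
```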

Ah no, it's that real-time-generative-art-whatever-buzzword group.


how large is the area you want to observe?

I wouldn't be surprised if there are more affordable LIDAR solutions now, thanks to self-driving cars. I've seen a sub-$300 LIDAR recently, but that one only had about 5m range.

ML has some much faster detection and classification algorithms than Haar detection (e.g. check out videos of Darknet). But if you just want blobs, then contour tracking will be faster (if you have the data).
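In (Python) OpenCV terms the blob route is just background subtraction plus contour centroids. A minimal sketch, assuming a static camera; the file name and thresholds are placeholders to tune per scene:

```python
# Blob tracking via background subtraction: foreground mask -> contours
# -> centroids. Cheap, but noisy in anything but stable lighting.
import cv2

cap = cv2.VideoCapture("people.mp4")  # placeholder source
bg = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = bg.apply(frame)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)  # remove speckle
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) < 500:  # placeholder noise threshold
            continue
        m = cv2.moments(c)
        cx, cy = int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])
        print("blob centroid:", cx, cy)
```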

Or just wait until everybody is in VR?

How big an area? I use multiple Kinect2s to cover larger areas. The K2 gives depth data out to 8m. By combining the pointclouds and then doing grouping, you can track an arbitrary number of people. Place them a bit above people's heads and you get good occlusion avoidance.
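A rough sketch of that grouping step, assuming the per-Kinect clouds are already merged into one world-space array; the DBSCAN parameters are guesses to tune per scene:

```python
# Group a merged pointcloud into people and take one centroid per group.
import numpy as np
from sklearn.cluster import DBSCAN

def people_centroids(points: np.ndarray) -> list:
    """points: (N, 3) merged Kinect points in a shared world frame."""
    labels = DBSCAN(eps=0.3, min_samples=50).fit_predict(points)
    return [points[labels == lbl].mean(axis=0)
            for lbl in set(labels) - {-1}]  # -1 is DBSCAN's noise label

# Quick test: two fake "people" as point blobs 2m apart.
rng = np.random.default_rng(0)
cloud = np.vstack([rng.normal([0, 0, 1], 0.1, (500, 3)),
                   rng.normal([2, 0, 1], 0.1, (500, 3))])
print(people_centroids(cloud))  # ~one centroid near each blob
```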

I've just started using ZED stereo-vision cameras on a new project, as they work out much farther (they claim 20m), but being vision-based they need light. I am going to ask them about an IR version for other stuff, though. I also talked to some industrial-camera folks about that, but their software is not as good as the ZED's.


ooh ooh

Also, I developed a system called MultiTrack which can stream multiple Kinect2s into one computer with a combined mesh, skeleton tracking, colour images, etc.
It's all open source and really nifty, but still a bit tricky to set up.
Let me know if you need it!


Hey Elliot, I'd love to see that. I've got an OpenNI2 plugin working with libfreenect2 that works great with four K2s on one PC (maybe more, I haven't tried). I'm just doing pointcloud combining, so it would be great to add the other features you are doing, and/or add the multi-K2 support to yours. I've almost got it to contrib-readiness… Being OpenNI2, it also handles multiple Orbbecs and Xtion2s, etc.


For the next pitch from a client requesting AR/VR stuff, I'll say this to them: “can we actually wait until tapping into the visual nerves is a common thing to do? It would be so much simpler than what technology can currently offer.”

Elliot, that does sound interesting, but Kinects aren't good for this job: I can't have poles, it's outside, and the area is too big. It has to be 2D tracking, really. I keep seeing all these CCTV videos with boxes around faces/bodies in crowds; I want that! Contour tracking is always quite noisy in anything less than a perfect environment, unless you're doing a motion-based track.

With dx11.particles it is really easy to align many Kinect2s and stream their data to one server. There are girlpower & help patches for the networking and calibration stuff, so all you have to do is a bit of copy/paste to build a working setup.

If you only want to stream user centroids or blobs, there is practically no limit to the number of Kinects you can use. If you want to stream (unfiltered) depth images, you are limited by your network bandwidth.
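To make the bandwidth point concrete: a raw Kinect2 depth frame is 512x424 pixels at 16 bits, roughly 434 KB, i.e. about 13 MB/s per sensor at 30 fps, while a centroid is just three floats. A hypothetical sender in Python; the server address and JSON layout are made up:

```python
# Stream person centroids over UDP: a few dozen bytes per frame,
# versus ~434 KB per frame for a raw 512x424 16-bit depth image.
import json
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
SERVER = ("192.168.0.10", 9000)  # hypothetical server address/port

def send_centroids(centroids):
    """centroids: list of (x, y, z) tuples in a shared world frame."""
    payload = json.dumps({"people": [list(c) for c in centroids]})
    sock.sendto(payload.encode("utf-8"), SERVER)

send_centroids([(0.1, 0.0, 1.2), (2.3, 0.0, 0.9)])
```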
