I’m doing an art installation where a camera from above tracks people that are walking. Each person will represent a “blob”. Things will happen when two people touch (when their blobs touch); for example, the floor might turn blue when two people’s blobs touch, etc.
My main problem is getting Contour to remember individual blobs, so that when two blobs touch, Contour still sees the combined blob as two separate joined blobs rather than one single new blob.
Any thoughts on how to go about this? I’d prefer not to attach LED trackers on the people. I’ve also dabbled a little bit with Jitter. Any suggestions welcome!
TSPS might be useful here… it’s by James George. There’s also an example of using it via OSC in vvvv.
The easiest solution is using age and position.
If two older contours (higher age value) disappear where a new contour (low age) appears, then this new contour is a merge of the older ones… however, it’s definitely a calibration and tweaking thing…
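That age-and-position heuristic can be sketched in a few lines. This is a minimal, hypothetical example, not TSPS or Contour code: it assumes you’ve already parsed each frame’s blobs into objects with an `id`, a centroid, and an `age` (the `Blob` fields and the `max_dist`/`min_age` thresholds are assumptions you’d tune for your camera setup).

```python
from dataclasses import dataclass

@dataclass
class Blob:
    id: int
    x: float       # centroid position in pixels
    y: float
    age: int       # number of frames this blob has been tracked

def detect_merges(previous, current, max_dist=60.0, min_age=10):
    """Return a list of (new_blob, [old_blobs]) pairs where a brand-new
    blob appears near two or more long-lived blobs that just vanished —
    the heuristic for 'two people touched and became one contour'."""
    curr_ids = {b.id for b in current}
    prev_ids = {b.id for b in previous}
    # Long-lived blobs that disappeared this frame
    disappeared = [b for b in previous
                   if b.id not in curr_ids and b.age >= min_age]
    merges = []
    for nb in (b for b in current if b.id not in prev_ids and b.age <= 1):
        near = [ob for ob in disappeared
                if ((ob.x - nb.x) ** 2 + (ob.y - nb.y) ** 2) ** 0.5 <= max_dist]
        if len(near) >= 2:
            merges.append((nb, near))
    return merges
```

When `detect_merges` fires, you can keep treating the new combined contour as “persons 1 and 2, joined” and trigger your floor effect; when it later splits back into two young blobs near the merged one, the same idea in reverse re-assigns the old identities.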
Hey princemio, thanks for the suggestion of TSPS. It’s a really awesome piece of software.
I’ve been playing around with TSPS a little, and wouldn’t calibrating the maximum blob size make more sense? In other words, decrease the maximum blob size just enough so that it doesn’t detect a bigger blob (made of two smaller ones).
Hmm, not sure… wouldn’t this lead to you losing the big blob?
That’s what I want: to lose the big blob.