I'm posting again on the same topic since I'm pretty much stuck and could really use some help.
The installation has a camera on top, and I'm using tbeta to get x,y coordinates for every person walking through the room.
The problem is that I can't fine-tune the camera settings to get one single x,y value for each person. Instead I'm getting two or more blob positions per person (let's call this a point cloud). How can I get one single x,y value for every point cloud, based on a distance threshold?
I patched something with the Intersect (Spreads Sets) node, but it only works correctly for up to two point clouds.
Hope I'm making sense. Please help (here is the patch).
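For reference, the merging logic can be sketched outside vvvv. This is a minimal Python sketch (the function name, threshold value and blob coordinates are all made up for illustration): it links every pair of blobs closer than the threshold (single-linkage, via a small union-find) and returns the averaged x,y of each resulting cluster.

```python
import math

def cluster_blobs(points, threshold):
    """Merge blob positions that lie within `threshold` of each other
    (single-linkage) and return one averaged x,y per cluster."""
    n = len(points)
    parent = list(range(n))               # union-find forest

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    def union(i, j):
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj

    # link every pair of points closer than the threshold
    for i in range(n):
        for j in range(i + 1, n):
            (x1, y1), (x2, y2) = points[i], points[j]
            if math.hypot(x2 - x1, y2 - y1) < threshold:
                union(i, j)

    # average each cluster into a single x,y value
    clusters = {}
    for i, (x, y) in enumerate(points):
        clusters.setdefault(find(i), []).append((x, y))
    return [(sum(x for x, _ in pts) / len(pts),
             sum(y for _, y in pts) / len(pts))
            for pts in clusters.values()]

# hypothetical data: two people, each seen as a small cloud of blobs
blobs = [(0.10, 0.10), (0.12, 0.11), (0.11, 0.09),   # person A
         (0.80, 0.50), (0.82, 0.52)]                 # person B
print(cluster_blobs(blobs, threshold=0.05))
```

With a well-chosen threshold each person's cloud collapses to one point; the same idea could be patched as a pairwise distance spread followed by a merge-and-average pass.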
quite a hard nut. here's another patching approach:
it calculates the distances between all pairs of points; if a distance is smaller than the threshold, the two points get merged and averaged.
the problem is a line of points, where the first and the last don't fit within the threshold… you'll see.
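The failure described above is the classic "chaining" effect of merging pairwise: every neighbouring pair fits the threshold, so the whole line gets merged into one cluster, even though the two endpoints are farther apart than the threshold. A tiny Python illustration (the coordinates and threshold are made up):

```python
import math

# three blobs in a line: each neighbouring pair is 0.04 apart (inside the
# 0.05 threshold), but the two endpoints are 0.08 apart (outside it)
chain = [(0.10, 0.10), (0.14, 0.10), (0.18, 0.10)]
threshold = 0.05

d_neighbour = math.hypot(chain[1][0] - chain[0][0], chain[1][1] - chain[0][1])
d_endpoints = math.hypot(chain[2][0] - chain[0][0], chain[2][1] - chain[0][1])

print(d_neighbour < threshold)   # pairwise check links the neighbours
print(d_endpoints < threshold)   # yet the endpoints don't fit the threshold
```

A stricter rule, merging a group only when *every* pair in it fits the threshold (complete linkage), would avoid the chaining, but it tends to split a single person's cloud instead; which trade-off is right depends on how spread out one person's blobs are.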
wow woei. it would take me a month to patch that.
Indeed it acts a little weird on some occasions, but I still have to figure out exactly how it works. I'm starting to think that maybe a Unify (2d) node (or something like it) would be useful in tracking situations:
for example, a multitouch setup where you have one type of interactivity for the fingers and another for the whole hand.
anyway thanks a lot you guys for your time and efforts.