Hope someone can help me, because what I have in hand feels like rocket science ;) :) :|
Not so easy, I fear… I'm trying to design an interactive 2D video projection in which people are detected with the Contour freeframe and points float in empty space.
When a floating point hits a detected contour it changes direction, somewhat like in patternpong. The difference here is that I have several balls and several "pads".
I had a look at patternpong's patches and found that I can send the quads' X+Y values and the contour points' X+Y values as spreads into two different Points2Vector modules (into the X-Y and X2-Y2 pins respectively) to detect when a hit occurs: whenever any value of the spread coming from the Length pin is 0. BUT how can I find out which of the points hit a contour point, so that I can change just that one's direction?
This is the first problem I need to solve before I can go ahead…
I look forward to any suggestions for realizing this real/virtual interactive environment that's whispering in my head…
I'm quite sure it's possible, because I guess something based on the same principle was done in the MKH Future Ocean Exhibition pond installation…
I think the key is making sure that every floating object gets checked against every contour point. So if you have a spread of e.g. the X coordinates of your objects, and a spread of the X coordinates of the contour, use a Cross node to get a spread with all combinations of both. Then check the distances and accumulate the resulting forces on the objects with one of the spectral nodes (like Bounds (Spectral), + (Spectral) etc.). Set the bin size equal to the Count of the contour spread, so that you get one resulting force per floating object.
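The cross + spectral idea can be sketched in plain code like this (the positions and the hit radius are made up for illustration; the "spectral" step here reduces each object's bin of distances to its minimum):

```python
import math

# Hypothetical example data: positions of floating objects and contour points.
objects = [(0.1, 0.2), (0.5, 0.5), (0.9, 0.1)]
contour = [(0.5, 0.48), (0.52, 0.5), (0.0, 0.0)]

HIT_RADIUS = 0.05  # distance below which we count a hit

# "Cross": every object paired with every contour point,
# giving len(objects) * len(contour) combinations.
pairs = [(o, c) for o in objects for c in contour]

# Distance for each combination.
dists = [math.hypot(o[0] - c[0], o[1] - c[1]) for o, c in pairs]

# "Spectral" step: with bin size = len(contour), reduce each object's
# bin of distances down to one value, here the minimum distance.
bin_size = len(contour)
min_dists = [min(dists[i:i + bin_size]) for i in range(0, len(dists), bin_size)]

# An object is hit if some contour point is closer than HIT_RADIUS;
# the index in this list tells you exactly which ball to redirect.
hit = [d < HIT_RADIUS for d in min_dists]
print(hit)  # → [False, True, False]
```

Because the bins are ordered like the object spread, the index of each `True` directly identifies which ball hit a contour, which answers the "which one do I redirect" question.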
This is funny, we had nearly the same idea for an installation for a party which is in three weeks. We would like to build a real pong game that can be played with the body. We are going to use ODE physics for this task, because it does all the math out of the box… We have just one ball at the moment, but you can easily have as many balls as you like, since the engine does everything for you.
I already tested how the physics behaves and it looks promising for our purposes. I'll post the patches once they are developed a bit further.
For this installation we once used a combination of
- ODE for collisions of the ball with static objects like balconies, floor, elevator etc.
- and a handmade tracker for the collisions with the shadows of the people.
Both algorithms could influence the position of the ball, but the latter is probably the more interesting part. We didn't use a freeframe filter for this purpose. First, we projected the camera picture onto a grid from the same perspective as the real camera looking at the real shadows, to minimize distortion and to be able to match the ball position with the 2D shadows. Then we rendered this, probably with some contrast & brightness corrections (probably via a simple pixel shader), into a texture.
Now the interesting part: we sampled this texture 9 times around the point where we assumed the ball to be (Pipet node). Depending on these 9 colors, joreg patched a bounce behaviour. So the interesting bit is that you only need to know the area around a ball to determine if and where it is bouncing to.
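The 9-sample bounce idea could look roughly like this (a sketch, assuming a grayscale shadow mask where 1.0 means obstacle; the actual patch logic was different in detail):

```python
# Hypothetical 3x3 neighbourhood sampled around the assumed ball position
# (like 9 Pipet lookups): 1.0 = shadow/obstacle, 0.0 = free space.
samples = [
    [0.0, 0.0, 0.0],
    [1.0, 1.0, 0.0],
    [1.0, 1.0, 0.0],
]

# Estimate a bounce normal by summing directions that point away
# from the occupied neighbours in the 3x3 grid.
nx = ny = 0.0
for row in range(3):
    for col in range(3):
        dx, dy = col - 1, row - 1     # offset of this sample from the centre
        nx -= samples[row][col] * dx  # push away from occupied samples
        ny -= samples[row][col] * dy

length = (nx * nx + ny * ny) ** 0.5
if length > 0:
    normal = (nx / length, ny / length)  # direction to bounce the ball towards
```

With a normal like this you can reflect the ball's velocity (v − 2(v·n)n) whenever the centre sample or the summed occupancy crosses some threshold.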
Now in your case it could be clever to do all of that in a pixel shader again. I'm not sure, but I think it really could work out. In the end you would just need to sample into a texture and directly get a bounce-off direction or new velocity vector encoded in the texture.
I really think it should be possible to do those calculations, which we once patched based on the 9 colors around the ball position, in a pixel shader. As a result you would calculate the bounce-off velocity or the 2D normal of the moving shapes for each pixel and store that vector in the pixel colors of a texture. After that, only a few texture lookups & calculations would have to be done in the patch / on the CPU. With an approach like this I think you could get a high number of balls with still good performance…
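To make the per-pixel version concrete, here is the same neighbourhood trick run over a whole (tiny, made-up) shadow mask, encoding each normal into color channels the way a pixel shader would write them out:

```python
# Sketch: estimate a 2D normal for every pixel of a small shadow mask
# and encode it into a "texture" as colours in [0, 1] (n * 0.5 + 0.5),
# the way a pixel shader would store it.
W, H = 4, 4
# left half of the image is shadow (1.0), right half is free (0.0)
shadow = [[1.0 if x < 2 else 0.0 for x in range(W)] for y in range(H)]

def sample(x, y):
    # clamp-to-edge lookup, like a texture sampler
    return shadow[min(max(y, 0), H - 1)][min(max(x, 0), W - 1)]

normal_tex = [[None] * W for _ in range(H)]
for y in range(H):
    for x in range(W):
        nx = ny = 0.0
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                occ = sample(x + dx, y + dy)
                nx -= occ * dx  # push away from occupied neighbours
                ny -= occ * dy
        l = (nx * nx + ny * ny) ** 0.5 or 1.0  # avoid division by zero
        normal_tex[y][x] = (nx / l * 0.5 + 0.5, ny / l * 0.5 + 0.5)

# The patch/CPU side then only needs one lookup per ball:
r, g = normal_tex[1][2]
nx, ny = r * 2 - 1, g * 2 - 1  # decode the colour back into a direction
```

Here the pixel just right of the shadow edge decodes to a normal pointing in +x, i.e. away from the shadow, so each ball only costs one texture lookup per frame no matter how many balls there are.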
Last but not least: sure, you may need something to convert moving people into shadow-like shapes, and for that still another freeframe or pixel shader could be necessary.
Ah, and also: if you want to react to the velocity of the people's movement, it could be necessary to write a pixel shader that looks at the last 2-5 camera frames. But maybe that isn't necessary, because it wouldn't match the pong-style physics behaviour?