I have two trautner patches running in the one patch, with the video coming from the VideoIn node (video + preview).
There are two problems.
I'm using the two trautners to track an x+y position for the mouse.
When I fullscreen my actual render window and come out of it, the trautner stops working! Does anyone know why?
Another thing is that when I start it again, by changing to a different driver and then back again, the two mask images that are loaded into the trautners are the same, i.e. they don't retain the original images.
This happens when I open the trautner patch on its own as well.
Anyone have any ideas how I can fix these two problems? Thanks!
When I fullscreen my actual render window and come out of it, the trautner stops working! Does anyone know why?
That happened to me when my computer is too close to the action area and the hold background is on 1, so when I go fullscreen the background changes. I solved this by using the keyboard to toggle the hold background. That might not be your case.
Attached is my patch. I want to get high-resolution hand position tracking on x+y, but as you probably know, trautner only takes 256 "shades" of grey, leaving me a resolution of only 16x16.
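Just to spell out the arithmetic behind that: with one grey shade per grid cell, 256 values can only encode 16x16 = 256 positions. A tiny hypothetical C++ decode of such a mask index (purely illustrative, not from the actual patch or the Trautner source):

```cpp
#include <cstdio>

int main()
{
    // hypothetical: the mask encodes cell index = y * 16 + x as a grey value 0..255
    int hitIndex = 137;        // example grey value reported for the hit region
    int x = hitIndex % 16;     // column in the 16x16 grid
    int y = hitIndex / 16;     // row in the 16x16 grid
    std::printf("grey %d -> cell (%d, %d)\n", hitIndex, x, y);
    return 0;
}
```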
As you can see in my patch, on load both trautners load the same masks… strange.
I’m open to other suggestions on how I can do hand tracking.
I've looked into using contour tracking but I can't seem to get good results, because my interaction area is 6 foot x 6 foot, resulting in the user moving around a lot.
Probably the best way to go for high-resolution hand tracking would be to modify the trautner freeframe code and recompile it with new values (like more detection areas).
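Just to illustrate the idea (this is not the actual Trautner source, only a sketch of what "more detection areas" could look like in a freeframe-style processing loop, with made-up names):

```cpp
#include <cstdint>
#include <cstdlib>
#include <vector>

const int NUM_REGIONS = 1024;  // hypothetical: raise this beyond the 256 grey levels and recompile

// mask: one region index per pixel; frame/background: 8-bit greyscale images of equal size
std::vector<int> countHitsPerRegion(const std::vector<uint16_t>& mask,
                                    const std::vector<uint8_t>& frame,
                                    const std::vector<uint8_t>& background,
                                    int threshold)
{
    std::vector<int> hits(NUM_REGIONS, 0);
    for (size_t i = 0; i < frame.size(); ++i)
    {
        int diff = std::abs(int(frame[i]) - int(background[i]));
        if (diff > threshold && mask[i] < NUM_REGIONS)
            ++hits[mask[i]];   // pixel changed enough: credit its detection area
    }
    return hits;               // per-region activity, analogous to Trautner's output pins
}
```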
I wouldn't use the preview pin of the VideoIn node, as it's for preview only. It drops a lot of frames, and if there are high computing needs it can stop completely.
Why don't you use Contour tracking instead? You get the centre of the hand and you will have better resolution.
Finding a good algorithm and coding it as a freeframe plugin in C or C++ would be the way to go here if you want to have PS3-class tracking.
If you don't want to go for a freeframe plugin, a good approach for improving the tracking is usually to use a shader to prepare (undistort, threshold) the camera image. Particularly useful might be preparing a Queue of textures with video frames and then using a shader with, say, 6 texture inputs to colorize only the pixels with many changes in the last 6 video frames (e.g. outputting something like the maximum minus the minimum colour in those 6 frames).
With a preprocessed image like this it should be possible to use Pipet instead of misusing Trautner to work like a 16x16 Pipet.
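In vvvv that preprocessing would be an HLSL shader fed from a Queue of textures; the following C++ loop is only a sketch of the per-pixel logic it would implement (max minus min over the last frames), assuming 8-bit greyscale frames:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// lastFrames: the most recent frames (e.g. 6), all greyscale and the same size
std::vector<uint8_t> changeImage(const std::vector<std::vector<uint8_t>>& lastFrames)
{
    const size_t numPixels = lastFrames[0].size();
    std::vector<uint8_t> out(numPixels, 0);
    for (size_t i = 0; i < numPixels; ++i)
    {
        uint8_t lo = 255, hi = 0;
        for (const auto& frame : lastFrames)
        {
            lo = std::min(lo, frame[i]);
            hi = std::max(hi, frame[i]);
        }
        out[i] = hi - lo;   // bright only where the pixel changed a lot recently
    }
    return out;             // feed something like this to Pipet to find the moving hand
}
```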
Here is a patch I did for myself to check tracking with shaders and Pipet. It's not very far along yet and has some inaccuracies at the borders, but it could give you an idea of how to go.
For getting the foreground out of your image there are many ways, and it strongly depends on your surroundings and purposes.
Oschatz's suggestion is good for changing light conditions, but you will get problems if the hand is not moving; all frame-difference approaches have this problem.
For a static background with not too much light change, you get good results with a BackgroundSubtraction shader.
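The per-pixel idea behind that, as a minimal C++ sketch rather than the actual shader, assuming a background frame grabbed while the scene is empty:

```cpp
#include <cstdint>
#include <cstdlib>
#include <vector>

// frame and background are 8-bit greyscale images of equal size
std::vector<uint8_t> subtractBackground(const std::vector<uint8_t>& frame,
                                        const std::vector<uint8_t>& background,
                                        int threshold)
{
    std::vector<uint8_t> foreground(frame.size(), 0);
    for (size_t i = 0; i < frame.size(); ++i)
        if (std::abs(int(frame[i]) - int(background[i])) > threshold)
            foreground[i] = 255;   // differs enough from the empty scene: foreground (the hand)
    return foreground;
}
```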
At the moment I'm working on an adaptive background subtraction for changing lighting conditions, but it's not finished yet. It needs some time, but I will post and upload it when I'm ready.
Edit: actually this adaptive background works like oschatz said. To the minimum and maximum, add the maximum difference over the last frames, and let it restart whenever there is no movement. I just read "last 6 frames" and thought it was frame-difference stuff. I'm writing faster than my mind…
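My own rough reading of that description as code (not the finished shader, just a sketch: keep a per-pixel min/max envelope, widen it by a margin standing in for the maximum recent difference, and restart it whenever nothing moves):

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

struct AdaptiveBackground
{
    std::vector<uint8_t> lo, hi;   // per-pixel brightness envelope of the background
    int margin = 8;                // widening of the envelope, standing in for the max recent difference

    void restart(const std::vector<uint8_t>& frame)   // call whenever no movement is detected
    {
        lo = frame;
        hi = frame;
    }

    std::vector<uint8_t> classify(const std::vector<uint8_t>& frame)
    {
        std::vector<uint8_t> foreground(frame.size(), 0);
        for (size_t i = 0; i < frame.size(); ++i)
        {
            if (int(frame[i]) < int(lo[i]) - margin || int(frame[i]) > int(hi[i]) + margin)
                foreground[i] = 255;                 // outside the widened envelope: foreground
            else
            {
                lo[i] = std::min(lo[i], frame[i]);   // inside: let the envelope absorb slow light changes
                hi[i] = std::max(hi[i], frame[i]);
            }
        }
        return foreground;
    }
};
```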
Thanks for the reply. I don't really get what your patch is showing me. I see that you can move the point around and the green quad follows, and I see everything going into the Pipet.