Make your blob tracking faster

Hello. While I was searching for camera tracking, I had an idea: why don't we do some prior image processing and then track the processed image? The rationale is that with, say, a 320x240 capture, the computer has to deal with 76,800 pixels every 0.033 seconds. I know there are algorithms like color and area thresholding, but the CPU still has to process all those pixels and then decide which lie inside the threshold and which do not. With prior image processing we can lower that to around 1280 pixels, which I believe is enough for blob tracking.

Anyway, my suggestion is this: Capture Image -> Median -> Pixelate -> the usual blob tracking, etc.
Here are the reference images.

Great software, btw. I hope DX10 gets supported for realtime ambient occlusion and soft shadows…

Oops, the image didn't load… here it is.

images.jpg (51.2 kB)

Because I think this is a good idea, I want to patch it and share the results with you. However, I don't know how to process images in VVVV. Are there any libraries or modules for this stuff? (The median and pixelate above were done in Photoshop.)

hello buraque,

There are two ways to prepare your video image; both are good for certain setups:

  • the cpu way:
    write a freeframe videofilter that does all your desired operations and put it between VideoIn and the tracker node (there is a rough sketch of this below the list).

  • the gpu way:
    if for some reason you already have your video as a texture before you do the tracking, you may want to just do some pixel shading to achieve the preprocessing. I have prepared a little effect that does some morphological filters here
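
To give you an idea of the cpu way, here is a rough sketch of the kind of per-frame loop I mean: a 3x3 median followed by block averaging (pixelate) on a raw frame buffer. It is only a sketch under assumptions: the packed 24-bit BGR layout and all function names are made up for illustration, not the actual FreeFrame SDK calls.

```cpp
// Sketch only: median + pixelate on a packed 24-bit BGR frame, the kind of
// buffer a FreeFrame-style videofilter hands you (layout/names are assumptions).
#include <algorithm>
#include <cstdint>
#include <vector>

// 3x3 median filter per channel; writes the result into 'out' (same size as 'in').
void median3x3(const std::vector<uint8_t>& in, std::vector<uint8_t>& out,
               int width, int height)
{
    out = in; // border pixels are left unchanged
    for (int y = 1; y < height - 1; ++y)
        for (int x = 1; x < width - 1; ++x)
            for (int c = 0; c < 3; ++c) {
                uint8_t window[9];
                int k = 0;
                for (int dy = -1; dy <= 1; ++dy)
                    for (int dx = -1; dx <= 1; ++dx)
                        window[k++] = in[((y + dy) * width + (x + dx)) * 3 + c];
                std::nth_element(window, window + 4, window + 9); // median of 9
                out[(y * width + x) * 3 + c] = window[4];
            }
}

// Pixelate: average each block x block cell and return only the small grid
// (one BGR triple per cell) - this is what the tracker would then work on.
std::vector<uint8_t> pixelate(const std::vector<uint8_t>& in,
                              int width, int height, int block)
{
    const int gw = width / block, gh = height / block;
    std::vector<uint8_t> grid(gw * gh * 3);
    for (int gy = 0; gy < gh; ++gy)
        for (int gx = 0; gx < gw; ++gx)
            for (int c = 0; c < 3; ++c) {
                int sum = 0;
                for (int y = 0; y < block; ++y)
                    for (int x = 0; x < block; ++x)
                        sum += in[((gy * block + y) * width + gx * block + x) * 3 + c];
                grid[(gy * gw + gx) * 3 + c] = uint8_t(sum / (block * block));
            }
    return grid; // e.g. 320x240 with block = 8 -> 40x30 = 1200 cells
}

int main()
{
    // dummy 320x240 frame standing in for a real VideoIn capture
    std::vector<uint8_t> frame(320 * 240 * 3, 128), filtered;
    median3x3(frame, filtered, 320, 240);
    std::vector<uint8_t> grid = pixelate(filtered, 320, 240, 8);
    return grid.empty() ? 1 : 0;
}
```

The small grid that comes out of pixelate() is the reduced image the tracker would see instead of the full-resolution frame.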

Thanks jorge. Great software, great community. I think I will go with the shader approach, as it is easier to learn HLSL than C++ (I hate it :) Also, NVIDIA has a great shader library, so we don't have to reinvent the wheel. I will post my results and we will all see if it is a good idea or just cr…

Today I realized that it will take me a long time to learn shader programming and try these ideas. I would like to know if anyone is interested in writing a shader and patching a blob tracker for me, for a fee. I can pay through PayPal; please let me know how much you would like. If the algorithm proves successful, I will still share it with the community.

Note: what I want is a shader or a FreeFrame plug-in that does some prior image processing (median + pixelate), plus a patch that takes the processed image feed, locates a blob based on the number and the color average of pixels, and decides the motion of this blob based on the image difference.
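
To make this more concrete, here is a rough sketch of the kind of logic I have in mind for the tracking part, working on the small pixelated grid. The grid size, the color threshold and all names are just assumptions for illustration, and reading the motion from the frame-to-frame change of the blob centroid is only one simple way to use the image difference.

```cpp
// Sketch only: on the small pixelated grid, keep the cells whose color is close
// to a target color, take their count and centroid, and derive the motion from
// the difference to the previous frame's centroid. All names are hypothetical.
#include <cstdint>
#include <cstdlib>
#include <vector>

struct Blob { float x = 0, y = 0; int cells = 0; };

// grid is packed BGR, gw x gh cells; (tr, tg, tb) is the target color
Blob locateBlob(const std::vector<uint8_t>& grid, int gw, int gh,
                uint8_t tr, uint8_t tg, uint8_t tb, int maxDist)
{
    Blob b;
    for (int y = 0; y < gh; ++y)
        for (int x = 0; x < gw; ++x) {
            const uint8_t* p = &grid[(y * gw + x) * 3];
            int d = std::abs(p[2] - tr) + std::abs(p[1] - tg) + std::abs(p[0] - tb);
            if (d < maxDist) {            // cell color is close enough to the target
                b.x += x; b.y += y; ++b.cells;
            }
        }
    if (b.cells > 0) { b.x /= b.cells; b.y /= b.cells; }
    return b;
}

int main()
{
    // two dummy 40x30 grids standing in for consecutive pixelated frames
    std::vector<uint8_t> prevGrid(40 * 30 * 3, 0), currGrid(40 * 30 * 3, 0);
    Blob prev = locateBlob(prevGrid, 40, 30, 255, 0, 0, 60);
    Blob curr = locateBlob(currGrid, 40, 30, 255, 0, 0, 60);
    // "motion" of the blob = how its centroid moved between the two frames
    float dx = curr.x - prev.x, dy = curr.y - prev.y;
    return int(dx + dy);
}
```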

Are you sure the median -> pixelate steps will give you better results than simply capturing at a lower resolution?

Hi max. Yours was a good idea as well, and I tried it. Unfortunately, there are still too many pixels to compute. My solution gives the CPU 60 times fewer pixels to deal with, so it should be substantially faster or more precise. Of course, I realize that I would have to change the camshifttracker a bit, or even patch my own blob tracker, so that the CPU treats those big pixels as single pixels and does not process the small real-size pixels inside them.

Anyway, I have another idea so I don't have to deal with shaders for now :) Are there any modules or patches you guys have written so that I can reach the video pixels and manipulate them as I like? So far I guess my only two options for dealing with video pixels within VVVV are: one, pixel shaders; and two, the FreeFrame source code.
I mean, is there a way to get at my pixels as in Processing?

Well, you can just use the FF sample code from here. It should compile immediately with the Code::Blocks IDE. You will find the pixels in the code; from there on, it's pretty much the same as in Java…
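
Roughly, once the sample code hands you the frame pointer, reaching the pixels looks like this. The processFrame name and the packed 24-bit BGR layout are assumptions for illustration, not the exact API of the sample:

```cpp
// Sketch only: the FF sample has its own entry point, but the idea is the same -
// you get a pointer to the pixel buffer and loop over it, just like pixels[]
// in Processing.
#include <cstdint>
#include <vector>

void processFrame(uint8_t* pixels, int width, int height)
{
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x) {
            uint8_t* p = pixels + (y * width + x) * 3;  // packed 24-bit BGR (assumption)
            // read, change, write back - same idea as in Processing/Java
            uint8_t gray = uint8_t((p[0] + p[1] + p[2]) / 3);
            p[0] = p[1] = p[2] = gray;                  // e.g. turn the frame grayscale
        }
}

int main()
{
    std::vector<uint8_t> frame(320 * 240 * 3, 0); // dummy frame instead of a real capture
    processFrame(frame.data(), 320, 240);
    return 0;
}
```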