I have been trying to use blob tracking with cameras for a while now, and I have not been able to get around one major problem: whenever I use a blob detection method, the readings from the camera fluctuate so much that I cannot get stable results. No matter how much I tune the camera input, the blob count still jumps between values even though nothing is moving in front of the camera. As far as I can tell, this is mainly down to the sensitivity of the camera and the lighting not being contrasty enough.
Is there anything I can do on the software side to help with this, or is the only solution a better lighting and camera setup?
hi levvvvky,
you may find Kalle's IIR module very useful for smoothing detected values that jump all over the place, or you could also use a Damper node to round off and delay this sensitivity.
for me it works well with fiducials, which were showing the same problem.
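if it helps to see the idea: an IIR smoother like Kalle's module is basically a one-pole low-pass filter. here is a rough sketch in plain Python (not vvvv, and the coefficient value is just an example) of what it does to jittery readings:

```python
# One-pole IIR (exponential moving average) smoother:
#   new = old + alpha * (input - old)
# alpha near 0 -> heavy smoothing but more lag; alpha near 1 -> almost raw input.
def make_iir(alpha):
    state = {"y": None}
    def step(x):
        if state["y"] is None:
            state["y"] = x                      # initialize with first reading
        else:
            state["y"] += alpha * (x - state["y"])
        return state["y"]
    return step

smooth = make_iir(0.2)                          # alpha = 0.2 is just an example
noisy = [10, 12, 9, 11, 10, 30, 10, 11]        # jittery blob-position readings
print([round(smooth(v), 2) for v in noisy])    # the 30 spike gets damped down
```

a Damper node behaves similarly, except it eases toward the target over a set time instead of by a fixed fraction per frame.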
which node are you using for your tracking?
you may also find this page vvvvery interesting, where users share their own useful patches
are you sure you are using the right outputs? use 'x' and 'y', not 'contours x' and 'contours y': those output all points along the contour and will fluctuate a lot, because there is always noise in the video image. also set the 'cleanse' input to 1; it does a slight smoothing of the picture…
I have just started playing around with vvvv. I have the blob tracking part done, and now that I have managed to find where the blobs are, I want to play around with different effects.
Thanks to tonfilm for sharing the wave-generating effect. I wanted to use the blobs to generate the wave patterns, but I saw a significant decrease in the performance of my patch after putting the two together.
The perfmeter inside tonfilm's patch seems to indicate that I am using the CPU a lot. I thought that since I have tried to use shaders for most things, and the contour module is a FreeFrame plugin, almost everything in my patch should be running on the GPU. I am not sure how else I can optimize my patch.
Can anyone shed some light on what I am doing wrong?
and it's quite an intense patch, so my recommendation would be: get another machine to do the tracking and send the coordinates over via ethernet or midi. well, if you have some old pc lying around.
or, on a multicore machine, run vvvv twice, each instance on a single core, and exchange data via ethernet/localhost.
and btw, if you share patches, make sure you deliver them with all shaders and textures in a proper folder structure. your rar is kind of useless.
Okay, after playing with your patch I found the same performance decrease.
So, I found this: the bottleneck is in the contour patch. You take the VideoIn and put it onto the graphics card using VideoTexture. There you run a shader for background and color/threshold. Then you grab it back off the graphics card with AsVideo to the CPU to run the Contour node.
You are shifting a huge amount of video data CPU-GPU-CPU just to do a threshold/color transformation!
Another way would be to wait half a year or so until I manage to understand FreeFrame/C++, and I'll make you a FreeFrame plugin for that. Currently I'm stuck on it. ;D
If you haven't got that patience (I wouldn't ;D), one thing to try is to reduce the size of both VideoTexture and AsVideo to decrease the amount of video data. See attached patch. I think you will have to tweak the size for best results.
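just to put numbers on why the size reduction helps, here is some back-of-the-envelope arithmetic in Python (the resolutions and 25 fps are only example figures; your camera may differ):

```python
# Bytes per frame for 24-bit RGB video at two resolutions.
full = 640 * 480 * 3          # bytes per frame at full size
half = 320 * 240 * 3          # halving width and height quarters the data

fps = 25                      # AsVideo's hardcoded framerate, per this thread
print(full, "bytes/frame full,", half, "bytes/frame half")
print(full * fps, "bytes/s vs", half * fps, "bytes/s")
# every one of those bytes makes the CPU->GPU->CPU round trip, so the
# smaller frames cut the transfer cost to a quarter
```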
Setting the Reference clock to none worked great. Thanks for the tip, joreg.
Exactly what does that do?
As for moving video data CPU-GPU-CPU: I was trying to do as much as possible with shaders. I remember reading somewhere that it is better to have several small shaders stacked together than one big shader that does everything. I tried it, but AsVideo was the only method I could find for getting the video texture from one shader to another. Is there another way?
with AsVideo you convert a texture to a DirectShow video. a DirectShow video tries to run at a defined framerate, and 25fps is hardcoded into the AsVideo node (which is stupid, of course). anyway, with the reference clock set to none, the DirectShow graph does not try to run at a specified speed, but just runs as fast as possible.
hi levvvvky, hi all…
have you considered also using an IR illuminator and background subtraction before doing the tracking?
take a look at this thread, where you can find some interesting considerations about camera tracking
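in case the background-subtraction idea is new to anyone reading: the principle is to grab one reference frame of the empty scene and then only keep pixels that differ from it. a toy sketch in plain Python (in vvvv you would do this with a shader or freeframe node; the threshold value here is just an example):

```python
# Static background subtraction: capture a reference frame of the empty
# scene once, then flag pixels whose brightness differs from the reference
# by more than a threshold. Output is a binary mask for the blob tracker.
def subtract_background(frame, background, threshold=30):
    # frame/background: flat lists of 0-255 grayscale pixel values, same length
    return [1 if abs(p - b) > threshold else 0
            for p, b in zip(frame, background)]

background = [10, 12, 11, 10]     # empty scene under IR illumination
frame      = [11, 13, 200, 10]    # an IR-lit hand appears over pixel 2
print(subtract_background(frame, background))   # -> [0, 0, 1, 0]
```

under IR illumination this works especially well, since visible-light changes (projections, ambient light) barely show up in the camera image.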
joreg: thanks for the explanation. That was very helpful.
ales9000: yeah, I am using IR illuminators and background subtraction already. I am getting very decent blob tracking right now; it's just that I have run into some other problems, specifically the performance of my patch.
To add to the whole IR tracking experience: I am using some 850nm IR LEDs with 3 layers of film negative as a filter on top of a unibrain firewire camera. I have to say that it works very well.