I am working on the software for a multitouch input device, and I am trying to improve upon some of the already-written patches. First, I am trying to lower CPU usage, and second, I am trying to use my screen with the lights on.
This means I have to come up with a new way to remove the background and detect the blobs. I have experimented with different FreeFrame tools such as ColorTracker, DetectObject, Fiducial, and CamShiftTracker, but I cannot get these to recognize and ID multiple objects. Is this possible with any of them? I have tried Contour, and even the updated Contour module, but it still does not suit my needs.
When the lights are on, there is a noticeable difference in intensity/whiteness between the blobs and the background light, but after thresholding everything still comes up white.
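One way to exploit that intensity difference is background subtraction rather than a fixed global threshold: capture a reference frame with no fingers on the surface, subtract it from each live frame, and threshold the difference, so only pixels noticeably brighter than the lit background survive. A minimal pure-Python sketch (the frames and the `diff_threshold` value are made-up illustrations, not values from the actual patch):

```python
def background_subtract(background, frame, diff_threshold=40):
    """Return a binary mask: 1 where a pixel is noticeably brighter
    than the stored background frame, 0 elsewhere."""
    mask = []
    for bg_row, row in zip(background, frame):
        mask.append([1 if (p - b) > diff_threshold else 0
                     for b, p in zip(bg_row, row)])
    return mask

# Toy 3x4 grayscale frames (0-255): the lit room is already bright,
# but fingertips reflect even more light.
background = [
    [120, 122, 119, 121],
    [118, 121, 120, 119],
    [121, 120, 118, 122],
]
frame = [
    [125, 200, 119, 121],   # fingertip starts at (0, 1)
    [118, 210, 205, 119],   # blob continues at (1, 1) and (1, 2)
    [121, 120, 118, 122],
]
mask = background_subtract(background, frame)
```

The reference frame would need occasional refreshing as ambient light drifts; a slowly updated running average is a common variant.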
Please list some mods/hacks/alternatives that I can look into. I am also interested in using pixel shaders if they would help.
“i am trying to lower cpu usage”
again: I don’t think there is a real alternative to using the Contour node (please prove me wrong), which is actually quite good for multi-blob tracking (are we missing something there?). Are you using a dual-core PC yet? That should help, because vvvv will automatically do the tracking on one core and the rendering on the other. Also try to reduce the size of the captured video to the lowest resolution at which your fingers are still detected; 320x240 could do.
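The resolution advice matters more than it looks: halving each dimension quarters the number of pixels the tracker scans per frame (640x480 = 307,200 pixels vs. 320x240 = 76,800). A sketch of a simple 2x2 box-average downsample, as one hypothetical way to feed the tracker a smaller frame:

```python
def downsample_2x(frame):
    """Average each 2x2 block into one pixel, quartering the pixel
    count (e.g. 640x480 -> 320x240)."""
    h, w = len(frame), len(frame[0])
    return [[(frame[y][x] + frame[y][x + 1] +
              frame[y + 1][x] + frame[y + 1][x + 1]) // 4
             for x in range(0, w, 2)]
            for y in range(0, h, 2)]

frame = [
    [100, 100, 200, 200],
    [100, 100, 200, 200],
    [ 50,  50,   0,   0],
    [ 50,  50,   0,   0],
]
small = downsample_2x(frame)   # 4x4 -> 2x2
```

In practice you would ask the capture device for the lower resolution directly, which is cheaper still; the averaging also suppresses single-pixel noise a little.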
“i am trying to use my screen with the lights on”
Are you already using infrared lights for the reflection on the fingers? Then make sure to equip all other lights in your room with IR-blocking filters, the same kind you should already have placed in front of your projector. Your camera should also have an IR-pass filter so that it mainly sees the IR light reflected by your fingers. That is the standard way to do tracking in difficult lighting situations.
Did you consider using the reacTIVision software by the Reactable guys for the tracking, and then feeding the tracking data to vvvv via OSC? It’s a standalone app that just does the tracking. I didn’t really look into reacTIVision, but I would suppose you’ll have a couple more options there…
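For context, reacTIVision sends its tracking data as TUIO messages over OSC; on the /tuio/2Dcur (finger cursor) profile each frame bundle carries `alive` (the session IDs still being tracked), `set` (position and velocity per cursor), and `fseq` (frame sequence) messages. A pure-Python sketch of the receiving-side bookkeeping, assuming the OSC decoding itself is handled by whatever OSC layer you use:

```python
def handle_2dcur(cursors, args):
    """Update a {session_id: (x, y)} dict from one decoded
    /tuio/2Dcur message (argument layout per the TUIO 1.1 spec)."""
    cmd = args[0]
    if cmd == "alive":
        # Drop cursors whose session IDs are no longer listed.
        alive = set(args[1:])
        for sid in list(cursors):
            if sid not in alive:
                del cursors[sid]
    elif cmd == "set":
        # set s x y X Y m -> keep only the normalized position here
        sid, x, y = args[1], args[2], args[3]
        cursors[sid] = (x, y)
    # "fseq" (frame sequencing/deduplication) is ignored in this sketch
    return cursors

cursors = {}
handle_2dcur(cursors, ["alive", 3, 4])
handle_2dcur(cursors, ["set", 3, 0.25, 0.50, 0.0, 0.0, 0.0])
handle_2dcur(cursors, ["set", 4, 0.75, 0.40, 0.0, 0.0, 0.0])
handle_2dcur(cursors, ["alive", 4])   # finger 3 was lifted
```

Since session IDs persist across frames, the dict keys give you exactly the multi-blob identity tracking asked about above.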
That’s the technology I have used to build my device, and yes, I have read the paper. I get the blobs without any issues. Now I am working on the software; that is what I need the help with. Fiducials are not the same thing as simple blob tracking: they are special images that, when recognized, trigger certain actions. That is what reacTIVision does. I need to track multiple blobs.
OK, you’re right. It seems multi-touch finger tracking isn’t natively supported yet:
“For the multi-touch finger tracking use the small finger stickers from the file “finger.pdf”. Please note that the finger tracking is only available with the default amoeba set. Future versions of reacTIVision will support plain finger tracking without the need of these finger stickers.”
I remembered reading something about finger tracking, but obviously I mixed it up.
I think joreg summed up the state of the art in his post above: Contour (and none of the others) is the multipoint detection node in vvvv. Many people have used it successfully.
You did not elaborate on the problems you encountered with the Contour node. What kind of issues do you need to solve? (CPU/GPU matters were discussed in some other threads recently, right?)
Please post camera screenshots / example patches to allow us to give more specific advice.
My problem is that Contour is very CPU-intensive. I have tried the latest version, but it still takes a lot of CPU. I would like to define a minimum and maximum blob size for detection, along with a specific threshold for detection.
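That kind of filtering can be sketched independently of any particular node: threshold the image, label the connected components, and discard components whose pixel count falls outside the allowed range, so both camera noise (too small) and ambient light patches (too large) are rejected. A pure-Python illustration with 4-connected flood fill (the threshold and size limits are made-up example values):

```python
def find_blobs(frame, threshold=128, min_area=2, max_area=50):
    """Threshold a grayscale frame, flood-fill 4-connected
    components, and keep only blobs within [min_area, max_area]."""
    h, w = len(frame), len(frame[0])
    seen = [[False] * w for _ in range(h)]
    blobs = []
    for sy in range(h):
        for sx in range(w):
            if seen[sy][sx] or frame[sy][sx] < threshold:
                continue
            # Flood fill one component starting at (sy, sx).
            stack, pixels = [(sy, sx)], []
            seen[sy][sx] = True
            while stack:
                y, x = stack.pop()
                pixels.append((y, x))
                for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                    if (0 <= ny < h and 0 <= nx < w
                            and not seen[ny][nx]
                            and frame[ny][nx] >= threshold):
                        seen[ny][nx] = True
                        stack.append((ny, nx))
            if min_area <= len(pixels) <= max_area:
                blobs.append(pixels)
    return blobs

frame = [
    [0, 200, 200, 0, 0],
    [0, 200, 200, 0, 0],
    [0,   0,   0, 0, 255],   # lone bright pixel: below min_area
    [0,   0,   0, 0, 0],
]
blobs = find_blobs(frame)    # only the 4-pixel blob survives
```

Tuning `min_area` to the smallest plausible fingertip at your capture resolution is also a cheap CPU win, since tiny noise components are discarded before any downstream per-blob work.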