I’ve been testing Trautner’s “Hold Background” feature and it seems to be using a weird algorithm. As far as I can see, it will only detect a change in the image when the change is brighter than the held image. This means that if I have a black wall and walk in front of it with a white shirt, Trautner will detect it. If I have a white wall and walk in front of it with a black shirt, it will not.
It seems like the algorithm it is using is a saturating subtraction: the held image is subtracted from each new frame, and negative values are clamped to 0.
For instance if a pixel in the held image has a value of 30 and a new pixel in a frame has a value of 90, the difference is shown as 90-30 = 60.
However, if I have a pixel in the held image with a value of 90 and the new frame has a value of 30, the difference will be 30-90 = -60, which clamps to 0 in the 0-255 range. This means that no output is shown, even though the actual change is 60, just as in the first example.
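To make the behaviour concrete, here is a hypothetical per-pixel sketch of what I think is happening (this is my guess at the logic, not the actual Trautner source):

```python
def clamped_diff(held, new):
    """Per-pixel difference as Trautner appears to compute it:
    new value minus held value, with negative results clamped to 0."""
    return max(new - held, 0)

# bright object in front of a dark background: change is detected
print(clamped_diff(30, 90))  # 60

# dark object in front of a bright background: change is lost
print(clamped_diff(90, 30))  # 0
```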
To me, it would make more sense to use the absolute value of the difference, so a negative result can never be clamped to 0. That way you would detect differences not just when the brightness of a pixel increases but also when it decreases.
I hope I have explained it well enough, otherwise just ask. Does anybody know how to change this? It shouldn’t be too hard. :)
there is already a new version of trautner online where i introduced a new “darkwall” pin. download here.
so far you can only decide between dark and bright background. it won’t work with both in one image. the download includes the source code so you can see what it does. the code could easily be adapted to your method but would probably be slightly slower…
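a sketch of the two modes as i understand them from the thread (function names are made up for illustration, not taken from the actual source):

```python
def bright_wall(held, new):
    # default mode: only pixels brighter than the held background show up
    return max(new - held, 0)

def dark_wall(held, new):
    # "darkwall" mode: only pixels darker than the held background show up
    return max(held - new, 0)

# a pixel that got brighter is only seen in bright-wall mode,
# one that got darker only in dark-wall mode
print(bright_wall(30, 90), dark_wall(30, 90))  # 60 0
print(bright_wall(90, 30), dark_wall(90, 30))  # 0 60
```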
I have no idea how to change it though, as I’m not much into OpenCV and it seems to be using a specific function to subtract the matrices. If somebody feels they have the time to do what I want, I would appreciate it; otherwise I can live with it as it is. :)
As far as I understand you, you want to see every change in pixel value, so changes both lower and higher than the stored picture value.
But why? Because then you will always have output, probably like a difference filter. It’s like combining black wall and white wall mode. You will only have no output when a pixel has exactly the same value.
This tracking stuff works with luminance. In order to get a reasonable output, you need contrast between background and foreground. You can achieve this by illuminating the background brighter/darker than the tracking region.
The reason I want it is for when you work with real applications, e.g. installations, where you might not have complete control of the background, foreground or both. Of course you can always make it work by setting up a completely white or black background and have control of what you put in front of it.
You have a static background with both dark and bright areas, patterns and other stuff.
You need to track people with both dark and bright clothes, walking in front of those dark and bright areas.
(for instance, try using Trautner in a room with a lot of furniture and normal clothing, tracking yourself - and you will see what I mean)
The way Trautner works right now makes this a bit hard to work with, as a person wearing dark clothes in front of a bright area can actually be invisible to the tracking. To me, it doesn’t really make sense why it should work like this, except maybe for calculation speed. I’d rather have the absolute difference, so you “combine” the bright and dark wall modes, along with a threshold for filtering out small changes.
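To make the idea concrete, here is a rough per-pixel sketch of what I have in mind (plain illustration, not the actual Trautner source; the threshold parameter is my own addition):

```python
def abs_diff(held, new, threshold=0):
    """Absolute per-pixel difference: detects changes in both
    directions; differences below the threshold count as no change."""
    d = abs(new - held)
    return d if d >= threshold else 0

print(abs_diff(30, 90))                  # 60 (pixel got brighter: detected)
print(abs_diff(90, 30))                  # 60 (pixel got darker: also detected)
print(abs_diff(100, 104, threshold=10))  # 0  (small change filtered out)
```

If the plugin really does use OpenCV’s matrix subtraction, OpenCV also has `cvAbsDiff`, which computes exactly this absolute difference over a whole image in one call, so the change might be as small as swapping one function (an assumption on my part, I haven’t seen the relevant line).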
I’m just saying that I think this would make Trautner a lot more flexible and give much better results in environments that are not easy to control. I’m actually pretty sure this is exactly how it works in Eyesweb. I hope my explanation makes sense. :)
hello alj, your arguments make perfect sense and i have only not done it that way because i didn’t need it so far and i was hoping for better performance, which i haven’t even verified. so please feel free to modify the code and post your changes.
Hehe, unfortunately I don’t have the knowledge of OpenCV to actually do it myself. I can live with it as it is now; I just thought it would be nice to have when I use Trautner, as I like vvvv much more than Eyesweb.
If somebody feels they have the time on their hands to do it, I thought it would be nice to have as an option. :)
finally, I tried your proposal and it looks good. Sadly, I can’t program OpenCV and FreeFrame, but I added it to a background subtraction shader I wrote some months ago. It’s like the Eyesweb background subtraction. Maybe you can use this shader before giving its output to trautner.
It’s on the user shader page now.
It works just as I described! Running it as a shader means that it will use the GPU for the calculations, right? It seems like I have very low CPU usage while using it at least, freeing that up for other stuff.
doing things directly in the trautner source should be the faster solution. while shaders don’t take any time if the gpu has enough transistors left, moving stuff from the cpu to the gpu and back to the cpu costs a lot of time.
Actually, it would also be a great help for other freeframe plugins. I’ve played around a bit with frank’s shader and it actually helps most of the freeframe plugins, especially since it outputs a color image.
I’m not sure how he did it, but color tracking in particular works extremely well with background subtraction, since you can, for instance, actually detect red objects on a red background. The background is simply filtered out by the background subtraction, and the red object you bring into view will be the only red thing on the screen; the rest will be pitch black. Color tracking suddenly becomes much more robust.
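My guess at the per-pixel, per-channel logic (purely illustrative; I don’t know how frank’s shader actually does it, and the threshold value is made up):

```python
def subtract_background(held_rgb, new_rgb, threshold=30):
    """If a pixel differs enough from the held background in any
    channel, keep its new color; otherwise output black."""
    changed = any(abs(n - h) >= threshold
                  for n, h in zip(new_rgb, held_rgb))
    return new_rgb if changed else (0, 0, 0)

# pixel where nothing moved: close to the held background, goes black
print(subtract_background((190, 25, 35), (195, 28, 33)))  # (0, 0, 0)
# pixel where a red object now covers a formerly dark area: keeps its color
print(subtract_background((20, 20, 20), (200, 30, 30)))   # (200, 30, 30)
```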
If Trautner could get to work this way it would be an amazing tool to use as a filter before all other freeframe plugins. I just wish I could program it. xD
ah, there you are… so what we’d actually need is not an optimized trautner but just a separate node that does better background subtraction… without the rest of the trautner functionality, right? c’mon anybody, this should really be a nice little exercise. no coders out there with a free timeslot?
Exactly, a new node would probably be a good idea, as this is moving a bit away from the idea of the Trautner node. If anybody has the knowledge to implement frank’s shader in OpenCV/FreeFrame so you could place it in front of other freeframe nodes, it would be quite a tool to have. :)