Simple video playback / video capture using EmguCV in VVVV
Options are open for a full replacement of DirectShow, including spreadable video playback, CV tasks, access to images in dynamic plugins, and CV in dynamic plugins.
Please check the readme on GitHub / download from GitHub if you want to play around:
Brilliant! Here's my test; it seems like it is dragging some extra ticks, and the VideoTexture often jumps to over 40. I am going to use your plugin for my next video performance rehearsal and report back.
The only problem I see right now is that vvvv.exe won't close properly when I use it; I have to kill it from the Task Manager.
I’m actually working on the spreading at the moment.
I wouldn’t suggest that this is even ready for testing yet, but hopefully it will get some minds thinking about what’s possible with this and which direction it should go in.
Currently working on the frame locking between nodes
(e.g. the VideoIn node uses a thread to produce frames, and AsTexture reads those frames back in the main thread, so the two processes need to be locked properly)
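To illustrate the locking between the capture thread and the mainloop, here's a minimal double-buffer sketch. All names here (FrameBuffer, Write, ReadNew) are my own for illustration, not the plugin's actual classes:

```csharp
// Hypothetical sketch: a capture thread (VideoIn) writes frames,
// the main thread (AsTexture) reads them back. A lock plus a
// front/back buffer swap keeps the two from treading on each other.
class FrameBuffer
{
    readonly object fLock = new object();
    byte[] fFront;       // handed out to the main thread
    byte[] fBack;        // written by the capture thread
    bool fFreshFrame;

    // called from the capture thread
    public void Write(byte[] pixels)
    {
        lock (fLock)
        {
            fBack = pixels;
            fFreshFrame = true;
        }
    }

    // called from the main thread; returns null if no new frame arrived
    public byte[] ReadNew()
    {
        lock (fLock)
        {
            if (!fFreshFrame)
                return null;
            // swap so the writer never touches the buffer we hand out
            var tmp = fFront; fFront = fBack; fBack = tmp;
            fFreshFrame = false;
            return fFront;
        }
    }
}
```

The swap keeps the critical section tiny (a couple of reference assignments), so neither thread stalls the other for long.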
Then there’s the details of correctly initialising / destroying / reformatting when you change file selections / capture sources.
Had some good results so far, and things are cleaning up quickly. I think we’re only a few hours of attentive work away from a reliable video player, but I definitely wouldn’t suggest testing this for reliability right now.
But let’s open up the development process! (That’s what GitHub’s for :))
Concerning the license: I’m considering buying one here for EmguCV. Then I could distribute EmguCV-utilising plugins and you could use them without GPL restrictions, but if you wanted to write any of your own EmguCV code, you’d need to buy a license.
Latest commit is fairly stable with spreaded videos / spreaded capture.
Capture ID is definitely sporadic; a quick fix inside the node will likely sort it, but I'm not looking at that right now.
Please read the readme on GitHub for full notes on this effort.
Threaded = threaded capture, threaded processing (all processing in one thread)
Background = threaded capture, threaded processing (each processing node has its own thread)
Very Immediate = 0 frame latency, but vvvv fps can be locked to capture fps
Immediate = 0 to 1 frame latency, vvvv fps not hindered by capture fps
Threaded = 0 to N frame latency, all processing must run on 1 core (happens inside the capture’s thread)
Background = 0 to N frame latency, processing can be shared across cores
I think it makes sense for the capture node to decide this option for the graph beneath it (not sure right now what happens if we have a node further down that takes 2 separate inputs from different captures set to different options; I presume the most immediate will take precedence and that immediacy will be carried down the graph).
For structured light I’d choose Very Immediate.
For face tracking with not so much processing I might choose Immediate.
For face tracking with lots of processing I might choose Threaded.
It’s also possible that a node Immediacy (EmguCV) could accept frames, and change the immediacy mid-graph.
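As a sketch of the four options and the "most immediate takes precedence" rule above, something like this enum could model it. The names and the Resolve helper are my own illustration, not necessarily what the plugin does:

```csharp
using System;

// Hypothetical sketch of the immediacy modes described above,
// ordered from most to least immediate.
enum Immediacy
{
    VeryImmediate, // 0 frame latency; vvvv fps can lock to capture fps
    Immediate,     // 0-1 frame latency; vvvv fps independent of capture fps
    Threaded,      // 0-N frame latency; all processing in the capture's thread
    Background     // 0-N frame latency; each processing node has its own thread
}

static class ImmediacyRules
{
    // when a node takes inputs from two captures set to different options,
    // presume the most immediate mode wins and is carried down the graph
    public static Immediacy Resolve(Immediacy a, Immediacy b)
        => (Immediacy)Math.Min((int)a, (int)b);
}
```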
Had a play at working on a nice back-end system for it; only went through the subsystem, but the idea was to use AddonFactory + Hde nodes to deal with threading.
Roughly, filters implement a separate interface, like IImageFilter/IImageSource (so they don't deal with threading/sync, and technically they know nothing about vvvv).
AddonFactory would register node infos/pins, and use Hde features to build a subgraph (which runs on its own thread), with a node holder to wrap filters.
The advantage of this is that any improvement to the node holder/graph immediately benefits all filters.
Made some working prototypes for that concept; AddonFactory is fairly complete for this, but Hde was missing a few features (mainly Pin Connected/Disconnected events, Pin Direction, and global graph events) to make it reliable enough.
As datatypes, filters have:
Streams: roughly, an input/output image; it is passed on the separate thread.
Parameters (vvvv input pins like value/string), which are synced to the subgraph
Output Data (same as above but output, things like contour data for example)
IImageFilter interface has the following methods:
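(The actual method list didn't make it into this post. Purely as an illustration of the description above — filters deal with neither threading/sync nor vvvv — an interface along these lines would fit; every name here is my guess, not the real API:)

```csharp
// Hypothetical sketch only: the real IImageFilter methods aren't listed
// in this post. byte[] stands in for whatever image type the plugin uses.
interface IImageFilter
{
    // (re)initialise internal state when the image format changes
    void Allocate(int width, int height);

    // process one frame; called from the subgraph's own thread
    void Process(byte[] input, byte[] output);
}

interface IImageSource
{
    // produce one frame; called from the source's thread
    byte[] Generate();
}

// trivial example filter implementing the sketch: invert every pixel
class Invert : IImageFilter
{
    public void Allocate(int width, int height) { /* no state to keep */ }

    public void Process(byte[] input, byte[] output)
    {
        for (int i = 0; i < input.Length; i++)
            output[i] = (byte)(255 - input[i]);
    }
}
```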
Evaluation model (keeping that for later, enough to do already).