sorry, there is no documentation available on how vvvv works internally. so i’ll just try to describe briefly what happens inside.
first of all i think there is no way to say how visual programming languages work in general.
i can imagine several different approaches for designing such a language…
one possibility would be to define a very basic, low-level messaging system which describes how nodes communicate with each other. every node could then be programmed to understand some standard messages and some special messages. after receiving a message it would try to interpret the message, evaluate itself and send messages to the nodes connected to its outputs. the system would be based on some initial events (in the upper part of the graph) triggering a chain reaction which leads to the evaluation of all the nodes below. so this would be an event/message-based language, where nearly all of the executed code is written inside the respective nodes. examples are pd & max.
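the push-based, message-driven approach can be sketched roughly like this (a minimal python sketch with made-up Node classes, not how pd or max are actually implemented): an event at the top pushes results downward through the graph.

```python
class Node:
    """A node in an event-driven (push-based) dataflow graph."""
    def __init__(self):
        self.downstream = []  # nodes connected to our outputs

    def connect(self, other):
        self.downstream.append(other)

    def receive(self, message):
        # interpret the message, evaluate, then push the result
        # to all nodes below -- the "chain reaction"
        result = self.evaluate(message)
        for node in self.downstream:
            node.receive(result)

    def evaluate(self, message):
        return message  # identity; real nodes override this

class Add(Node):
    def __init__(self, addend):
        super().__init__()
        self.addend = addend
    def evaluate(self, message):
        return message + self.addend

sink = []
class Collect(Node):
    def evaluate(self, message):
        sink.append(message)
        return message

a, b, c = Add(1), Add(2), Collect()
a.connect(b); b.connect(c)
a.receive(10)   # initial event triggers evaluation of everything below
```

note that nothing happens until a message arrives; nodes with no incoming events simply stay silent.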
another possibility to define a visual programming language would be to just define abstract interfaces between nodes (or pins). after the user puts together the graph, the system would generate executable code out of the abstract graph. so this would be a more functional approach where you only deal with functions or objects, but not with data at all. this can be useful when putting together a shader program, which doesn’t run on the cpu, so it makes no sense to inspect the data in real time. softimage xsi has some built-in graphical languages for defining relations between objects or single shader functions.
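this code-generation approach could be sketched like this (a toy sketch; the graph format, the ops table and the compile_graph function are all invented for illustration): the graph is only an abstract description, and “running” it means first compiling it into ordinary code.

```python
OPS = {"add": "+", "mul": "*"}

# abstract graph: node name -> (kind, *arguments)
graph = {
    "out":   ("add", "sq", "three"),   # out = sq + three
    "sq":    ("mul", "x", "x"),        # sq  = x * x
    "x":     ("input", "x"),           # a free input variable
    "three": ("const", 3),
}

def compile_graph(graph, root):
    """Turn the abstract graph into a Python expression string."""
    kind, *args = graph[root]
    if kind == "const":
        return repr(args[0])
    if kind == "input":
        return args[0]
    a, b = (compile_graph(graph, n) for n in args)
    return f"({a} {OPS[kind]} {b})"

src = compile_graph(graph, "out")   # "((x * x) + 3)"
f = eval(f"lambda x: {src}")        # the generated, executable code
```

the point is that no data flows through the graph at patch time; data only exists once the generated code runs, which is why inspecting values inside the graph makes little sense here.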
vvvv is based on yet another approach, which pays attention to defined states of inspectable data changing only from frame to frame. the main assumption we made is that you want to have defined/correct/completely evaluated frames in time, where all the input data is transformed into consistent output data. the idea behind this language design is that there are only certain points in time where the graph is evaluated, but when it is evaluated, all nodes are virtually evaluated at this same point in time. so it makes no difference whether the “left” or the “right” part of a patch is evaluated first (e.g. a time-based filter will evaluate itself according to the current frame time). when a new frame should be evaluated, all nodes which output data (like renderer, midi out, audio out … nodes) are called to evaluate themselves. these nodes then call functions to validate their input pins before they really calculate themselves. as a reaction to the input validation query, the kernel first parses through the graph and asks the nodes above to calculate themselves and write their outputs. because these nodes will also first ask for valid input pins, all of the graph above gets calculated before a node is able to output consistent data for this frame. you can call this a pull-based approach.
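that pull-based scheme could be sketched like this (a rough python sketch with invented class names, not vvvv’s actual kernel code): an output node is called by the main loop, and each validate() call pulls the graph above it before evaluating.

```python
class Node:
    """Pull-based node: validating an output first validates all inputs."""
    def __init__(self, inputs=()):
        self.inputs = list(inputs)
        self.last_frame = -1   # frame stamp: when output was last validated
        self.output = None

    def validate(self, frame):
        if self.last_frame == frame:
            return self.output                      # already valid this frame
        upstream = [n.validate(frame) for n in self.inputs]  # pull upward first
        self.output = self.evaluate(frame, upstream)
        self.last_frame = frame                     # stamp output as valid
        return self.output

    def evaluate(self, frame, upstream):
        raise NotImplementedError

class Const(Node):
    def __init__(self, value):
        super().__init__()
        self.value = value
    def evaluate(self, frame, upstream):
        return self.value

class Sum(Node):
    def evaluate(self, frame, upstream):
        return sum(upstream)

class Renderer(Node):
    """An output node: the main loop calls it once per frame."""
    def evaluate(self, frame, upstream):
        return upstream[0]

a, b = Const(1), Const(2)
r = Renderer([Sum([a, b])])
frame_result = r.validate(0)   # pulls the whole graph above for frame 0
```

because the pull always recurses upward first, by the time any node computes its output, everything above it already holds consistent data for this frame.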
Thanks seb for the timely reply! Wow, vvvv has a very interesting design - i thought of the first two approaches but was confused trying to apply them to vvvv.
Does this mean vvvv has a global framerate or global frame loop that calls all nodes to evaluate themselves? What is this defined by - the renderer?
I’m a bit confused, because that would mean the renderer requests input from a node, which requests input from other nodes; but at the same time you have timers and input nodes which affect other nodes without being connected to a node that is requesting input?
all nodes which output data will be called from within a global loop when a new frame should be calculated. but nodes which you would normally find in the upper part or the middle of a patch can also be called back from the main loop:
an input device node, for example, just to pull the input data out of the hardware, so it can present the latest data the next time somebody asks.
and also nodes which can be in several states, like a toggle. such a node also has to evaluate itself in each frame to keep the graph consistent, even if it is not asked for evaluation in this frame.
so this belief that it is most important to keep all nodes synchronous also has the drawback that some nodes may be evaluated even if nothing has changed. the most complicated part of vvvv’s kernel therefore is knowing when things haven’t changed, so that some cached data from the last frame can be used.
but even when nodes are somehow asked for evaluation more than once in a frame (through several connected renderers, or maybe because they are called from within the mainloop directly, …) this algorithm works, because every node first asks for evaluation of its inputs. after evaluating itself it implicitly stamps its outputs as valid for this frame. so every node gets calculated only once per frame.
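the once-per-frame guarantee from that frame stamp can be demonstrated with a tiny counter (again just a sketch, not vvvv’s actual code): two output nodes share an upstream node, which is asked twice per frame but evaluated only once.

```python
class Node:
    """Pull-based node with a frame stamp and an evaluation counter."""
    def __init__(self, inputs=()):
        self.inputs = list(inputs)
        self.last_frame = -1
        self.output = None
        self.eval_count = 0   # how often evaluate() actually ran

    def validate(self, frame):
        if self.last_frame == frame:   # outputs already stamped for this frame
            return self.output
        upstream = [n.validate(frame) for n in self.inputs]
        self.output = self.evaluate(upstream)
        self.last_frame = frame
        self.eval_count += 1
        return self.output

    def evaluate(self, upstream):
        return sum(upstream) if upstream else 1

shared = Node()                        # expensive node used by two outputs
r1, r2 = Node([shared]), Node([shared])

for frame in range(3):                 # main loop: every output node is called
    r1.validate(frame)
    r2.validate(frame)

# shared was asked twice per frame but evaluated only once per frame
```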
the idea of shutting off branches of nodes could be pretty useful sometimes.
I don’t know how S+H works internally, but in theory it could act like a “STOP” sign for everything above. If S+H doesn’t sample, there would be no need for the nodes above (those not connected to anything else) to be evaluated…
this is still in the pipeline and also one of the favorite development topics. however it has some tricky details, yes.
the latest idea on this topic is that in this “stopped” state the disabled graph would behave as if it weren’t there at all. however, since it doesn’t keep up with the rest of the graph, some inconsistencies could arise from that.
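one way such a “stop” could short-circuit the pull is sketched below (a hypothetical Gate class invented for illustration, not an existing vvvv node): while disabled, the upstream branch is never pulled and the last sampled value is held.

```python
class Gate:
    """Hypothetical stop node: while disabled, upstream is never pulled
    and the last sampled value is held (like a permanently closed S+H)."""
    def __init__(self, source):
        self.source = source
        self.enabled = True
        self.held = None

    def validate(self, frame):
        if self.enabled:
            self.held = self.source.validate(frame)  # normal pull
        # when disabled, the branch above behaves as if it weren't there;
        # as noted above, its state may drift out of sync meanwhile
        return self.held

class Counter:
    """Upstream node that counts how often it is actually evaluated."""
    def __init__(self):
        self.calls = 0
    def validate(self, frame):
        self.calls += 1
        return self.calls

src = Counter()
gate = Gate(src)
gate.validate(0)          # pulls upstream once
gate.enabled = False
gate.validate(1)          # upstream skipped, held value returned
gate.validate(2)
```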
however, apart from some possible problems within the core, the gui would also have to be enhanced to automate the playing, pausing and stopping of selected areas within the patch. for that you need something like permanent selections, which should also have pins for controlling state changes or reacting to them (stopped->playing). and these areas should also be savable in patches…
another approach would be to do this on a per-patch basis, so that only whole patches can be stopped. but maybe this would fragment your project into more unwanted subpatches.
or the problem could be solved on a node basis, where you would have nodes which stop the evaluation of the graph above them. however, the graphical representation of stopped branches is missing in this case.
so there are some questions to be solved.
but besides this, there are some other ideas around which could further speed up the graph without the user having to explicitly disable parts of it.
e.g. nodes could be programmed in a way that lets them predict whether their output would change depending on changes of their inputs, without the necessity to calculate themselves. for this, all nodes would need to be able to give a correct answer to the question of in which cases their output will change. then only the changing nodes would be calculated.
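that change-prediction idea could look roughly like this (a sketch with invented classes; real time-based nodes would simply answer that they change every frame): each node is first asked whether its output will change, and only the changing subgraph is actually calculated.

```python
class Node:
    """Sketch of change prediction: a node reports whether its output
    would change this frame without actually evaluating itself."""
    def __init__(self, inputs=()):
        self.inputs = list(inputs)
        self.output = None
        self.evaluations = 0

    def will_change(self, frame):
        # default: output changes only if some input changes;
        # a time-based node (filter, lfo, …) would return True every frame
        return any(n.will_change(frame) for n in self.inputs)

    def validate(self, frame):
        if not self.will_change(frame):
            return self.output          # reuse cached output, skip all work
        upstream = [n.validate(frame) for n in self.inputs]
        self.output = self.evaluate(upstream)
        self.evaluations += 1
        return self.output

    def evaluate(self, upstream):
        return sum(upstream)

class Const(Node):
    def __init__(self, value):
        super().__init__()
        self.value = value
    def will_change(self, frame):
        return frame == 0               # set once, then never changes again
    def evaluate(self, upstream):
        return self.value

static = Node([Const(1), Const(2)])     # a pure function of constants
static.validate(0)   # frame 0: inputs report a change -> evaluated once
static.validate(1)   # frame 1: prediction says nothing changed -> cached
```

giving a *correct* answer to will_change is the hard part; a node that answers wrongly would silently serve stale data.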
hey gregsn, i haven’t looked at the code for this, but for value/string based nodes this should be easy! why not give the basic node class an enabled pin (hidden by default)? it’s an idea that’s been in my mind since i worked on a patch mixer solution. how to do that with ex9 nodes is another question, i know…
a simple on/off pin at each node will cause messy patches. that’s why i prefer the ‘tree idea’.
thinking about it, having a ‘stop node’ at the beginning of a tree is probably not enough. maybe one needs to define the end of the tree with another node. i imagine a case where you have several objects grouped to a renderer and you want to disable only some of them while still keeping the renderer working. but start and end nodes sound like a weird interface… mmmh
Is this not what real switch does though? If the switch node could have a real switch option, would that do the job? With an empty node on an input you would get a bypass-this-tree switch.
I implemented real switch into my meg patch, switching a host of transforms. it was pretty awkward as they were buried way deep, so I had to S them to the root, switch them there, and then connect nodes all the way back to where they were patched. it works, but it’s a bit messy!
Bypassed transforms are all ticking at 0.1, versus the 1.1-2 they were at when setting spreads to 0. seeing as there are 20 (and growing) and these occur in each “layer”, of which there are currently 3, it saves quite a few ticks!
The biggest performance hit I’ve got at the moment is chaining ex9 renderers, i.e. off-screen rendering fx etc. they almost halve my frame rate in some cases. I’d love to see multiple passes in fx files!
@cat: basically this leads to the question of how often you want to enable/disable different parts of the graph. vvvv basically has two design strategies: the internal graph, which is optimized to be really fast as soon as it is set up, and the graph building mechanism, which is optimized for flexibility and is not really optimized for speed.
so if you used eno’s real-switch to replace a real switch and switched it every frame, performance would seriously drop. so both strategies are valid; it’s just the question whether they should use the same user interface or not.
anyway the best option for users would be having the graph automatically know what needs to be calculated and what not. as gregsn explained, we see quite some possibilities to eliminate superfluous calculations, which could be done completely automatically - i agree this should be one of the next big steps.
@u7: the problem with trees is obviously that you can and will join things again upstream - which will make things difficult to trace (when not having the potentially performance-hungry visual highlighting mechanism) - so a visually simple model with one-enable-per-patch or permanent selections might be preferable. but your solution might be much easier to implement…
i don’t mind the slow graph building mechanism. enabling/disabling parts of the patch would just be a convenience for my way of working. this feature would only be used for major changes in the patch.
and there are still ways of hiding such a performance drop with prepared content.
I haven’t been active on vvvv for a while because i switched to osx, but lately my interest in visual programming languages has been piqued. i’ve been reading up on dataflow languages that are similar to vvvv, but they are not the same.
vvvv is very different, as it’s always running and you just go for it.
I was wondering if anyone could suggest further reading on the subject. i am really interested in the internal workings of these languages and how they are implemented.