Can we make vvvv THINK?

vvvv could solve problems!

The process of problem solving is tightly bound to the restructuring of cognitive schemas. Restructuring means that the connections between schemas are dissolved and new connections are established.

vvvv nodes are exactly like schemas. That’s why vvvv is so easy to understand for non-programmers. (node = cognitive schema)

When the output of your patch is not what you expected, you restructure: you try to link the nodes in a different pattern, or insert new ones.

We are very close to making vvvv ABLE TO SOLVE PROBLEMS!

A new vvvv feature is needed that resets the connections between the nodes and rebuilds them in a new way. Repeatedly. Randomly. Until the desired result is reached. See: trial-and-error learning.

Example: Let’s say I search for a linear equation’s (ax+b=y) solution. I know how to calculate y from a, b and x, but what is the situation if I look for x, knowing a, b and y?

There is one more piece of information available: the relation between a, b and y is built from the 4 basic operations: +, -, *, /.

I make a (sub-)patch containing the needed nodes (+, -, *, /) and start trying to link them so that the output of this sub-patch equals x.

I give them the initial values a=2, b=4, y=10 and start restructuring the links, expecting the sub-patch to output x=3.

As a double-check, when I get the same output as x, I try with different numbers: a=3, b=7, y=22, expecting x=5.
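The search loop described above can be sketched outside vvvv as well. This is a minimal sketch in plain Python, not vvvv itself; the expression-tree representation, the depth limit and the 30% leaf probability are my own choices. It keeps rewiring the four operation nodes at random until the tester gets x on both test cases:

```python
import random

# The four basic "nodes" available in the sub-patch.
OPS = {
    "+": lambda p, q: p + q,
    "-": lambda p, q: p - q,
    "*": lambda p, q: p * q,
    "/": lambda p, q: p / q if q != 0 else float("inf"),
}

# Test cases from the post: a*x + b = y, and the searched-for output is x.
CASES = [({"a": 2, "b": 4, "y": 10}, 3),
         ({"a": 3, "b": 7, "y": 22}, 5)]

def random_tree(depth=2):
    """Randomly 'link the nodes': build a small random expression tree."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(["a", "b", "y"])
    return (random.choice(list(OPS)),
            random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, env):
    if isinstance(tree, str):
        return env[tree]
    op, left, right = tree
    return OPS[op](evaluate(left, env), evaluate(right, env))

def solved(tree):
    """The tester: does the structure output x for every test case?"""
    return all(evaluate(tree, env) == x for env, x in CASES)

random.seed(0)
tree = random_tree()
while not solved(tree):        # restructure until the tester is satisfied
    tree = random_tree()
print(tree)  # e.g. ('/', ('-', 'y', 'b'), 'a'), i.e. x = (y - b) / a
```

Note the double-check matters here: a structure can hit x=3 on a single test case by coincidence, so the tester only accepts structures that reproduce x on all cases.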

Video presentation of the theory:

Suggested solution:

  • a table is needed which contains the links (incl. source and destination pin IDs). I have found this information in the XML file, but changing it does not change the patch’s behaviour: you have to save it first and maybe reload it again…

  • a feature should be implemented that checks which pins can be connected to which. This could be a filter on the above-mentioned table, so that pins with different datatypes never get connected.

  • and the last thing that should be created is a “testing routine” that checks whether the result is what we expect. It could feed the whole patch with different inputs (e.g. a spread) and test whether the output matches the expected output spread. Restructuring continues until the desired internal structure of the patch has emerged.

The main patch managing the search (contains the trial values (a, b, y) and the expected output (x)) (4.3 kB)
Example of randomly created links that deliver a wrong output (4.3 kB)
The solution: after the trial cycles, the patch’s internal structure finally arises so that the output is what we expected (4.7 kB)
Finally the tester doesn’t give a command for further restructuring, as the expected value equals the output (4.3 kB)

Sorry, I can’t reopen this topic, so I’d better continue here.

6 years have passed, and AI and neural networks have become a hot topic.
I would still like to use vvvv’s power to model human thinking.

My results: I managed to make an alpha build that searches for a solution in the whole search space using a random walk. It is demonstrated via the solution of a simple equation. The result is surprising and shows that we might get something unexpected if we ask a computer to think.

I’m just curious if anybody would be interested in a development like this.

collector.v4p (19.6 KB)
integrator.v4p (21.1 KB)
ToRestruct.v4p (59.6 KB)

hi fodormik,

you’re talking about genetic programming, which is a huge and interesting field.

for now (without opening your patches) i think you can solve your puzzle with nodes like SetPatch (VVVV), GetPatch (VVVV), PatchAlias (VVVV) etc.

i’m not sure if you can find out matching/possible links between nodes, but at least mismatching links won’t be created by vvvv, so it doesn’t crash.
best would be a huge list containing all nodes’ pins and datatypes - i’m not sure that list exists publicly.

if you’re getting serious about gp i’d suggest using one of the existing frameworks mentioned in the wiki article above (perhaps a c# one that could fit into a plugin).
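to make the gp idea concrete, here’s a tiny framework-free sketch (plain python, toy expression trees of my own invention, nothing vvvv-specific): mutation plus keep-the-fitter selection, driven by an error score over the test cases instead of an exact-match tester:

```python
import random

OPS = {"+": lambda p, q: p + q, "-": lambda p, q: p - q,
       "*": lambda p, q: p * q, "/": lambda p, q: p / q if q else 1e9}
CASES = [({"a": 2, "b": 4, "y": 10}, 3),
         ({"a": 3, "b": 7, "y": 22}, 5),
         ({"a": 5, "b": 1, "y": 36}, 7)]

def rand_tree(depth=2):
    if depth == 0 or random.random() < 0.3:
        return random.choice(["a", "b", "y"])
    return [random.choice(list(OPS)), rand_tree(depth - 1), rand_tree(depth - 1)]

def ev(t, env):
    return env[t] if isinstance(t, str) else OPS[t[0]](ev(t[1], env), ev(t[2], env))

def fitness(t):
    """summed error over the test cases; 0 means the equation is solved"""
    return sum(abs(ev(t, env) - x) for env, x in CASES)

def mutate(t):
    """genetic operator: replace a random subtree with a fresh random one"""
    if isinstance(t, str) or random.random() < 0.5:
        return rand_tree(2)
    t = list(t)
    i = random.choice([1, 2])
    t[i] = mutate(t[i])
    return t

random.seed(1)
best = rand_tree()
while fitness(best) > 1e-9:                  # evolve until solved
    child = min((mutate(best) for _ in range(20)), key=fitness)
    if fitness(child) <= fitness(best):      # keep-the-fitter selection
        best = child
print(best)   # a tree that outputs x on all test cases, e.g. (y - b) / a
```

a real gp framework adds crossover, populations and bloat control on top of this, but the fitness-driven loop is the core of it.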

hey this is an amazing topic. vvvv can easily be compared with neural networks.

another application, besides actually changing the patch structure, would be to implement weighting of node connections, so that they can be iteratively fitted (via approximation) to the wished outcome.
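a minimal sketch of that weighting idea (plain python gradient descent; the fixed structure, a weighted sum of two inputs plus a bias, is deliberately simplistic, and the learning rate and step count are just values that happen to converge here). the wiring never changes, only the connection weights are fitted so the output approximates x = (y - b) / 2, i.e. the linear-equation example with a fixed at 2:

```python
# fixed "patch" structure: out = w_b*b + w_y*y + c; only the connection
# weights are fitted, the node links themselves stay as they are.
# target: x = (y - b) / 2 (the linear equation with a fixed at 2).
data = [((4, 10), 3.0), ((6, 16), 5.0), ((0, 14), 7.0), ((2, 8), 3.0)]

w_b, w_y, c = 0.0, 0.0, 0.0
lr = 0.005
for step in range(20000):
    g_b = g_y = g_c = 0.0
    for (b, y), x in data:
        err = (w_b * b + w_y * y + c) - x     # current output minus target
        g_b += 2 * err * b / len(data)        # mean-squared-error gradient
        g_y += 2 * err * y / len(data)
        g_c += 2 * err / len(data)
    w_b -= lr * g_b                           # iterative approximation step
    w_y -= lr * g_y
    c   -= lr * g_c

print(w_b, w_y, c)   # approaches -0.5, 0.5, 0
```

this is the easy linear corner case: as soon as a varies too, x = (y - b) / a is no longer linear in the inputs, a single weighted sum can’t represent it, and that’s where multi-layer networks come in.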

something more about weighting of neural networks for example here:

but it’s definitely no easy thing to do!