Hi all

I’ve been working on a machine-learning library using Microsoft’s CNTK framework in VL.

CNTK pretty much plays completely happily in VL, but using it as a patcher is a bit of an uphill struggle, so I’m simplifying the nodes to make the experience as user-friendly, readable and informative as possible.

Here’s the goal:

To prepare a machine learning library that can be used

  1. to develop new and sophisticated algorithms for researchers, developers and creative experimenters and

  2. to drop in easy-to-use nodes that carry out ‘smart’ actions without having to learn how to write a neural network (you can look at something like Wekinator or Lobe https://lobe.ai/about for reference). Imagine wanting to throw a style on a video, for instance: you would just pass it through a VLML filter, either pre-made or one of your own.

For the first step I’m having to write and wrap low-level functions so that they are easier to use and understand, or to ignore entirely. To streamline the process I broke down the key parts of how to make a graph:

Inputs, Network, Error/Loss Evaluation and Training*.

A picture says a thousand words so here’s an example of how the development is playing out.

Before: Using mostly original nodes

After: Current Node design.

This is an example of training a neural network to solve XOR (that is: when the inputs are the same, return false; when the inputs are different, return true). The picture above shows the nodes required to do this operation with the near-raw CNTK node set, and the next image shows the developed nodes that simplify the process.
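For anyone who hasn’t met the XOR problem before: it’s the classic “hello world” of neural networks, because it can’t be solved by a single linear layer. Here is a minimal sketch of the same task in plain Python with NumPy, just to illustrate the Inputs / Network / Loss / Training breakdown mentioned above — this is not the VL node set or the CNTK API, only an illustrative stand-in.

```python
import numpy as np

# Inputs: the XOR truth table and its targets
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Network: 2 inputs -> 8 hidden sigmoid units -> 1 sigmoid output
rng = np.random.default_rng(0)
W1 = rng.normal(0.0, 1.0, (2, 8))
b1 = np.zeros(8)
W2 = rng.normal(0.0, 1.0, (8, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(20000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Error/Loss: squared error, backpropagated by hand
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Training: plain full-batch gradient descent
    W2 -= lr * (h.T @ d_out)
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h)
    b1 -= lr * d_h.sum(axis=0)

# predictions for the four input rows after training
print((out > 0.5).astype(int).ravel())
```

The debugging idea described below (inspecting shapes on pins) maps directly onto this: most mistakes in a sketch like the above show up as mismatched array shapes in the matrix products.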

A key focus of this effort has been the idea that the visual representation of the network may make it easier to follow how the algorithm is produced. With this in mind there is an implementation of debugging on the network which allows users to inspect the network data on the pins of the network nodes themselves. Already in my testing I have been able to fix mistakes by simply checking the shape of the output of the nodes.

The node set itself is at a healthy state, but many of the higher-level functions I would like to add are not there yet; they will depend on the lower-level nodes. I’ve been able to do the aforementioned XOR problem, MNIST character recognition, training image recognition using another accurate model, and a GAN network. I’ve also made some examples of linear regression, basically mapping colours to space and interpolating between the points in space to see the colours guessed.


So when will there be a release?

When there is some documentation. Building these nodes is a constant process of testing, breaking and fixing to refine them to the point where they should be very friendly to use for anyone, including those who want to learn machine learning, try out new algorithms or even contribute to the expansion of the library. But at the moment it still has some fragility. The documentation process has just started, but I can’t put an exact date on it yet. Maybe this year.

I’d like to take this moment to thank the vvvv team for their tireless support and endless patience with me in this venture. Getting there

*There are always caveats


Very cool mate!

if you ever need beta testers, sign me up.

fun fact: last time I seriously dabbled in neural networks was in 1999 with Jugend Forscht. Might be fun again!


Ok, I’m not entirely sure what this is all about, but pretty excited to see it coming :)

Keep the teasin’ goin’ !

They say you code 90% of the project in 10% of the time…

Please still regard this as a work in progress, but I think it’s about time we had a public alpha. There may be some changes, but hopefully nothing significant.


I’ll post here again, or make a blog post, explaining how to start using it and the general ethos underpinning the design. The demos should give you some indication, and if you’re already familiar with machine learning through a library such as Keras, a lot of the features here will feel familiar.

There are one or two hairs out of place and I will provide some more documentation in the future, but feel free to ask questions, report bugs, and complain about what’s missing. If you have a particular use case you’d like to explore, you can share it here, make an issue on Github, or drop me a message.

I would recommend simply downloading the demos and installing the packages via the “install missing dependencies” button that you can find in the menu. The demos work best on computers with an Nvidia GPU, running vvvv Alpha rather than Gamma, because the local files can be accessed correctly. Lastly, use the alpha mentioned on the page; it gives the least-buggy experience at the moment, and the devs are working on fixes.

Happy hunting!


The link to the suggested alpha is not working. Unable to find “vvvv alpha f7d5bf1879”.
Which version do you recommend atm?

@schlonzo VLML? You can install it from NuGet:
nuget install VLML
It should now work with both Beta and Gamma.
Changes and updates can be followed on GitHub.

Testing with beta39, gamma 0015 and gamma 0140. I installed VLML via NuGet; it took 3h+ to complete, so something fishy about that… and this is on an almost fresh Win10 installation. The VLML nodes are here, but none of the demos work:

Unable to load the *.vl node within the vvvv patches, even though the file exists at the correct location. Regardless of whether I try to drag it in or use “open in patch”… it refuses to load the *.vl file. Nothing happens. The Gamma integration itself works fine: every contrib help file I’ve tried so far opens and works as expected.

The *.vl files open in Gamma, but then I get the error that “VL.CoreLib.VVVV” is missing.
Different .vl files from the “demos” folder are missing different CoreLib versions. If I right-click and try to install, NuGet complains that the package cannot be found.

Yes, it’s a known issue. I’ve had to put off updating them while fixing bugs and compiling issues and working on other projects. I have some updated versions, but they are not tested with the latest library NuGet. I’ll update soon.


I can’t look into that at the moment; I’d only know by installing on a fresh machine or dismantling the setup on my development machine to understand what’s going on. But here’s a guess:

The VLML NuGet itself is around 3 MB to download, but it uses CNTK 2.7.0. I think you’ve installed VLML before, but earlier versions use CNTK 2.6.0, and CNTK installs around 2 GB of dependencies. These are:

  • CNTKGPU: 119.49 MB
  • Cuda: 118.7 MB
  • CuDnn: 207.31 MB
  • MKL: 38.77 MB
  • OpenCVZip: 29.73 MB

This makes it one of the chubbiest dependency chains on NuGet.org! You’d usually only download a few megabytes for a NuGet package, dependencies included, so maybe their servers are tuned for loads of that variety?
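For what it’s worth, the listed download sizes can be totted up directly (the sizes below are the figures quoted above; the ~2 GB figure presumably refers to the unpacked/installed footprint rather than the download):

```python
# Approximate download sizes (MB) of the CNTK dependencies listed above
sizes_mb = {
    "CNTKGPU": 119.49,
    "Cuda": 118.7,
    "CuDnn": 207.31,
    "MKL": 38.77,
    "OpenCVZip": 29.73,
}

# Total download before unpacking
total = sum(sizes_mb.values())
print(f"{total:.2f} MB")  # roughly half a gigabyte to download
```

So even the raw download is around 514 MB, which already explains a painful install on a slow or throttled connection, quite apart from anything fishy on the server side.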

If you install future updates of VLML, you probably won’t have that issue, because you’ll now have the main dependencies locally. All that said, 3h+ still sounds excessively long.
