Hi all

I’ve been working on a machine-learning library using Microsoft’s CNTK framework in VL.

CNTK itself plays happily in VL, but using it as a patcher is a bit of an uphill struggle, so I’m simplifying the nodes to make the experience as user-friendly, readable and informative as possible.

Here’s the goal:

To prepare a machine learning library that can be used

  1. to let researchers, developers and creative experimenters develop new and sophisticated algorithms, and

  2. to offer drop-in, easy-to-use nodes that carry out ‘smart’ actions without having to learn how to write a neural network (for reference, look at something like Wekinator or Lobe). Imagine wanting to throw a style on a video, for instance: you would just pass it through a VLML filter, pre-made or your own.

For the first step I’m writing and wrapping low-level functions so that they are easier to use and understand, or to ignore entirely. To streamline the process I broke down the key parts of building a graph:

Inputs, Network, Error/Loss Evaluation and Training*.

A picture says a thousand words so here’s an example of how the development is playing out.

Before: using mostly the original nodes

After: the current node design

This is an example of training a neural network to solve XOR (that is: when the inputs are the same, return false; when the inputs are different, return true). The first picture shows the nodes required to do this operation with the near-raw CNTK node set, and the second shows the developed nodes that simplify the process.
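For anyone curious what the four stages (inputs, network, error/loss evaluation, training) amount to outside of VL, here is a minimal plain-numpy sketch of solving XOR — purely illustrative, and not the actual API of the library or of CNTK:

```python
import numpy as np

# Inputs: the four XOR cases and their targets
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Network: one hidden layer (2 -> 8 -> 1)
rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.5, size=(2, 8))
b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1))
b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)                 # hidden activations
    out = 1 / (1 + np.exp(-(h @ W2 + b2)))   # sigmoid output
    return h, out

# Training: full-batch gradient descent on the squared error
lr = 0.5
for _ in range(5000):
    h, out = forward(X)
    # Error/Loss evaluation: how far off is the network?
    err = out - y
    # Backpropagate and update the weights
    d_out = err * out * (1 - out)
    d_h = (d_out @ W2.T) * (1 - h ** 2)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

_, pred = forward(X)
print(pred.round().ravel())  # should approximate [0, 1, 1, 0]
```

The node set bundles exactly these responsibilities into patchable units, so the same structure reads left to right in the patch instead of top to bottom in code.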

A key focus of this effort has been the idea that the visual representation of the network may make it easier to follow how the algorithm is produced. With this in mind, debugging is implemented on the network itself, allowing users to inspect the network data on the pins of the network nodes. Already in my testing I have been able to fix mistakes simply by checking the shape of a node’s output.
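The shape-inspection idea is worth dwelling on, because it generalises beyond VL: most wiring mistakes in a tensor pipeline show up as a wrong shape long before the numbers make sense. A hypothetical sketch of the same habit in numpy (all names are mine, not the library’s):

```python
import numpy as np

def dense(x, w, b, name):
    """A toy dense layer that reports its shapes, like the debug view on a node's pins."""
    out = np.tanh(x @ w + b)
    print(f"{name}: in {x.shape} -> out {out.shape}")
    return out

rng = np.random.default_rng(1)
x = rng.normal(size=(4, 2))            # batch of 4 samples, 2 features each
h = dense(x, rng.normal(size=(2, 8)), np.zeros(8), "hidden")
y = dense(h, rng.normal(size=(8, 1)), np.zeros(1), "output")

# A mis-wired layer (wrong weight shape, swapped inputs) fails here immediately
assert y.shape == (4, 1), "unexpected output shape -- check the wiring upstream"
```

In the patch, the same information sits directly on the pins, so the check is a glance rather than a print statement.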

The node set itself is in a healthy state, but many of the higher-level functions I would like to add are not there yet; they will depend on the lower-level nodes. I’ve been able to do the aforementioned XOR problem, MNIST character recognition, training image recognition using another, already accurate model, and a GAN network. I’ve also made some examples of linear regression, basically mapping colours to space and interpolating between the points in space to see the colours guessed.
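The linear-regression example (mapping colours to positions and interpolating between them) can be sketched as an ordinary least-squares fit — again a plain-numpy stand-in with made-up sample data, not the node set itself:

```python
import numpy as np

# Training data: 2D positions -> RGB colours (toy values)
positions = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
colours = np.array([
    [1.0, 0.0, 0.0],   # red
    [0.0, 1.0, 0.0],   # green
    [0.0, 0.0, 1.0],   # blue
    [1.0, 1.0, 0.0],   # yellow
])

# Add a bias column and solve the least-squares problem
A = np.hstack([positions, np.ones((4, 1))])
W, *_ = np.linalg.lstsq(A, colours, rcond=None)

def guess_colour(p):
    """Interpolate: predict a colour at any point in space."""
    return np.hstack([p, 1.0]) @ W

print(guess_colour(np.array([0.5, 0.5])))  # colour guessed at the centre, roughly [0.5, 0.5, 0.25]
```

Moving a point around the space and watching the guessed colour change is exactly the kind of immediate feedback the patched version gives for free.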


So when will there be a release?

When there is some documentation. Building these nodes is a constant process of testing, breaking and fixing, refining them to the point where they should be very friendly to use for anyone, including those who want to learn machine learning, try out new algorithms or even contribute to the expansion of the library. At the moment it still has some fragility. The documentation process has just started, but I can’t put an exact date on it yet. Maybe this year.

I’d like to take this moment to thank the vvvv team for their tireless support and endless patience with me in this venture. Getting there!

*There are always caveats


Very cool mate!


if you ever need beta testers, sign me up.

fun fact: last time I seriously dabbled in neural networks was in 1999 with Jugend Forscht. Might be fun again!




Ok, I’m not entirely sure what this is all about, but pretty excited to see it coming :)

Keep the teasin’ goin’ !