
VLML.ONNX

Punting this out there before the weekend properly takes hold

ONNX_Dev.7z (3.9 MB)

What is ONNX?

ONNX stands for Open Neural Network eXchange. It basically means you can save a trained machine learning model from one of the many machine learning frameworks like PyTorch, TensorFlow, CNTK etc., and run it in your program.

With VLML ONNX you can run models with NVIDIA CUDA-based GPU acceleration for high performance. Included in this package is an example of how to work with images to apply style transfer. It uses VL.OpenCV to achieve this, but it could use other kinds of data input.
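For readers curious what happens between the OpenCV image and the ONNX model: image tensors typically need to be rearranged from the HWC byte layout OpenCV delivers into a batched NCHW float layout before inference. The sketch below is illustrative Python (not part of VLML), assuming the 0–255 float convention used by the fast-neural-style family of models; the function name `to_nchw` is my own.

```python
import numpy as np

def to_nchw(image: np.ndarray) -> np.ndarray:
    # HWC uint8 (as OpenCV delivers) -> 1 x C x H x W float32 batch,
    # the input layout most ONNX style-transfer models expect.
    tensor = image.astype(np.float32)          # keep the 0..255 range (fast-neural-style convention)
    tensor = np.transpose(tensor, (2, 0, 1))   # HWC -> CHW
    return tensor[np.newaxis, ...]             # add the batch dimension

frame = np.zeros((224, 224, 3), dtype=np.uint8)  # stand-in for a captured frame
print(to_nchw(frame).shape)  # (1, 3, 224, 224)
```

Other models normalise to 0..1 or apply per-channel mean/std, so check the preprocessing notes that ship with whichever model you feed it.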

In general this is a very efficient way of inferring (evaluating) machine learning models, probably faster than running an equivalent model from within any particular framework.

It isn’t possible to retrain an ONNX model. For training, you can use VLML and then convert the trained model into an ONNX model.

Requirements

  • recent version of vvvv Gamma
  • recent version of VL.OpenCV
  • Nvidia GPU (GTX 7xx and upwards) with up-to-date drivers for CUDA.
  • VLML installed

There are plenty of models you can try out here, though I don’t know if they will all necessarily work with this WIP. Feel free to ask if you have any questions about this or VLML.

Got more to say on VLML but I’ll save that for the other thread

H


looks like it needs an ONNX v3 model to work… at least I tried all the style transfer models from the zoo and none of them work (and they’re all v4).

can you confirm that?

by the way, it’s great :)


Thanks, Sebl

Yeah, the ones in the Model Zoo are particularly weird. I tried them with VLML, which should support version 2, and they were unhappy there too.

Try the ones here instead.


Quick update

Those who were trying this WIP without VLML installed may have been disappointed to find a CUDA-related error that even installing CUDA wouldn’t clear up. I’m looking into it.

In the meantime, if you install VLML (the fat bit of which is CUDA 10 anyway), VL will find the installed CUDA pack and this contribution should work as intended.

Here’s how:

In VL/Gamma, click the Quad menu button (the grey one in the top-left corner), go to Manage Nugets and press Commandline.

This will bring up the command line, where you can copy and paste this:

nuget install VLML

No need to reference VLML itself; it just provides the native CUDA drivers in a way that VL appreciates.

Apologies and enjoy.

H
