VLML.ONNX

Punting this out there before the weekend properly takes hold

ONNX_Dev.7z (3.9 MB)

What is ONNX?

ONNX stands for Open Neural Network eXchange. It basically means you can save a machine learning model from one of the many machine learning frameworks like PyTorch, TensorFlow, CNTK etc., and run it in your own program.

With VLML.ONNX you can run models with NVIDIA CUDA-based GPU acceleration for high performance. Included in this package is an example of how to work with images to apply style transfer. It uses VL.OpenCV to achieve this, but it could use other kinds of data input.

In general this is a very efficient way of inferring (evaluating) machine learning models, probably faster than running an equivalent model from within any particular framework.

It isn’t possible to retrain an ONNX model. If you need training, you can use VLML and then convert the trained model into an ONNX model.
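For orientation, here’s a rough sketch of what this kind of inference looks like in plain C# against Microsoft.ML.OnnxRuntime — that’s the library the tensor snippets later in this thread use, though whether the VLML.ONNX nodes wrap exactly this API is my assumption. The model path and the "input1" name are placeholders; they depend on the model you load:

using System.Linq;
using Microsoft.ML.OnnxRuntime;
using Microsoft.ML.OnnxRuntime.Tensors;

// Ask for the CUDA execution provider (needs the GPU flavour of onnxruntime and a CUDA-capable card).
using var options = SessionOptions.MakeSessionOptionWithCudaProvider(deviceId: 0);
using var session = new InferenceSession("style_transfer.onnx", options);

// One 3-channel 224x224 image in NCHW layout; fill it with pixel data from e.g. an OpenCV Mat.
var input = new DenseTensor<float>(new[] { 1, 3, 224, 224 });

var inputs = new[] { NamedOnnxValue.CreateFromTensor("input1", input) };
using var results = session.Run(inputs);
var output = results.First().AsTensor<float>();  // the stylised image, ready to copy back out

Nothing VL-specific in there — the point is just that ONNX inference boils down to: open a session, hand it named tensors, read named tensors back.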

Requirements

  • a recent version of vvvv Gamma
  • a recent version of VL.OpenCV
  • an NVIDIA GPU (GTX 7xx and upwards) with up-to-date drivers for CUDA
  • VLML installed

There are plenty of models you can try out here, though I don’t know if they will all necessarily work with this WIP. If you have any questions about them, this WIP, or VLML, just ask.

Got more to say on VLML but I’ll save that for the other thread

H


Looks like it needs an ONNX v3 model to work… at least I tried all the style transfer models from the zoo and none of them work (and they’re all v4).

Can you confirm that?

By the way, it’s great :)


Thanks, Sebl

Yeah, the ones in the Model Zoo are particularly weird. I tried them with VLML, which should support version 2, and they were unhappy there too.

Try the ones here instead.


Quick update

Those who were trying this WIP without VLML installed may have been disappointed to find some CUDA-related error that even installing CUDA wouldn’t clear up. I’m looking into it.

In the meantime, if you install VLML (the fat bit of which is CUDA 10 anyway), VL will find the installed CUDA pack and this contribution should work as intended.

Here’s how:

In VL/Gamma, click the Quad Menu button (the grey one in the top-left corner), go to Manage Nugets and press Commandline.

This will bring up the command line. You can copy and paste this:

nuget install VLML

No need to reference VLML itself; it just provides the native CUDA binaries in a way that VL appreciates.

Apologies and enjoy.

H


VL.ONNX updates are currently on GitHub here. I’ll have to do something about how to package the binaries, maybe using the GitHub Releases feature.


Hi, I’m having trouble getting it started. Should this still work in current gammas? I installed the NuGet but I’m getting this error. I also downloaded the GitHub repo and get the same error.

(screenshot of the error)

@Hadasi Do you have any updates on this project? Are you still pursuing it?

Crikey, hello mate!
I haven’t touched it in a little while, but I just finished a project so I can look over it again. I’m also looking to do a big old rewrite of a lot of machine learning stuff, of which this will be one of the targets, but that’s some way off. Fixing this in Gamma/Beta is very possible in the short term, though.
You needing it?

I found an ugly but effective way: wrap my code in this snippet and use it in VL to get around the ReadOnlySpan conversion (the ReadOnlySpan error that was raised on this thread).

public static DenseTensor<float> VLDenseTensor(Int32[] dims)
{
    DenseTensor<float> dtensor = new DenseTensor<float>((ReadOnlySpan<int>)dims);
    return dtensor;
}
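For what it’s worth, the reason the wrapper helps: in C# an int[] converts to ReadOnlySpan<int> implicitly, but that conversion presumably isn’t something you can express from the VL side, so hiding it inside a static method sidesteps the problem. A hypothetical call (the shape and the float element type are just examples) would be:

var tensor = VLDenseTensor(new[] { 1, 3, 224, 224 });  // 1x3x224x224 tensor, all zeros
tensor[0, 0, 0, 0] = 1f;                               // Tensor<T> exposes a params int[] indexer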

Nice.
You’ll want the tensor to be generic, but cool!

Absolutely, I have it in mind, but until then this is an option to work around the conversion problem with Spans. I am wondering why it didn’t work in the VL context before, when I was trying to cast it back to a mutable array… Anyway, I now have to check how to make static methods generic. Thanks for mentioning it!

Delegates, perhaps?

@Hadasi as @Elias pointed out, this was the optimal way to alter the code and get Tensor & DenseTensor (as generic) in one line:

public static Tensor<T> CreateTensor<T>(int[] dims) where T : struct => new DenseTensor<T>(dims);

and this is the result

(screenshot of the result)
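And a hedged usage sketch of that generic helper, with example values — the shape, the float element type and the "input1" name are all placeholders:

// Create a float tensor with an example NCHW shape, then hand it to ONNX Runtime.
Tensor<float> input = CreateTensor<float>(new[] { 1, 3, 224, 224 });
var value = NamedOnnxValue.CreateFromTensor("input1", input);

(Assumes using Microsoft.ML.OnnxRuntime; and using Microsoft.ML.OnnxRuntime.Tensors; are in scope.)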
