ONNX stands for Open Neural Network eXchange. It basically means you can save a standard machine learning model from one of the many machine learning frameworks like PyTorch, TensorFlow, CNTK etc., and run it in your program.
With VLML ONNX you can run models with NVIDIA CUDA-based GPU acceleration for high performance. Included in this package is an example of how to work with images to apply style transfer. It uses VL.OpenCV to achieve this, but it could use other kinds of data input.
In general this is a very efficient way of inferring (evaluating) machine learning models, probably faster than running an equivalent model from within any particular framework.
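For anyone curious what's happening behind the VL nodes, here's a minimal sketch of the kind of ONNX Runtime call this wraps, assuming the Microsoft.ML.OnnxRuntime package with the CUDA execution provider enabled. The model path, input name and shape are placeholders I've made up; check your own model's metadata via session.InputMetadata:

using System.Collections.Generic;
using System.Linq;
using Microsoft.ML.OnnxRuntime;
using Microsoft.ML.OnnxRuntime.Tensors;

// Enable the CUDA execution provider on GPU 0 before creating the session.
var options = new SessionOptions();
options.AppendExecutionProvider_CUDA(0);

// "mosaic.onnx" and "input1" are placeholder names, not from this contribution.
using var session = new InferenceSession("mosaic.onnx", options);

// Example 1x3x224x224 input tensor; the real shape depends on your model.
var input = new DenseTensor<float>(new[] { 1, 3, 224, 224 });
var inputs = new List<NamedOnnxValue> { NamedOnnxValue.CreateFromTensor("input1", input) };

// Run inference and read back the first output as a float tensor.
using var results = session.Run(inputs);
var output = results.First().AsTensor<float>();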
It isn’t possible to retrain an ONNX model. For training you can use VLML and then convert the trained model into an ONNX model.
Requirements
recent version of vvvv Gamma
recent version of OpenCV
Nvidia GPU (GTX 7xx and upwards) with up-to-date drivers for CUDA.
VLML installed
There are plenty of models you can try out here, though I don’t know if they will all necessarily work with this WIP. If you have any questions about them, this contribution, or VLML, just ask.
Got more to say on VLML but I’ll save that for the other thread
Those who were trying this WIP without VLML installed may have been disappointed to find a CUDA-related error that even installing CUDA wouldn’t clear up. I’m looking into it.
In the meanwhile, if you install VLML (the fat bit of which is CUDA 10 anyway), VL will find the installed CUDA pack and this contribution should work as intended.
Here’s how:
In VL/Gamma, click the Quad Menu button (the grey one in the top-left corner), go to Manage Nugets and press Commandline.
This will bring up the command line, where you can copy and paste this:
nuget install VLML
No need to reference VLML itself; it just provides the native CUDA drivers in a way that VL appreciates.
Hi, I’m having trouble getting it started. Should this still work in current gammas? I installed the nuget but I’m getting this error. I also downloaded the GitHub repo and get the same error.
Crikey, hello mate!
I haven’t touched it in a little while, but I just finished a project so I can look over it again. I’m also looking to do a big old rewrite of a lot of machine learning stuff, of which this will be one of the targets, but that’s some way off. Fixing this in Gamma/Beta is very possible in the short term.
You needing it?
I found an ugly but efficient way to wrap my code in this snippet and use it in VL in order to handle the conversion to ReadOnlySpan (the ReadOnlySpan error), as was raised in this thread.
// Note: the generic arguments were stripped in the original post; float is assumed here.
public static DenseTensor<float> VLDenseTensor(Int32[] dims)
{
    // Hide the span-based constructor behind a plain int[] signature so VL can call it.
    DenseTensor<float> dtensor = new DenseTensor<float>((ReadOnlySpan<int>)dims);
    return dtensor;
}
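To illustrate what the wrapper buys you: plain C# performs the int[] to ReadOnlySpan<int> conversion implicitly, which VL can’t express, so the wrapper exposes an array-only signature. A small hedged usage sketch, with made-up dimensions:

// From VL's point of view the node just takes an Int32 array; the span
// conversion happens inside the wrapper.
var dims = new[] { 1, 3, 224, 224 }; // example shape only
var tensor = VLDenseTensor(dims);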
Absolutely, I have it in mind, but till then this is an option to anticipate the conversion problem with Spans. I am wondering why it didn’t work in the VL context before when I was trying to cast it again to a mutable array… Anyway, I have to check now how to create static methods as generic. Thanks for mentioning it!
@Hadasi, as @Elias pointed out, this was the optimal way to alter the code and get Tensor & DenseTensor (as generic) in one line:
public static Tensor<T> CreateTensor<T>(int[] dims) where T : struct => new DenseTensor<T>(dims);
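In case it helps anyone, this is roughly how that generic helper would be called; the type argument and shape here are just illustrative:

// Example call with float elements and a made-up 1x3x224x224 shape.
Tensor<float> t = CreateTensor<float>(new[] { 1, 3, 224, 224 });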