Neural style transfer possible in vvvv?

Hey there,

I'm interested in doing some neural style transfer in vvvv.
Is it possible to do those in vvvv and is it even possible to render them in realtime?
There is nothing to be found in the forum.
I am thinking about transferring styles between two moving images, but my guess would be that this is overkill? It would definitely result in some very trippy stuff ;)

Hi @knoeterich, did you take a look at VLML.ONNX?

Ah nice, I'll check that out.

Thanks a lot :)

You can try ONNX with some of the style transfer models here. If you want to use specific style images, you will have to train a model yourself. There is a good guide here that uses PyTorch; you just need to add a line somewhere that exports the trained model as ONNX. It's a lot more cumbersome with TensorFlow.

Regarding the second part of your question, realtime style transfer is pretty tricky in general, as you would need two networks running at once. At the moment that means running Python, because I haven't been able to successfully train that kind of system with VLML. If you have experience with PyTorch or TensorFlow, you can set up the system there and pipe data from vvvv to Python and back into vvvv. I've seen it done around here somewhere.
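The vvvv-to-Python-and-back piping could be done over plain UDP (vvvv ships with UDP network nodes). A hypothetical sketch of the Python side, with the model inference stubbed out and the vvvv end simulated by a local UDP client; the port number is just an assumption to be matched in the patch:

```python
# Hypothetical vvvv <-> Python bridge: vvvv sends raw frame bytes over UDP,
# Python runs the style network (stubbed out here) and sends the result back.
import socket
import threading

HOST, PORT = "127.0.0.1", 9000  # assumed port, match it in the vvvv patch

def stylize(frame: bytes) -> bytes:
    # Stand-in for running the exported ONNX / PyTorch model on the frame.
    return frame[::-1]

# Bind before starting the thread so no datagram arrives early and gets lost.
srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
srv.bind((HOST, PORT))

def serve_once():
    data, addr = srv.recvfrom(65535)   # one frame (or tile) per datagram
    srv.sendto(stylize(data), addr)    # stylized frame goes back to vvvv
    srv.close()

threading.Thread(target=serve_once, daemon=True).start()

# The "vvvv" side, simulated here with a plain UDP client:
cli = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
cli.settimeout(2.0)
cli.sendto(b"frame-bytes", (HOST, PORT))
result, _ = cli.recvfrom(65535)
print(result)  # prints b'setyb-emarf'
cli.close()
```

Note that a single UDP datagram tops out well under a full video frame, so a real setup would tile or downscale frames, or use shared memory instead; this only shows the round-trip plumbing.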

