
Can this really be done in vvvv?

He claims he “raymarched” this in vvvv, but gives no other details about how he made it. What parts of this do you think could have been done in vvvv?

Hello,

The video you linked is by UNC, who is pretty popular here in the vvvv community. In my opinion this work can be broken down into video footage, glitches, an RGB split (twitch) effect, and metaballs composited onto the video.

  1. Video footage - The footage must have been shot. It’s possible to import 3D models into vvvv, but I doubt you can get lighting as realistic as in dedicated 3D applications; then again, vvvv isn’t meant for that.

  2. Glitches and RGB split - This should be familiar if you use AE or a similar application. This is possible inside vvvv.

  3. Metaballs - It is possible to render metaballs efficiently and in real time with shaders and a DDS file (similar to an HDR image, but for DirectX)… but the tricky part is that the metaballs in the video fit the scene so nicely, and that is not possible without camera tracking if the background is real footage.
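For anyone unfamiliar with how metaballs differ from plain spheres: each ball contributes a falloff field, and the surface is drawn wherever the summed influence crosses a threshold, so nearby balls merge into one blob. A minimal sketch of that field (in Python/NumPy for clarity, not a vvvv patch):

```python
import numpy as np

def metaball_field(points, centers, radii):
    """Summed influence of all balls at each query point.
    Each ball contributes r_i^2 / |p - c_i|^2; the isosurface at
    field == 1.0 is where the blob's skin sits."""
    d2 = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return (radii ** 2 / np.maximum(d2, 1e-9)).sum(axis=1)

# two balls of radius 0.5, slightly apart on the x axis
centers = np.array([[-0.6, 0.0, 0.0], [0.6, 0.0, 0.0]])
radii = np.array([0.5, 0.5])

midpoint = np.array([[0.0, 0.0, 0.0]])   # between the balls
far_away = np.array([[0.0, 3.0, 0.0]])   # well outside both
# The midpoint is "inside" (field > 1.0) even though it lies outside
# either ball alone -- that is exactly the merging effect.
```

The same threshold test, evaluated per ray step in a pixel shader, is how this is typically done on the GPU.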

However, there is a possibility that UNC created the room environment within vvvv; in that case no tracking is needed. There are different ways to achieve the same result, depending on which one you choose.
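The RGB split (point 2 above) is simple enough to sketch: it is just the color channels sampled with slightly different offsets. A rough version in Python/NumPy (in vvvv this would be a TextureFX-style shader, but the idea is identical):

```python
import numpy as np

def rgb_split(img, shift=4):
    """Offset the red and blue channels horizontally in opposite
    directions, leaving green in place -- the classic 'twitch' look."""
    out = np.empty_like(img)
    out[..., 0] = np.roll(img[..., 0], -shift, axis=1)  # red shifted left
    out[..., 1] = img[..., 1]                           # green untouched
    out[..., 2] = np.roll(img[..., 2],  shift, axis=1)  # blue shifted right
    return out

# tiny demo frame: a white vertical bar on black
frame = np.zeros((8, 16, 3), dtype=np.uint8)
frame[:, 7:9, :] = 255
glitched = rgb_split(frame, shift=2)
# the bar now has a red fringe on its left and a blue fringe on its right
```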

Have a great day!

Non-realtime render! And there was an image of the patch, but it looks like he removed it from the description.

You can get close with these nodes … main mix, filestream, video texture, quads, transform, hsl, set alpha…

this here can shed some light… ;)

Sure. You can test the power of unc’s shaders with the TextureFX shipped with beta26 and beta27.

m4d…excellent document…

I think this is pure vvvv, except of course for the environment map. The camera work really makes this. Information on the camera techniques would be great.

Yes, I noticed the same point as xd_nitro… if the environment is set up in vvvv, then no camera tracking is needed. On the other hand, some parts make me skeptical, especially where the metaball fuses into the long rectangular box while the box looks like just another part of the environment. Maybe the metaball and the box share the same shader and were made to fuse when they came close enough. Imagine the whole scene inside a sphere, with the metaball and the box at its nucleus and the environment map on the inside of the sphere.
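That “fuse when they come close enough” idea is exactly what a smooth minimum over distance fields gives you in a raymarcher: instead of hard-intersecting, the two surfaces grow a neck toward each other. A sketch using Inigo Quilez’s polynomial smooth-min (Python here for readability; in practice this lives in the shader):

```python
import math

def sdf_sphere(p, center, r):
    # Signed distance from point p to a sphere's surface.
    return math.dist(p, center) - r

def sdf_box(p, half):
    # Signed distance to an axis-aligned box centered at the origin.
    q = [abs(pi) - hi for pi, hi in zip(p, half)]
    outside = math.sqrt(sum(max(qi, 0.0) ** 2 for qi in q))
    inside = min(max(q[0], max(q[1], q[2])), 0.0)
    return outside + inside

def smin(a, b, k=0.3):
    # Polynomial smooth minimum: blends two distance fields so their
    # surfaces fuse into one blob instead of intersecting sharply.
    h = max(k - abs(a - b), 0.0) / k
    return min(a, b) - h * h * k * 0.25

# A point sitting in the gap between a small sphere and a thin box:
p = (0.45, 0.0, 0.0)
d_sphere = sdf_sphere(p, (0.8, 0.0, 0.0), 0.3)
d_box = sdf_box(p, (0.4, 0.1, 0.1))
# Individually the point is outside both shapes, but the blended
# distance goes negative: the fused "neck" has swallowed it.
d_blend = smin(d_sphere, d_box, k=0.3)
```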

It’s very straightforward to accomplish in 3ds Max, Maya and similar apps, but if it was done purely in vvvv then it’s a genius piece of work :)

Well, that’s why I was suspicious that this was live: the camera work doesn’t seem like something you would get from audio analysis. Actually, the whole thing seems heavily produced, particularly the video background and the tracking it would require. So you’re saying everything in this is technically possible to do live in vvvv, except for wrap-around environments (if that’s what he did)?

Bonus question: what sort of computing power would it take to render this in real time?

Hello Poof,

Take some time to go through the document m4d shared above; it really makes sense of this. In the video I see evidence that it may be completely vvvv or only partially vvvv… here is why.

Completely VVVV

  • The beginning sequence of the vimeo video you are referring to has some transforming boxes that don’t look like they belong to the background in terms of lighting and render quality, which makes me think the transforming boxes were done inside vvvv with a wrapped environment, removing the need for any camera tracking
  • The document shared by m4d supports this

Partially VVVV

  • The video is not one continuous piece; a lot of cuts and edits have been made. Maybe the only effect applied to the footage is the RGB split, and we are just overlooking that.

About computing power - I am a vvvv newbie, but in my opinion rendering this out shouldn’t take enough memory to worry about.

@poof: I’ll say it again: it’s written right there in the description that the video you see is NOT a real-time render, but it is still done in vvvv (using Timeliner for the animation and camera movement, hence the cuts, and probably saving every frame out to an image sequence). I guess it’s not real-time because raymarching in DX9 is still a bit harder and slower than in DX10 or 11; even those 4k real-time raymarching demos require a high-end monster GPU to be enjoyable.
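For anyone wondering what “raymarching” actually means here: the core is a sphere-tracing loop that steps each ray forward by the scene’s distance-field value until it hits a surface. A minimal sketch in Python (on the GPU this runs per pixel inside the shader, which is why it gets expensive fast):

```python
import math

def sdf(p):
    # Scene: a single unit sphere at the origin.
    return math.sqrt(p[0] ** 2 + p[1] ** 2 + p[2] ** 2) - 1.0

def raymarch(origin, direction, max_steps=64, eps=1e-4, max_dist=20.0):
    """Sphere tracing: advance by the distance-field value each step;
    a true signed distance field guarantees we never overshoot."""
    t = 0.0
    for _ in range(max_steps):
        p = tuple(o + t * d for o, d in zip(origin, direction))
        d = sdf(p)
        if d < eps:
            return t          # hit: distance along the ray
        t += d
        if t > max_dist:
            break
    return None               # miss

hit = raymarch((0.0, 0.0, -3.0), (0.0, 0.0, 1.0))   # straight at the sphere
miss = raymarch((0.0, 0.0, -3.0), (0.0, 1.0, 0.0))  # ray passes it by
```

With 64+ steps per pixel at full resolution, plus lighting, it is easy to see why a DX9-era GPU couldn’t hold frame rate and UNC rendered to an image sequence instead.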

@danielmach: be sure to also check out his whole site! ;)