cccc4D | All about interfacing/exchanging vvvv and Cinema 4D

Hi vvvvolks!

I crossed the default questions out because I want this to be a more general shout-out to everyone who uses
or would like to use vvvv in combination with Cinema 4D.

So what topics come to a patcher's mind when thinking of gray quads and gray cubes?

Here is my start, but please feel invited to add your ideas:

Use-cases:

  • Create 3D content in Cinema 4D and use it in a vvvv application
  • Use MoGraph data in a vvvv setup
  • Render vvvv-generated content in Cinema 4D

Please share your use cases! And if you know a (wannabe) cccc4D user, please point them to this thread. Thanks!

7 Likes

I think of sending real-time data from vvvv to C4D and bringing the OpenGL view out of C4D back into vvvv, or putting it anywhere else.
We would have to develop an OSC input/output for C4D. fOSC (https://github.com/fillmember/fOSC) is a first step, but it only works as a listener.
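For a rough idea of the sending side, here is a minimal sketch of a C4D Python tag that pushes its host object's position out as OSC every frame. It assumes the python-osc package is available to C4D's Python interpreter; the host, port and address pattern are just placeholders, not anything fOSC or vvvv prescribes.

```python
import c4d
from pythonosc.udp_client import SimpleUDPClient  # assumes python-osc is installed for C4D's Python

client = SimpleUDPClient("127.0.0.1", 4444)  # placeholder host/port of the vvvv listener

def main():
    obj = op.GetObject()       # `op` is this Python tag; GetObject() returns its host object
    pos = obj.GetAbsPos()      # world position as a c4d.Vector
    # one message per frame: /c4d/<objectname>/position x y z
    client.send_message("/c4d/" + obj.GetName() + "/position",
                        [pos.x, pos.y, pos.z])
```

On the vvvv side this would arrive via UDP and the usual OSC decoding.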

1 Like

Very interesting topic…

I have absolutely no idea why I got an email notification about this thread, but anyway, here's something I made to use C4D as an OSC timelining tool for vvvv:

It turns any c4d object into an OSC source which can be sent to another app.

Note: it supports 'User Data' (i.e. arbitrary data channels), splines and transforms.

And for the other end:

Between the two you can live-manipulate graphics in vvvv from within c4d and sequence them,
then export the sequences as JSON (effectively serialised OSC) and play them back through the same OSC nodes later, directly from file (without the network).
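To illustrate the "JSON as serialised OSC" idea (this is not the sequencer's actual file format, just an assumed layout with time, address and argument fields), a record/playback sketch could look like this:

```python
import json
import time

def record(messages, path):
    # messages: list of (seconds_since_start, address, [args]) tuples
    with open(path, "w") as f:
        json.dump([{"t": t, "addr": a, "args": args} for t, a, args in messages], f)

def play(path):
    # Yield (address, args) pairs at their recorded times -- no network involved;
    # the consumer decides whether to send them out or feed them straight into a patch.
    with open(path) as f:
        events = json.load(f)
    start = time.time()
    for ev in sorted(events, key=lambda e: e["t"]):
        time.sleep(max(0.0, ev["t"] - (time.time() - start)))
        yield ev["addr"], ev["args"]
```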

1 Like

was used for:

Ah, I see the mention now. :)

+1 nice topic

I've got some examples and new noodle nodes I could throw in, mostly for geometry vvvv -> c4d. Actually it's mostly just OBJ, so software agnostic, but I have some interesting things (e.g. a tiled marching cube renderer).
@elliotwoods I somehow missed that sequencer thing, will have to take a look (cool project too, of course).

1 Like

Ah! Melange would be so useful, I hadn't come across that before. It would make a good bounty request for a plugin, or maybe one for the VL devs? hehe

+1 for the topic!

My main c4d > vvvv use case so far is definitely the first one you mentioned, @max_onion: geometry + texture + animation, c4d export / vvvv import.
Mainly for real-time previews of installations in an architectural context.

To sell an idea one usually needs a decent preview of the piece, be it an arrangement of LED walls, some mix of light sources and screens, or even kinetic installations.

For that reason I usually model the architecture in c4d, add some GI lighting, bake everything, then recreate the scene in vvvv, i.e. using vvvv along the lines of D3.
Then I start working on the real-time, reactive or generative part in vvvv.
Also, down the line, I really like it for developing my patches because I then always have a realtime preview of how the output will feel in space. Even more so if you look at it in VR.
And simulating spatial interactions is more realistic. Parameters like “how far does someone need to walk to cause a certain effect” can be anticipated to some extent.

Now, the less fun part: the current workflow.
So far I have imported many separate Collada files together with textures, which sucks for big scenes and nested transforms. N-gons cause trouble, as do untextured or non-UV-mapped geometry, then there are big geometry files on 32-bit vvvv, and so on.

A neat vvvv import node would indeed be very sweet.
Would love to help!

One more use case, category: experimental.

vvvv > c4d mograph

Use case:
I want to capture several takes of particle behaviours influenced by some real-time input, then select the best one and continue working on it in c4d.

Workflow:
I exported particle matrices from vvvv as one binary file per frame,
then loaded them via Python into c4d's MoGraph,
or into Octane Render via the Python effector and the scatter object, which was the primary goal.
That way you can completely skip the polygon count limitations in the editor, as with the scatter object the polygon generation happens on the graphics card.
Unfortunately the Python effector has some limitations on its maximum array size, but maybe there's a way around that.
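As a minimal sketch of the loading side (a Python effector set to Full Control; the file path and the binary layout of float32 x, y, z triplets per particle are assumptions, adapt them to however the vvvv patch writes its matrices):

```python
import struct
import c4d
from c4d.modules import mograph as mo

def main():
    md = mo.GeGetMoData(op)            # `op` is this Python effector
    if md is None:
        return False
    doc = op.GetDocument()
    frame = doc.GetTime().GetFrame(doc.GetFps())
    path = "C:/particles/frame_%05d.bin" % frame   # hypothetical export path from vvvv
    with open(path, "rb") as f:
        raw = f.read()
    floats = struct.unpack("<%df" % (len(raw) // 4), raw)  # assumed: little-endian float32
    marr = md.GetArray(c4d.MODATA_MATRIX)
    count = min(md.GetCount(), len(floats) // 3)
    for i in range(count):
        marr[i].off = c4d.Vector(floats[3 * i], floats[3 * i + 1], floats[3 * i + 2])
    md.SetArray(c4d.MODATA_MATRIX, marr, True)
    return True
```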

Alternative: skip the particle behaviour generation in vvvv, use OSC and X-Particles, and capture in c4d.
I don't know if this is as real-time, or as easy to set up.

Hello again!

So now I'm curious about getting a vvvv scene into Cinema 4D (e.g. export to OBJ).
One way could be some sort of low-level Renderer (DX11 Export) type of node, which accepts DX11 layers and, per frame, renders the output of the vertex/geometry shader (it would need standardised semantics for texture coordinates) and exports it to OBJ or another format. This way you could export any scene that you can connect to a DX11 Renderer.
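Whatever captures the post-vertex-shader geometry, the writing end is simple. A minimal sketch, assuming positions, UVs and triangle indices have already been read back from the GPU into plain lists (function and argument names are hypothetical):

```python
def write_obj(path, positions, uvs, triangles):
    """positions: [(x, y, z)], uvs: [(u, v)], triangles: [(i0, i1, i2)], 0-based."""
    with open(path, "w") as f:
        for x, y, z in positions:
            f.write("v %f %f %f\n" % (x, y, z))
        for u, v in uvs:
            f.write("vt %f %f\n" % (u, v))
        for a, b, c in triangles:
            # OBJ indices are 1-based; here the same index is reused for vertex and uv
            f.write("f %d/%d %d/%d %d/%d\n" % (a + 1, a + 1, b + 1, b + 1, c + 1, c + 1))
```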

I presume this might be even easier (or at least significantly different) under Xenko.

You can try my OBJ exporter, it works quite well. Calculating a mesh from an image, then boning and animating it?

I need to fix subsets though; it's not going to work with layers.