I have an abstract animation made with C4D (with cloth tag), and baked in PLA (Point Level Animation).
I’d like to play this animation in vvvv/dx11.
I can’t export it via Collada, as Collada doesn’t support PLA, only Position/Rotation/Scale animation.
But I can export it as FBX. Assimp seems to read FBX files in vvvv, but I can’t get the animation.
I found a lot of similar questions on the forum, but no answers so far:
My only hope seems to be the FBX4V contrib as demonstrated by Microdee here: https://vvvv.org/blog/i-am-from-fbx4v
But Microdee, in that article you talk about blendshapes, not PLA. Will your contrib be able to play PLA animations too?
Or can you think of any other way to export a cloth animation to vvvv?
Hi, proper 3D format support is one of the most painful topics in vvvv …
Anyway, your option for now is to bake each frame of your animation to an OBJ file, use GeometryFile (Assimp) to load the spread of files, and then GetSlice the frame you need.
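Outside vvvv, the same frame-sequence idea can be sketched in Python; a minimal sketch, where the OBJ parser only reads `v` lines, and the `frame_*.obj` naming, `fps` and clamping are my assumptions, not part of any vvvv node:

```python
import glob
import os

def load_obj_vertices(path):
    """Parse only the 'v x y z' lines of an OBJ file into a list of tuples."""
    verts = []
    with open(path) as f:
        for line in f:
            parts = line.split()
            if parts and parts[0] == "v":
                verts.append(tuple(float(c) for c in parts[1:4]))
    return verts

def load_sequence(folder, pattern="frame_*.obj"):
    """Load every baked frame in name order -> one vertex list per frame."""
    files = sorted(glob.glob(os.path.join(folder, pattern)))
    return [load_obj_vertices(p) for p in files]

def frame_at(sequence, t, fps=30.0):
    """GetSlice equivalent: pick the frame for time t (seconds), clamped."""
    i = min(int(t * fps), len(sequence) - 1)
    return sequence[i]
```

As noted below, this only stays practical for short animations, since every frame is a full copy of the mesh on disk.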
There are more elegant ways to do this (baking the positions into a texture, or binary-reading the PLA data), but the OBJ one is the simplest and effective for short animations…
Unfortunately FBX4V won’t support PLA / vertex cache either (and to be honest there’s not much chance something like that gets made for vvvv out of the box). For clothing, however, baking it into blendshapes will do the job (and FBX4V supports that too). I’m currently busy with life and a totally unrelated project, but when I find the time it will be released.
Well, how heavy is the sequence?
Are there any errors if you load just one file with GeometryFile (Assimp)? Like, do you have texture coordinates, can you see the object in the renderer?
About baking positions to DDS, you have this forum post: Render Shading to Texture - #2 by antokhio. What that does is render whatever you want into a texture for your mesh, so you can write the vertex positions as colors, record them as DDS, and then sample them later in the vertex shader.
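The bake-and-sample round trip can be illustrated outside the shader world with a small Python sketch (plain lists instead of HLSL/DDS; the tiny texture size and the nearest-neighbour lookup are assumptions for the demo, not the actual sampler behaviour):

```python
SIZE = 4  # tiny square "position texture" for the demo

def bake_positions(uvs, positions, size=SIZE):
    """Write each vertex position into the texel addressed by its UV --
    the Position2Texture idea, with positions stored as 'colors'."""
    tex = [[(0.0, 0.0, 0.0)] * size for _ in range(size)]
    for (u, v), pos in zip(uvs, positions):
        x = min(int(u * size), size - 1)
        y = min(int(v * size), size - 1)
        tex[y][x] = pos
    return tex

def sample_position(tex, u, v):
    """Vertex-shader side: nearest-neighbour lookup of the baked position."""
    size = len(tex)
    x = min(int(u * size), size - 1)
    y = min(int(v * size), size - 1)
    return tex[y][x]
```

With nearest-neighbour sampling the positions come back exactly; the trouble described below only starts once the real sampler interpolates linearly between texels.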
Sorry for the late response, I needed time to experiment with your patch!
Thanks for the quick answer, and thank you for the Position2Texture shader, it’s brilliant!
It works great, but I have a problem: it sometimes deforms the mesh on the Y axis.
This may be due to some subsampling problem, e.g.
a UV coordinate sitting right at the border of the texture: because of the linear interpolation that spot samples black, which results in a vertex at the 0,0,0 point. This kind of technique applies well to grids; with complex models it depends quite a lot on the quality of the UV mapping, I guess. I might be able to do a few more tests next week; so far I haven’t found where the mistake might be. I think the mesh is too complex…
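Here is what that interpolation problem looks like in numbers, as a toy 1-D Python sketch; it assumes the sampler blends a valid texel with a black (zero) neighbour at the border of the UV island:

```python
def lerp(a, b, t):
    """Linear interpolation, the same blend a bilinear sampler does per axis."""
    return a + (b - a) * t

# One valid texel holding a baked position component of 5.0; its neighbour
# (outside the UV island, i.e. a hole) is black = 0.0.
valid, black = 5.0, 0.0

# Sampling exactly at the texel centre is fine:
centre = lerp(valid, black, 0.0)   # 5.0

# Sampling half a texel off, right at the border, blends in the black,
# so the reconstructed vertex drifts toward 0,0,0:
border = lerp(valid, black, 0.5)   # 2.5
```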
Yes, that’s it! (the vertex at the 0,0,0 point)
Thanks for finding it
After some investigation on multiple models, the behavior is always the same: holes in the mesh result in black areas in the PositionToTexture picture.
So in the final renderer, all the holes of the model are connected to the 0,0,0 point.
When I use a complex object (and complex uv) without holes, everything is working perfectly.
Here is an example patch with a simple cube with holes and cubic UV:
Hi, it should be possible to do a test in the geometry shader, e.g. if any vertex of a triangle is at 0,0,0, don’t draw that triangle.
I’m pretty sure this happens due to some subpixel offset somewhere (e.g. a rounding error). Anyway, here you go, the discard version; for 3D-scanned stuff it should work.