I have a set of 9 meshes from a .dae file; it is meant to be used for a mapping project.
All the objects share the same origin, so when I load them in vvvv they are properly positioned in relation to each other.
My problem is that when I try to change the transformation matrix of just one object for fine tweaking (I am using a spread of 9 transformations with a Cons (Transform) node), the transformation is referenced to the origin, which lies outside the object; it would be better to have a transformation whose origin is the geometry of the single object.
If I change the origin of each object to its own geometry in Blender, the objects no longer preserve their distances relative to each other.

Another problem I am facing: although I can easily apply a transformation to each single object, I cannot work out how to set the proper spread of transformations for the vertices.
Since they are all cubes (6 faces, 24 vertices each), I'd need to apply the first transformation to the first 24 vertices, the second transformation to the second set of 24 vertices, and so on.
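Outside of vvvv, the per-cube idea can be sketched in plain numpy: walk the vertex buffer in blocks of 24 and apply the matching 4x4 matrix to each block. This is just an illustration of the grouping, not vvvv's internal behaviour; the function name and array layout are assumptions.

```python
import numpy as np

VERTS_PER_CUBE = 24  # 6 faces x 4 vertices, as in the meshes above

def apply_per_cube(vertices, transforms):
    """Apply transforms[i] (a 4x4 matrix) to the i-th block of 24 vertices.

    vertices: (N * 24, 3) array of positions; transforms: list of N 4x4 matrices.
    """
    out = np.empty_like(vertices, dtype=float)
    for i, m in enumerate(transforms):
        block = vertices[i * VERTS_PER_CUBE:(i + 1) * VERTS_PER_CUBE]
        # lift to homogeneous coordinates so translations apply too
        homog = np.hstack([block, np.ones((len(block), 1))])
        out[i * VERTS_PER_CUBE:(i + 1) * VERTS_PER_CUBE] = (homog @ m.T)[:, :3]
    return out
```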

About transformation hierarchies I can think of two things:

1. Make sure every face you want to control in vvvv is its own subset (by assigning it a material). Then in vvvv each subset will be an individual slice of the .x or .dae file.
2. Using bounding boxes/spheres (inside your 3D modelling program) lets you create and move the transformation origin points.

Materials and textures will come later; I am fairly OK with the idea of using UV mapping (what about the fact that texture translation does not work on DrawFixed?).

I can easily change the origin of each element wherever I want; the problem is that:

Case a) the origin is the same for all objects: I get the proper distance relations among them, but scaling in vvvv is a bit difficult because the origin is outside the object.

Case b) the origin of each object is its own geometry: in this case I can easily transform each object in vvvv, but the objects lose the relative positions that we worked so hard to figure out with photometry…

Look at the Transform output of the Collada mesh node; you should get one transform for each cube. To scale each cube separately you'll need to apply the inverse of the base transform (the one from the Collada node), then do your scaling, then do the inverse again and feed that to the shader.

So the node chain should look something like this:
transform out -> get slice -> inverse -> scale/translate/rotate -> inverse -> cons -> shader transform in
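Algebraically, the inv → tweak → inv chain is a change of basis: conjugating a tweak X by the cube's base transform B gives B · X · B⁻¹ (or B⁻¹ · X · B, depending on your row/column-vector convention), which performs X relative to the cube's own pivot instead of the global origin. A minimal numpy sketch under the column-vector convention (the helper names are made up for illustration):

```python
import numpy as np

def conjugate(base, tweak):
    """Perform `tweak` in the local frame defined by `base`:
    world -> local (inverse of base), apply tweak, local -> world (base)."""
    return base @ tweak @ np.linalg.inv(base)

def apply(m, p):
    """Apply a 4x4 matrix to a 3D point via homogeneous coordinates."""
    return (m @ np.append(p, 1.0))[:3]

# base transform: the cube's pivot sits at x = 2 in world space
base = np.eye(4)
base[0, 3] = 2.0

# tweak: uniform scale by 2
scale = np.diag([2.0, 2.0, 2.0, 1.0])

m = conjugate(base, scale)
```

The conjugated matrix leaves the cube's own pivot (2, 0, 0) fixed and scales the rest of the cube away from it, which is exactly the behaviour the inv/inv chain produces in the patch.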

Hi,
OK, I am looking into it right now… thanks a lot. That works fine except for rotations: I had to move the rotations out of the inv/inv transformation chain so that they use the origin of the object as pivot point and not the center of the renderer.

transform out -> get slice -> rotate -> inverse -> scale/translate -> inverse -> cons -> shader transform in

Now the last piece to get it right is to be able to have the objects rotate around their own reference axes and not the global ones; is that possible?

Are you sure you didn't just mix up the matrix multiplication order?
I mean, it gets confusing sometimes, and
inv -> rotation -> translation -> inv
is something different than
inv -> translation -> rotation -> inv
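The order really does matter because matrix multiplication is not commutative. A quick numpy check (column-vector convention, so the rightmost matrix acts first; variable names are just for illustration):

```python
import numpy as np

# translate by (1, 0, 0)
t = np.eye(4)
t[0, 3] = 1.0

# rotate 90 degrees around the Z axis
r = np.array([[0.0, -1.0, 0.0, 0.0],
              [1.0,  0.0, 0.0, 0.0],
              [0.0,  0.0, 1.0, 0.0],
              [0.0,  0.0, 0.0, 1.0]])

p = np.array([0.0, 0.0, 0.0, 1.0])  # a point at the origin

rotate_then_translate = t @ r  # rotation happens first
translate_then_rotate = r @ t  # translation happens first
```

Rotating first leaves the origin where it is and the translation moves it to (1, 0, 0); translating first moves the point off-axis, so the rotation swings it around to (0, 1, 0). Two different results from the same two matrices.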

Hi,
I tried creating 3 cubes in Blender, and after exporting the .dae to vvvv I can see the correct behaviour: transformations before the inv -> transform -> inv are local, while the transformation between the inv nodes is global.
But with the .dae file I am using for this project they behave differently; I am attaching the two sets of files in case you can have a look.

In your patch you connected the TransformIn instead of the Source pin of the Inverse node in 4 cases; are you sure that's what you wanted to do?
And yes:
transform out -> translation / rotation / scaling <- local space
transform out -> inv -> translation / rotation / scaling -> inv <- global space

It worked here perfectly, with both .dae files. I'll attach the patch.