Lens distortion when moving objects between two renderers

Hello again,

I have trouble understanding the Distortion pin on the Camera (Transform Softimage) node.

I want to move a quad in circles around two cameras that are looking north and east, as in the example patch.

The problem is that the image distorts towards the edges of the two renderers, which gives the impression that the object comes nearer when the quad moves from one window into the other via the rotating node.

I tried connecting Transform and all the different Perspective nodes to the Distortion pin of the camera, but it didn't work the way I wanted it to.

Are there other nodes that are better suited for this?

Or isn't it possible to get this working at all?

Thanks for the help :)

LensdistortionExamplePatch.v4p (20.9 kB)

For this you should use MultiScreen (EX9); it will take care of that for you.

Check the help file for instructions on how to use it (F1).

OK, I tried it and I'm still having the same trouble with the edge distortion of the camera as before…

Is this because of the rotation of the cameras? I actually had to move their initial interest a bit after rotating them.

With a flat matrix of screens it was working fine.

But is it also suitable for cameras pointing 90 degrees apart, like in the picture sketch I uploaded?

Sorry for asking so much…

Multiscreen test.v4p (41.0 kB)

Ah, no JPG…

I made another noobish sketch of my problem ;)

Do I need something like a 3D model of the room to undistort the image in the renderer, so that I get a correct quad in each corner of the actual room?

This is called projective texturing; there is a tutorial on video mapping: https://vvvv.org/documentation/how-to-project-on-3d-geometry
Also, if you want to have it for four walls at the same time, you need to check the cubemap renderer for DX11.
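
In case it helps to see the math behind it: projective texturing just means running each point of the room model through the projector's view and projection matrices and using the projected position as texture coordinates. A minimal numpy sketch of the idea (the matrices and the helper name here are made up for illustration, not taken from any patch):

```python
import numpy as np

def projective_uv(world_pos, view, proj):
    """Project a world-space point through a projector's view/projection
    matrices and return the [0,1] texture coordinates it lands on."""
    p = np.append(world_pos, 1.0)        # homogeneous coordinates
    clip = proj @ view @ p               # into the projector's clip space
    ndc = clip[:3] / clip[3]             # perspective divide -> [-1, 1]
    u = ndc[0] * 0.5 + 0.5               # NDC x -> u
    v = 1.0 - (ndc[1] * 0.5 + 0.5)       # NDC y -> v (flipped for texture space)
    return np.array([u, v])
```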

Great, thanks a lot :)

You also have to be aware that this illusion only works from a single point of view. The further the observer moves away from this point, the worse the illusion gets…

The last image you posted shows some evidence of the basic difference between quads facing the observer's viewpoint (the one checked green) and quads staying parallel to a viewport/screen plane (a billboarded “sprite”, the one X'ed in red). The latter will hardly ever look good; the former solves the problem of continuity across hard edges even with natural perspective (as you would get with every cubemap, by the way, which renders each of its six faces with an automatic FOV of 0.25 cycles, i.e. 90°, so together they cover the full sphere).

For the quads, take a LookAt node, feed each quad's position into Position, and for Target use the central point of the best user experience (which would be the singular position of your camera spread). The resulting Transform needs an Invert before going into the quad.
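
If it helps, here is a rough numpy sketch of what LookAt followed by Invert computes under the hood (the axis conventions are assumptions and may differ from vvvv's):

```python
import numpy as np

def look_at(position, target, up=np.array([0.0, 1.0, 0.0])):
    """Build a view matrix looking from `position` towards `target`."""
    z = target - position
    z = z / np.linalg.norm(z)               # forward axis
    x = np.cross(up, z)
    x = x / np.linalg.norm(x)               # right axis
    y = np.cross(z, x)                      # recomputed up axis
    view = np.eye(4)
    view[0, :3], view[1, :3], view[2, :3] = x, y, z
    view[:3, 3] = -view[:3, :3] @ position  # move the world into view space
    return view

# The quad needs the opposite of a view matrix: a world transform that
# places it at `position` facing `target` -- hence the Invert node.
quad_transform = np.linalg.inv(look_at(np.array([2.0, 0.0, 0.0]),
                                       np.array([0.0, 0.0, 0.0])))
```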

Not sure if this is your problem, so take it with a grain of salt, but in a 360° environment it is generally valid to almost never think in terms of “screen space”. Try to mind the eye point of the user instead, the thing I like to call the “Point of Immersion”.

It is not exactly clear what you mean by cameras in your pics, so I presume they are not physical, but actually a spread of Camera modules in your patch representing physical projectors.
In case I am guessing right, don't use those to capture the original 3D scene. Instead, render a cubemap or a texture array from a central location (the point of immersion) and then use a second render pass to project it onto a virtual representation of your canvases. That makes it easier to deal with actual beamer settings independently of any sweet-spot considerations.
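
In pseudocode, that second pass boils down to: for every point on a virtual canvas, take the direction from the point of immersion to that point and use it to sample the cubemap. A hedged sketch (the function names are stand-ins, not real nodes or APIs):

```python
import numpy as np

def shade_canvas_point(canvas_point, point_of_immersion, cubemap_sample):
    """Second render pass: colour a point on a virtual canvas by looking up
    the cubemap along the direction from the point of immersion."""
    direction = canvas_point - point_of_immersion
    direction = direction / np.linalg.norm(direction)
    return cubemap_sample(direction)  # cubemap_sample is a made-up stand-in
```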

Anyway, measuring your physical setup and adjusting your virtual representation seems key for any installation like yours.