if that was you, i would consider routing the original view and projection matrices to where you need them.
if it wasn’t you, you probably can’t tell how they were built, like: is there a scaling component in the view matrix? is there a translation component in the projection matrix? if either of those is true, it definitely gets impossible.
if you can rely on
** the view matrix being built from position and orientation only, and
** the projection matrix being built from fov and aspect ratio only,
then it could be possible.
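The first assumption can actually be tested: a view matrix built from position and orientation only has an orthonormal upper-left 3×3 block. A minimal numpy sketch (the matrices here are made up for illustration, not taken from the thread):

```python
import numpy as np

def is_pure_rotation(view, tol=1e-6):
    """True if the upper-left 3x3 of `view` is a pure rotation
    (orthonormal, i.e. no scaling or shear component)."""
    r = view[:3, :3]
    return bool(np.allclose(r @ r.T, np.eye(3), atol=tol))

# a view matrix built from rotation + translation only passes the check
a = np.radians(30)
rot = np.array([[np.cos(a), -np.sin(a), 0],
                [np.sin(a),  np.cos(a), 0],
                [0,          0,         1]])
view = np.eye(4)
view[:3, :3] = rot
view[:3, 3] = [1.0, 2.0, 3.0]

print(is_pure_rotation(view))                         # True
print(is_pure_rotation(view @ np.diag([2, 2, 2, 1])))  # False: scaling baked in
```

If the check fails, the decomposition described below will fold the extra scale into the recovered position and rotation.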
I’m getting a single transform from a Padé approximant plugin that seems to be the view and projection combined.
This successfully transforms a renderer’s view transform to display a correct image from a projector on an object after calibration.
I’m trying to use this to work out the translate and rotate transforms that would be necessary to position a projector node to give the same view of the object.
I configured a perspective (transform offcenter) node with the throw ratio and lens-shift settings of the projector being used, then multiplied the inverse of this transform with the view-projection from Pades. Then, using another inverse and a decompose node, I found values for the position and orientation of the view.
These values have positioned the projector node in what appears to be almost the correct place, but not quite.
Oddly, when adjusting the near and far plane of the perspective node, the translation of the view position changes drastically in Z. Do you have any idea why this might be? The scale output from the decompose node shows a Z scale of 8 for some reason. I suppose Pades finds it hard to calculate the correct Z scale?
i had a look at it again, and am now confident that the problem is the decompose node, which was never designed to decompose matrices that contain shear or perspective components.
but what helped in my example was to decompose at another point in the patch, where perspective distortion didn’t influence the crucial entries in the matrix that decompose needs to extract position, scaling and rotation information.
what you got is a vp (view-projection) matrix.
you want inv(v) (to position the camera and rotate it in world space; this is the inverse of seeing the world through the camera (v)).
v * p = vp;
-> vp * inv(p) = v * p * inv(p) = v;
at this point, decompose fetches the position and rotation of the world relative to the camera and rebuilds a matrix, ignoring the scaling of the world relative to the camera…
now take the inverse and another decompose to get the actual position and rotation of the camera within the world…
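The whole pipeline can be sketched in numpy. Note the convention flip: the thread writes `vp * inv(p)` in a row-vector convention; with numpy column vectors the same step reads `inv(p) @ vp`. The camera pose and lens values below are made up for illustration:

```python
import numpy as np

def perspective(fovy, aspect, near, far):
    """Symmetric perspective matrix, column-vector convention."""
    f = 1.0 / np.tan(fovy / 2)
    m = np.zeros((4, 4))
    m[0, 0] = f / aspect
    m[1, 1] = f
    m[2, 2] = (far + near) / (near - far)
    m[2, 3] = 2 * far * near / (near - far)
    m[3, 2] = -1.0
    return m

def rotation_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

# build a known camera pose, then pretend we only have vp
cam_pos = np.array([2.0, 1.0, 8.0])
cam_rot = rotation_y(np.radians(25))
cam_world = np.eye(4)            # camera-to-world, i.e. inv(v)
cam_world[:3, :3] = cam_rot
cam_world[:3, 3] = cam_pos
v = np.linalg.inv(cam_world)     # world-to-camera: the view matrix

p = perspective(np.radians(45), 4 / 3, 0.1, 100.0)
vp = p @ v                       # this product is all the calibration hands you

# step 1: strip the projection  ->  the view matrix
v_rec = np.linalg.inv(p) @ vp
# step 2: invert the view       ->  the camera pose in the world
pose = np.linalg.inv(v_rec)

print(np.allclose(pose[:3, 3], cam_pos))   # True: recovered position
print(np.allclose(pose[:3, :3], cam_rot))  # True: recovered orientation
```

This only works out cleanly because `p` here matches the projection baked into `vp` exactly, which is the “rely on fov and aspect ratio only” condition from above.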