Hi All

first post in the new forum :)

I am trying to figure out a way to measure the on-screen distance between a position I input and the on-screen position of a corresponding vertex in a mesh.

sunep

not sure if I fully understand what you’re up to, but wouldn’t Intersect (3D Ray) help?

Hello,

So you want to measure the distance in x and y between two positions?

If you’re not using view/projection transforms, then your objects’ xy coordinates can be compared directly, but if you’re using a camera or something you’ll need to compensate for it. I think Inverse (Transform) should be able to help you there.

For instance, if you have regular -1 to 1 xy coordinates that you want to transform onto the screen area, you can use Inverse on the ‘ViewProjection’ pin from your camera and connect this to the input of whichever transform node you use to position the object.

So, I ‘think’ if you invert the ViewProjection of your camera and use * (3d Vector) with this and your object’s xyz position, you should get an xyz result that represents the object’s position on screen, which you can then compare with other coordinates using, say, Points2Vector.

I might be wrong, though.

Hope that helps?

Perhaps I should expand a bit on my explanation.

I am trying to develop a method to avoid the tedious measuring and adjustment when doing a three-dimensional projection mapping.

I am basing this on the projector node and have so far come to the conclusion that I need to adjust the following parameters:

translation of the projector, X,Y,Z

orientation of the projector, X,Y,Z

Zoom Ratio

Lens Shift X,Y

As far as I can see, these 9 parameters are the challenge to calculate when making a mapping.

I set out trying to do the math so that, if I located enough vertices from the geometry, I could calculate backwards and get the needed settings, or alternatively calculate view and projection matrices that could be applied to the renderer.

I gave it a go but my linear algebra is very far back in my mind.

so I have come up with another idea that is not as elegant as calculating everything, but could work.

the idea is to manually position a number of points from the geometry. I would then measure the distance from the manually positioned points to where the corresponding vertices end up on the projection using the initial parameters on my projector node.

I will then add some noise to the parameters and use a genetic method, with the mean and variance of the on-screen distances between the manually positioned points and the positions produced by the generated parameters as the measure of success.
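Outside of vvvv, the idea above can be sketched roughly like this (a minimal stochastic hill-climbing sketch, not a full genetic algorithm; `project_to_screen` is a placeholder for whatever projects a vertex through the projector parameters, which is exactly the missing piece discussed below):

```python
import random

# Sketch of the idea: jitter the projector parameters with noise,
# score each candidate by the on-screen distance between the manually
# placed points and the projected vertices, and keep the best candidate.
# project_to_screen(params, vertex) -> (x, y) is a hypothetical stand-in
# for the projector node's own transform.

def fitness(params, targets_on_screen, vertices, project_to_screen):
    """Mean squared on-screen distance; lower is better."""
    total = 0.0
    for v, (tx, ty) in zip(vertices, targets_on_screen):
        sx, sy = project_to_screen(params, v)
        total += (sx - tx) ** 2 + (sy - ty) ** 2
    return total / len(vertices)

def evolve(initial_params, targets, vertices, project_to_screen,
           generations=200, population=50, noise=0.05):
    """Add Gaussian noise to the best parameters and keep improvements."""
    best = list(initial_params)
    best_score = fitness(best, targets, vertices, project_to_screen)
    for _ in range(generations):
        for _ in range(population):
            candidate = [p + random.gauss(0, noise) for p in best]
            score = fitness(candidate, targets, vertices, project_to_screen)
            if score < best_score:
                best, best_score = candidate, score
        noise *= 0.99  # anneal the mutation size over time
    return best, best_score
```

A real genetic method would keep a whole population and recombine candidates, but the fitness measure (aggregate on-screen distance) would be the same.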

The part of this I don’t know how to get is the position in screen coordinates of an arbitrary vertex.

I will try the suggestions here, but somehow I feel the info must be available in a more direct way, since the GPU has already done the calculation.

well, I now forgot what my new question was, but comments are welcome anyway.

sunep

Bitminster is right, except for the inverse. To transform a vertex from object space to screen space you need to put it into world space, then view space and finally screen space, which is done by matrix/vector multiplication:

V_screen = V_object * M_world * M_view * M_proj

so get the matrices, multiply them with * (Transform), and apply the result to the vertex:

screenspace.v4p (8.2 kB)
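For reference, the chain above can be sketched in plain Python using the row-vector convention (v * M) that the formula implies. The matrices here are made-up stand-ins; in the patch they would come from the transform and camera nodes:

```python
# Minimal sketch of V_screen = V_object * M_world * M_view * M_proj.
# Matrices are 4x4 lists of rows; translations sit in the last row
# (row-vector convention). The view and projection matrices below are
# placeholder examples, not real camera matrices.

def mat_mul(a, b):
    """Multiply two 4x4 matrices (lists of rows)."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def transform(v, m):
    """Transform a 3D point by a 4x4 matrix, with perspective divide."""
    x, y, z = v
    res = [x * m[0][i] + y * m[1][i] + z * m[2][i] + m[3][i] for i in range(4)]
    w = res[3]
    return (res[0] / w, res[1] / w, res[2] / w)

IDENTITY = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]

def translation(tx, ty, tz):
    m = [row[:] for row in IDENTITY]
    m[3][0], m[3][1], m[3][2] = tx, ty, tz
    return m

# Compose world -> view -> projection once, then apply it per vertex:
world = translation(1.0, 0.0, 0.0)
view = translation(0.0, 0.0, 5.0)  # stand-in for a camera view matrix
proj = IDENTITY                    # stand-in for a projection matrix
wvp = mat_mul(mat_mul(world, view), proj)
screen = transform((0.0, 0.0, 0.0), wvp)
```

With a real projection matrix, the perspective divide by w is what maps the vertex into the -1 to 1 screen space; that result is what you would compare against the manually placed points.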

thanks tonfilm, very useful. i tackled a similar problem on the big multitouch but somehow solved it differently. your approach looks much better, as always ;)