Hi everyone,

currently I’m trying to pass the coordinates of a tracked person to another device, which should then look in the direction of the tracked person. To do that, I need to convert the position of the tracked person into the coordinate system of the other device, whose position and rotation I know (see image). However, I can’t seem to figure out how to continue from there. Can anybody help?

well, you can do that with ApplyTransform (3d Vector); however, if you do this “by hand” the result won’t be that precise…

basically you can start from knowing that your Kinect camera sits exactly at the (0,0,0) point, so then you can apply a transform to move the whole thing wherever you want…

I was more stuck on the logic behind it, not the usage of particular nodes. However, sometimes you get stuck on the simplest things. Of course, I just have to apply the Kinect-to-device transformation to the coordinates of the person in the Kinect’s coordinate system…
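Spelled out, that logic is: if the device’s pose in Kinect space is a rotation R and a position t, then a point p (the tracked person, in Kinect coordinates) lands at R⁻¹(p − t) in device coordinates. A minimal sketch in plain Python (the function names and the yaw-only rotation are my own simplifying assumptions, not anything from the patch):

```python
import math

def yaw_matrix(deg):
    # rotation about the Y (up) axis, as a row-major 3x3 matrix
    a = math.radians(deg)
    return [[math.cos(a), 0.0, math.sin(a)],
            [0.0,         1.0, 0.0],
            [-math.sin(a), 0.0, math.cos(a)]]

def transpose(m):
    return [list(row) for row in zip(*m)]

def mat_vec(m, v):
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

def kinect_to_device(p, device_pos, device_yaw_deg):
    # subtract the device position, then undo its rotation;
    # for a pure rotation matrix the inverse is just the transpose
    d = [p[i] - device_pos[i] for i in range(3)]
    r_inv = transpose(yaw_matrix(device_yaw_deg))
    return mat_vec(r_inv, d)

# person 2 m in front of the Kinect; device 1 m to the side,
# rotated 90 degrees about the vertical axis
print(kinect_to_device([0.0, 0.0, 2.0], [1.0, 0.0, 0.0], 90.0))
```

The same math is what an ApplyTransform with an inverted device transform does for you; the sign conventions (which way 90° turns) depend on the handedness of your coordinate system.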

Thanks!

you can take a look at https://rulr.hackpad.com/ep/profile/sxdojoxkRMW: this software by Elliot Woods is used to calibrate Kinect and projector alignment, and you can use it to get a more accurate result…

this looks amazing! Will try it out asap :)

As @antokhio said, use ApplyTransform to get Skeleton points in the space of the other device.

You need to know the transformation of the device relative to the Kinect. The Kinect Skeleton returns the positions of the joints in meters, where the coordinate system origin (0,0,0) is the depth camera.

If your device is, for example, a 2D camera and you want to overlay the skeleton points onto the output image, you need to calibrate the intrinsics and extrinsics of the camera.
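For that overlay case, once a skeleton point is expressed in the camera’s space (extrinsics applied), the intrinsics map it to pixel coordinates. A minimal pinhole-model sketch (the focal lengths and principal point are made-up example values, and lens distortion is ignored):

```python
def project_point(p, fx, fy, cx, cy):
    # pinhole model: perspective divide by depth, then scale by the
    # focal lengths and offset by the principal point (all in pixels)
    x, y, z = p
    if z <= 0:
        return None  # point is behind the camera
    return (fx * x / z + cx, fy * y / z + cy)

# a joint 0.5 m to the right and 2 m in front of a 640x480 camera
print(project_point((0.5, 0.0, 2.0), fx=525.0, fy=525.0, cx=320.0, cy=240.0))
```

Calibration tools (e.g. chessboard-based routines) give you the real fx, fy, cx, cy plus distortion coefficients for your specific camera.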

One technique for finding the mutual relative positions of multiple (depth) cameras that I recently started to explore is using photogrammetry software. In the example image you see a studio with three Kinects and dozens of 2D camera positions. I have registrations (intrinsic and extrinsic parameters) for all the devices in the space, so it’s easy to, say, view skeleton points from a selected Kinect aligned to any other registered device.

If you think about it, exactly the same process as in your picture happens to (almost) every vertex when rendering every frame: the coordinates of an object (its vertices, which could be Kinect Skeleton points) are transformed into the coordinates of the screen (which would be the Other Device in your drawing).

float4x4 tW : WORLD;           // object space (e.g. Kinect) -> world space
float4x4 tVP : VIEWPROJECTION; // world space -> screen (clip) space

// row-vector convention: apply the world transform first, then view-projection
output.positionScreen = mul(input.positionObject, mul(tW, tVP));

This topic was automatically closed 365 days after the last reply. New replies are no longer allowed.