Distort Kinect depth image


has anyone tried distorting the kinect depth image, as if seen from another perspective ?

thing is, my kinect is pointing down at about 45°, mounted 2m high. what i’m trying to achieve is transforming the depth texture as if it were seen from 1m, pointing straight ahead.

transforming the world depth texture sort of works, but what i really would like is a distorted depth texture which i can pipet, like having a straight ray pointing into the room, just from a different position than where my kinect actually is.


what about rendering the kinect data as a point cloud from the desired perspective? you can colorize the pointcloud by kinect depth and then sample from a virtual camera POV.


@mburk absolutely, good hint. i was thinking the same, but somehow hoped i could do it without the pointcloud, purely in 2D pixel space.

anyway, will try this right away


if you come up with a cheaper way, I would be interested as well :)


@mburk, i’d love to share if i can solve this riddle.
anyway, this is like your proposal. pipetting from a different perspective works, but i’m not happy yet… damn, this looks so easy, but it’s not.


It sounds like a static transformation, so a LUT should be able to do the job, no?
If you can calculate this LUT, which is a skewed space if I’m thinking correctly, there is a 3D LUT shader in contributions.


@eno, not sure if i can follow you here, it’s not just a color transformation. i’m already doing this, transforming the world texture. but what i’m actually aiming at is resampling the world texture (right image) to build a new world texture which looks more like the left image. notice the arm is in a different position.


You’re right … I thought it could be expressed as a color transformation. But then you cannot sample from that other perspective easily.

What if you make a compute shader that writes the xyz data (pixel color) to a new uv coordinate, which depends again on the pixel color multiplied by a transformation matrix? Then you can sample based on the uv of the virtual camera.
Of course it might be that some points write to the same pixel/cell and you’re losing some information. But that is also what happens with the point cloud when it overlaps from another perspective …
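For illustration, the scatter idea above can be sketched on the CPU with numpy (just a sketch of the technique, not vvvv shader code; the virtual camera’s `view` matrix and pinhole intrinsics `fx, fy, cx, cy` are made-up parameters):

```python
import numpy as np

def reproject(xyz, view, fx, fy, cx, cy, w, h):
    """Scatter world-space points into a depth map as seen by a
    virtual camera. Points landing on the same pixel keep the
    nearest one (a poor man's z-buffer), which is exactly where
    information can get lost, as described above."""
    # transform points into the virtual camera's space
    p = xyz @ view[:3, :3].T + view[:3, 3]
    z = p[:, 2]
    valid = z > 0
    # pinhole projection to pixel coordinates
    u = np.round(p[valid, 0] * fx / z[valid] + cx).astype(int)
    v = np.round(p[valid, 1] * fy / z[valid] + cy).astype(int)
    z = z[valid]
    depth = np.full((h, w), np.inf)
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    # np.minimum.at keeps the closest point per pixel
    np.minimum.at(depth, (v[inside], u[inside]), z[inside])
    depth[np.isinf(depth)] = 0.0  # unwritten pixels stay as holes
    return depth
```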

Isn’t a compute shader easier to use in this regard than a pipet, with the same cost for the read-back?


then of course you have the problem with the holes, where you can’t sample in between.
maybe a vertex shader is the way to go, returning a closed topology which you can render and sample from again.
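Short of a proper closed mesh, one cheap way to patch small holes between scattered points is a dilation pass; a minimal sketch of the idea, assuming 0 marks a hole:

```python
import numpy as np

def fill_holes(depth, passes=1):
    """Fill zero-valued holes with the minimum (nearest) non-zero
    4-neighbour. Repeated passes grow the filled region; a vertex
    shader with a closed topology would do this properly, this is
    just a 2D fallback."""
    d = depth.copy()
    for _ in range(passes):
        holes = d == 0
        # pad and treat zeros as +inf so they never win the min
        padded = np.pad(d, 1, constant_values=0).astype(float)
        padded[padded == 0] = np.inf
        neighbours = np.stack([
            padded[:-2, 1:-1], padded[2:, 1:-1],   # up, down
            padded[1:-1, :-2], padded[1:-1, 2:],   # left, right
        ])
        nearest = neighbours.min(axis=0)
        fillable = holes & np.isfinite(nearest)
        d[fillable] = nearest[fillable]
    return d
```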


Hey Chris, well i think you’re over-complicating simple stuff. First off, the World Texture is actually XYZ coords written to a texture. So… what we know from that is that (0,0,0) is the Kinect origin, so basically by applying some transform to the kinect world texture you are actually moving the kinect camera, located at (0,0,0), around your scene… Technically that should be enough to position the kinect against the origin of your scene.

P.S. I would look into tmp’s kinect calibration stuff; might be that’s exactly what you need…


My first thought here is you just need a 4x4 transform on the world colour data, have you tried that, and it didn’t work?
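The 4x4 transform on the world XYZ data is indeed the first half of the problem; a minimal numpy sketch of what it does to each texel (the 45° pitch and 2m height are just example values matching the setup in this thread, and the sign convention depends on your axes):

```python
import numpy as np

def kinect_to_scene(xyz, pitch_deg=45.0, height=2.0):
    """Rotate/translate Kinect-space XYZ (as stored in the world
    texture) into scene space: undo a 45-degree downward pitch and
    lift by the mounting height. This only re-expresses the
    coordinates per texel; it does NOT resample the texture, which
    is the second half of the problem discussed here."""
    a = np.radians(pitch_deg)
    # rotation about the x axis (camera pitched down)
    rot = np.array([
        [1.0, 0.0, 0.0],
        [0.0, np.cos(a), -np.sin(a)],
        [0.0, np.sin(a), np.cos(a)],
    ])
    return xyz @ rot.T + np.array([0.0, height, 0.0])
```

With this, a point 2m straight ahead of the camera ends up near the floor, as you’d expect from a sensor tilted 45° down at 2m height.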


@antokhio and catweasel

i’m afraid that’s not the whole story. rotating/adjusting the original XYZ as seen from another perspective can be done like you say, but then i want to sample depths straight down from a grid of points, sort of a heightmap of this transformed space.

@eno, everything you say is right, you got the problem. i’ll consider the compute shader way. on the other hand, at some point the data has to be in CPU land anyway to send it over the network.

i talked to colorsound about it and he has done the same thing before. it’s basically as @mburk stated, sampling the pointcloud from a different view. the missing bit, which i was testing and colorsound confirmed: the pointcloud needs to be rendered with an orthographic projection to avoid distortion of the pointcloud. this works.
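The orthographic sampling described here boils down to binning the transformed points on a top-down grid and keeping one height per cell, i.e. a heightmap; a rough numpy equivalent (the grid size and extent are made-up values, not from the thread):

```python
import numpy as np

def ortho_heightmap(xyz, cells=64, extent=4.0):
    """Project scene-space points straight down onto a cells x cells
    grid covering [-extent/2, extent/2] in x and z, keeping the
    highest y per cell. Roughly what rendering the pointcloud with
    an orthographic top-down camera and reading back gives you."""
    # map x/z coordinates to grid indices
    ix = ((xyz[:, 0] / extent + 0.5) * cells).astype(int)
    iz = ((xyz[:, 2] / extent + 0.5) * cells).astype(int)
    inside = (ix >= 0) & (ix < cells) & (iz >= 0) & (iz < cells)
    hm = np.full((cells, cells), -np.inf)
    # keep the highest point per cell, like a top-down depth buffer
    np.maximum.at(hm, (iz[inside], ix[inside]), xyz[inside, 1])
    hm[np.isinf(hm)] = 0.0  # empty cells
    return hm
```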

i’ll stick with that for now, but will keep thinking about how i can avoid that pointcloud pass.
here is the result…

thanks colorsound and everybody for this discussion.