Kinect and Projector Scene Setup Workflow


I seem to be missing a fundamental understanding about projections here. I have objects to track with the Kinect; what I get from it is basically a WorldTexture. With some filtering and processing I can turn that texture into a b/w mask, which is, well… a texture.

How can I align that mask with the 3D space? When I calibrate the Kinect, all I get is the sensor’s position in space, but I need the mask to match my projector’s view, don’t I?


Hey, well that is totally possible, and the easiest way is with rulr… sadly there is no tutorial…

In general, your world texture’s RGB color is the XYZ position of that pixel relative to your Kinect.

There are a few other ways to get it going, but rulr is so far the best and shortest…

@antokhio unfortunately rulr is poorly documented. I found a few things, but they only seemed applicable when Kinect and projector are close to each other.

But what would be the old-fashioned way? I use the dx11.particles calibrator and get the Kinect’s transform; I use the projector calibrator and get the projector’s transforms. How do I match up that mask I created?

Edit: I should explain my workflow so far maybe.

  1. Getting the WorldTexture
  2. Filtering for a narrow region, approx. 4 m away and 2 m deep. This pretty much gives me 2 abstract, slowly moving objects.
  3a. Processing the filtered texture to get a b/w mask
  3b. Processing the filtered texture with VL to get the objects’ centers and dimensions (only in 2D)
  4. Aligning 2 quads with the coordinates from 3b
  5. Applying the mask from 3a to the whole screen

This gives me a good masking of the objects, but only from the Kinect’s viewpoint.
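To make the filtering-to-mask step concrete: a minimal sketch, assuming the world texture arrives as an H×W×3 float array whose RGB channels encode XYZ in meters relative to the Kinect (the function name and thresholds here are illustrative, not from the original patch):

```python
import numpy as np

def range_mask(world_tex: np.ndarray, z_near: float = 4.0, z_depth: float = 2.0) -> np.ndarray:
    """Turn a Kinect world texture (H, W, 3: XYZ per pixel) into a b/w mask.

    Pixels whose distance along the Kinect's view axis (Z) falls inside
    [z_near, z_near + z_depth] become white (1.0); everything else black.
    """
    z = world_tex[..., 2]                  # Z channel = distance from sensor
    valid = z > 0                          # depth 0 means "unknown" on the Kinect
    mask = valid & (z >= z_near) & (z <= z_near + z_depth)
    return mask.astype(np.float32)

# tiny synthetic example: one pixel inside the 4-6 m band, one outside
tex = np.zeros((1, 2, 3), dtype=np.float32)
tex[0, 0] = (0.1, 0.2, 4.5)   # 4.5 m away -> inside
tex[0, 1] = (0.0, 0.0, 8.0)   # 8 m away   -> outside
print(range_mask(tex)[0])      # [1. 0.]
```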

Can I just transform that whole setup to fit the projector?

Well, it’s normally done by getting some point’s 3D position in projector space, then getting the same point in Kinect space; then a fitting algorithm runs to align both spaces.
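That fitting step can be sketched with the Kabsch algorithm, a generic least-squares rigid fit over corresponding point pairs — rulr’s internal solver may differ, this is just the idea:

```python
import numpy as np

def fit_rigid(src: np.ndarray, dst: np.ndarray):
    """Least-squares rigid transform (R, t) with dst ~ R @ src + t.

    src, dst: (N, 3) arrays of the SAME physical points, measured in
    Kinect space and projector space respectively (Kabsch algorithm).
    """
    c_src, c_dst = src.mean(0), dst.mean(0)      # centroids
    H = (src - c_src).T @ (dst - c_dst)          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t

# sanity check: recover a known rotation + translation exactly
rng = np.random.default_rng(0)
pts = rng.random((6, 3))
a = np.deg2rad(30)
R_true = np.array([[np.cos(a), -np.sin(a), 0],
                   [np.sin(a),  np.cos(a), 0],
                   [0, 0, 1]])
t_true = np.array([10.0, 3.0, 0.0])              # e.g. projector far behind/above
R, t = fit_rigid(pts, pts @ R_true.T + t_true)
print(np.allclose(R, R_true), np.allclose(t, t_true))  # True True
```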

I know that sounds hard, but you have to try it with rulr; it’s not that hard and you will have perfect alignment.

The procedure is like here: CalibrateProjector (VVVV) - Demo - YouTube
You take a very flat surface (wood or plastic), project a chessboard onto it, then you press Add Capture; after a few shots you press Calibrate, and then there is an export somewhere…
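Conceptually, those chessboard captures feed a projection-matrix solve: each shot pairs 3D board corners with the 2D projector pixels that drew them. A bare Direct Linear Transform sketch of that core math (tools like rulr use more robust OpenCV-style calibration on top of this; all names here are illustrative):

```python
import numpy as np

def dlt_projection(pts3d: np.ndarray, pts2d: np.ndarray) -> np.ndarray:
    """Direct Linear Transform: solve the 3x4 matrix P with x ~ P @ X.

    pts3d: (N, 3) chessboard corners in world space (N >= 6),
    pts2d: (N, 2) projector pixel coordinates of those corners.
    Returns P up to scale (SVD null-space solution).
    """
    rows = []
    for (X, Y, Z), (u, v) in zip(pts3d, pts2d):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
    _, _, Vt = np.linalg.svd(np.asarray(rows))
    return Vt[-1].reshape(3, 4)      # smallest singular vector = flattened P

# sanity check against a known projection matrix
P_true = np.array([[800., 0., 320., 10.],
                   [0., 800., 240., 5.],
                   [0., 0., 1., 2.]])
rng = np.random.default_rng(1)
X = np.c_[rng.random((8, 3)), np.ones(8)]    # homogeneous 3D points
x = X @ P_true.T
uv = x[:, :2] / x[:, 2:]                     # perspective divide
P = dlt_projection(X[:, :3], uv)
P *= P_true[2, 3] / P[2, 3]                  # fix the overall scale ambiguity
print(np.allclose(P, P_true))
```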

Take rulr from here

@antokhio thanks for your help, mate. Before I take this to the stage, I have to ask one more thing (I hope just one more): does this calibration work with the Kinect and projector being far apart? The projector is 10 m behind and 3 m above the Kinect, so the projected chessboard will cover the whole stage where the Kinect is positioned…

There are size and position settings for the chessboard in rulr; you can move it around from the interface… and to get better alignment you have to move it around…

@antokhio Thank you for the rulr hint. It seems like the calibration I made there is fine (and super easy). But something goes wrong when importing the matrices into vvvv: the projector ends up pretty much somewhere else. This is a comparison between rulr’s projector position and vvvv’s (using the NODE15 workshop’s demo patch).

Hi, as I remember there were two types of output somewhere: one for vvvv and another one for openFrameworks. A second option would be to retry the calibration, as I think I had something similar where the projector ended up somewhere in outer space…

@antokhio I checked both export options; only the ofx version is usable. I also recalibrated, but that doesn’t seem to be the problem, since the projector’s position is displayed correctly in rulr. There must be something wrong in converting the ofx matrix to vvvv transforms.

EDIT: I just tried exporting a projector which I’d set manually in rulr, to see if it comes out the same when importing into vvvv. It does not: a projector at position (1, 1, 1) in rulr gets converted to (1, -1, -1) in vvvv.

I don’t think it was the ofx export; there should be a module with the workshop material. Try attaching both files and I’ll check what’s inside.

I finally made it. It is the ofxray camera that needs importing; it’s just that when using the Reader (ofxray) inside the module, the view is scaled by -z. With that corrected, everything works fine.
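For reference, that correction is a handedness flip between openFrameworks’ right-handed and vvvv/DirectX’s left-handed coordinates. A minimal sketch, assuming a column-vector 4×4 view matrix — which axes your particular import module needs flipped is an assumption to verify, not a given:

```python
import numpy as np

# Flip matrix: negates the Z axis (right-handed GL <-> left-handed DX)
FLIP_Z = np.diag([1.0, 1.0, -1.0, 1.0])

def ofx_to_vvvv(view_ofx: np.ndarray) -> np.ndarray:
    """Convert an ofx-style view matrix (column-vector, right-handed)
    to a vvvv-style transform by scaling the Z axis by -1."""
    return FLIP_Z @ view_ofx

# a camera translated 3 units down -Z in GL convention...
view_ofx = np.eye(4)
view_ofx[2, 3] = -3.0
# ...lands at +3 on the flipped axis after conversion
view_vvvv = ofx_to_vvvv(view_ofx)
print(view_vvvv[2, 3])   # 3.0
```

Depending on whether the matrix is row- or column-major, and whether points or the camera are being converted, the flip may instead need to be applied on the other side (or both, `FLIP_Z @ M @ FLIP_Z`), which would also explain the observed (1, 1, 1) → (1, -1, -1) position.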

Now there is still one open question: how can I lay the Kinect’s texture over the projector’s view? That texture needs to be placed on a quad, but where in space is that quad?

Can you record what’s happening with some screen capture, like OBS Studio?

I’ll try to grab my Kinect and projector, so we can speed that up.

Anyway, what I’m guessing is that you need to project that stuff onto a heightfield from the Kinect’s point of view; then it’s going to be projected properly.

@antokhio I’ll try to record soon, but basically all I need is to reproject the Kinect’s world texture back.

Hello drehwurm,

If you want to reproject the Kinect world texture so as to project on top of anything it sees, I would do it this way:

  1. Convert the world texture to a grid mesh, or use a pointcloud instead.
  2. Use a projector with a long lens ratio.
  3. Put some boxes in the corners, the center, or other spots that give you a good reference of the area you need to use; you can take all those boxes away after calibration.
  4. Use these to calibrate and match your mesh to the physical projection, so you get the right camera view/projection transforms.
  5. Make a mask texture of the desired part of your world texture.
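The steps above boil down to projective texturing: there is no fixed quad — every world-texture pixel is already a 3D point, and the projector’s view-projection matrix tells you which projector pixel it lands on. A CPU sketch under assumed conventions (in a patch this would normally live in a shader, and all names here are illustrative):

```python
import numpy as np

def reproject_mask(world_tex, mask, view_proj, out_res=(64, 64)):
    """Splat a Kinect-space b/w mask into the projector's image.

    world_tex: (H, W, 3) XYZ per pixel; mask: (H, W) of 0/1;
    view_proj: 4x4 projector view-projection (column-vector convention).
    Returns an out_res projector image, white where masked geometry lands.
    """
    h, w = out_res
    out = np.zeros((h, w), dtype=np.float32)
    pts = world_tex[mask > 0].reshape(-1, 3)     # masked pixels as 3D points
    if pts.size == 0:
        return out
    homo = np.c_[pts, np.ones(len(pts))] @ view_proj.T   # to clip space
    ndc = homo[:, :3] / homo[:, 3:4]                     # perspective divide
    inside = np.all(np.abs(ndc[:, :2]) <= 1, axis=1) & (homo[:, 3] > 0)
    u = ((ndc[inside, 0] * 0.5 + 0.5) * (w - 1)).astype(int)
    v = ((0.5 - ndc[inside, 1] * 0.5) * (h - 1)).astype(int)  # flip Y for rows
    out[v, u] = 1.0
    return out

# smoke test with a trivial "projector": identity view, w = z perspective
proj = np.array([[1., 0., 0., 0.],
                 [0., 1., 0., 0.],
                 [0., 0., 1., 0.],
                 [0., 0., 1., 0.]])
tex = np.array([[[0.0, 0.0, 2.0]]])        # a single point 2 m ahead
img = reproject_mask(tex, np.ones((1, 1)), proj)
print(img.sum())   # 1.0 -> the point lands on exactly one projector pixel
```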

Hope it helps ;D

This topic was automatically closed 365 days after the last reply. New replies are no longer allowed.