Projecting onto shape-morphing objects

Hi community,

I am wondering how I could achieve such a thing: I’ve got a black stage with a bunch of performers moving in white stretch fabrics. Sometimes two or three actors will be sewn into one big sheet of that fabric, and as they move, the shape of that object will be constantly changing.

I’d love to project onto that. How could that be done? Using IR cameras?

Is there any technology out there that helps me detect the shape of those objects and create a 3D model in real time?

as always, kinect might be your best bet…

@tonfilm is there any example of how to get an abstract mesh there?

haven’t worked with kinect for a while, but i am sure someone has done it before…

Well, Kinect would only work in a low-movement scenario; its latency is pretty dramatic for real-time projection. I would recommend a high-speed IR camera plus VL.OpenCV, and building some kind of distortion/displacement map instead of doing proper 3D mesh generation.

Basically what you need is a contour node; then you have to do post-processing for each contour. Also expect artifacts…

This type of stuff is a bit heavy for a beginner, since if you want to assign different effects to different performers you will also have to deal with IDs and intersections… so you probably have to start somewhere… I would recommend getting in touch with someone from the vvvv community nearby…
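On the ID problem: once you have one centroid per contour per frame, a greedy nearest-neighbour match against the previous frame is often enough for slow-moving performers. A pure-Python sketch of that idea (the function name and the 50-pixel threshold are my own example values, not from any vvvv pack):

```python
import math

def assign_ids(prev, curr, next_id, max_dist=50.0):
    """Greedy nearest-centroid matching between frames.

    prev: dict id -> (x, y) centroids from the last frame
    curr: list of (x, y) centroids detected in this frame
    Returns (dict id -> (x, y) for this frame, updated next_id).
    """
    assigned = {}
    free = dict(prev)  # previous centroids not yet claimed
    for c in curr:
        # find the closest unclaimed centroid from the previous frame
        best_id, best_d = None, max_dist
        for pid, p in free.items():
            d = math.dist(p, c)
            if d < best_d:
                best_id, best_d = pid, d
        if best_id is not None:
            assigned[best_id] = c
            del free[best_id]
        else:
            # no match within max_dist -> treat it as a new performer
            assigned[next_id] = c
            next_id += 1
    return assigned, next_id
```

IDs left over in `free` belong to performers that disappeared (or merged into another contour); how you handle those merges is the genuinely tricky part mentioned above.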

I guess you can start by selecting a camera. I think you will need IR emitters, and posting pictures of your performance would help…

Thanks @antokhio. I will give it a try. I have some old Imaging Source camera lying around somewhere, and will post a patch for that.

The problem I can imagine is with the different light sources. The beamer of course doesn’t emit IR light, but the stage lights do.

So if I could afford to do the stage lighting completely with LEDs, how is the IR lighting usually done? I mean, I would have to take care not to hit anything but my actors, right?

Which IR lights are used for that anyway? I’ve got about a 10x10 m area to work with.

I think maybe @karistouf can help you with that bit…

Hi drehwurm, yes, your task is a bit tricky and advanced.

With Kinect it is possible, but yes, as antokhio said, it is very slow for that.

With IR it is also tricky, but if your actors remain at the same z, you could calibrate camera and projector and maybe, with Voronoi/Delaunay, try to get the polygon using some OpenCV contour routine… but it’s also hard to get accurate.

For IR camera tracking I used IDS Ethernet or USB 3 cameras; with fibre both work well. Or a cheap kit.

The projector is one thing that can create a lot of latency if it’s not a good one; in this quite old testing video the main latency came from the projector. I tried Kinect, IR, the Rulr tool from elliotwoods, markers, AR patterns, and a few more tricks and techniques to see which was the best solution for those sorts of scenarios. The conclusion: each specific thing required a specific trick, and it’s harder than one would think. ;D

About IR light for big-space installations: I used a combination of old theatre lamps with pure-colour RGB filters. Those lamps emit a lot of infrared and are great for big spaces, but the filters burn quickly, so you’ll have to change them every night, or go for a pro option, or reinforce those lamps.

This light combination was used to light up this big theatre, then track and mask those big moving panels in XYZ.

hope it helps. cheers.

Ok, pretty advanced solutions there ;) Maybe you could start without building the geometry of your performers, and just use their IR silhouettes as B/W masks for effects.
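The B/W-mask idea is just background subtraction against a shot of the empty stage. A pure-Python sketch of the per-pixel logic (in practice this would run in a shader or VL.OpenCV; the threshold is an arbitrary example value):

```python
def silhouette_mask(frame, background, threshold=30):
    """Per-pixel background subtraction on 8-bit grayscale frames
    (given as nested lists). Pixels that differ from the empty-stage
    reference by more than `threshold` become white (255), the rest black (0).
    """
    return [
        [255 if abs(p - b) > threshold else 0
         for p, b in zip(frow, brow)]
        for frow, brow in zip(frame, background)
    ]

# tiny 3x2 example: a bright performer in front of a dark background
background = [[10, 10, 10],
              [10, 10, 10]]
frame      = [[10, 200, 10],
              [10, 220, 12]]
mask = silhouette_mask(frame, background)
# mask -> [[0, 255, 0], [0, 255, 0]]
```

The resulting mask can be fed straight into effects as a texture; no 3D reconstruction needed.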

For cheap prototyping I can recommend an old PS3 cam with the infrared filter removed + a cheap IR flooder + a filter for the cam that cuts everything below the IR wavelength. ~50 bucks.

But if you go into production, you will have to spend some money on good cameras, filters and video capture cards.

New 3D sensors like the Intel RealSense feel lower-latency to me and offer higher resolution than the Kinect2. Worth a try. Also very easy to interface via VL now.

@schlonzo hehe very true :D.

Yes, if he goes for silhouettes, that is a nice solution; then keep the actors close to the back, or at the same z, and calibrate with homography/bezier…

@drehwurm For instant fun and initial test results, I was thinking you could start from here using the Kinect v2 player.

KinectPlayer.v4p (15.8 KB)

Wow, thank you guys for the nice feedback. I’ll have a look into your suggestions. Thank you for the patch @colorsound.

As for the IR solution: I had a look at some lights, as filtering down stage lights might get a little hot. Do you have any experience with this kind of light? Are they suitable?

Haven’t found time yet to investigate which way to go. I definitely need more than real human silhouettes; it’s humans in fabric that form pretty abstract shapes. But they don’t move fast, so latency might not be the main problem.

If I went with the Kinect2, I couldn’t track the whole 10x10 m, could I? Anyway… I would need to get the point cloud then, and filter it somehow by depth to crop out the abstract actor-object, right?

Hi drehwurm,

I think you’ll be disappointed by that IR lamp. I have not actually used this specific model, but I have tried many similar ones, and they will most likely create a bright circle as seen by your camera, or not be bright enough if you put them far away. But you could try it out in case I’m wrong.

Yes, the Kinect option probably seems good; using a Kinect v2 is less hacky than the IR lamp option, but you will most probably not get the 10 metres of width. I heard people were seeing depth from 9 metres or so; in my tests, 3-5 metres is the best range, then you’re either too close or you get a lot of noise farther out. There are other depth cameras like the ZED Stereo Camera | Stereolabs which claim to see up to 20 metres away.

You could try 2 Kinects and maybe join the point clouds.

I’ll pass you a Kinect DepthFOV_Calculator_expr_Inverse patch so you can calculate the image seen at a specific distance, in case you go that route.

Good luck. DepthFOV_Calculator_expr_Inverse.v4p (7.1 KB)
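For the coverage question, the math behind a depth-FOV calculator like the attached patch is simple trigonometry: the visible width at distance d is 2·d·tan(FOV/2). A sketch, assuming the commonly quoted Kinect v2 depth FOV of roughly 70.6° x 60° (an assumption; check your own device’s spec sheet):

```python
import math

def coverage_at_distance(distance_m, h_fov_deg=70.6, v_fov_deg=60.0):
    """Width and height (in metres) of the area a depth camera sees
    at a given distance, derived from its horizontal/vertical FOV."""
    w = 2.0 * distance_m * math.tan(math.radians(h_fov_deg) / 2.0)
    h = 2.0 * distance_m * math.tan(math.radians(v_fov_deg) / 2.0)
    return w, h

w, h = coverage_at_distance(4.5)  # middle of the usable 3-5 m range
# -> roughly 6.4 m wide x 5.2 m high
```

With those FOV values you would need to stand back around 7 m to see a 10 m width, which is beyond the range where the depth data stays clean; that matches the two-Kinects suggestion above.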


If silhouettes will do, then put your camera as close to the projector lens as you can and try a homography into alignment; it works best if you can long-throw the projector, as the angles converge. You can background-subtract, and then you have a silhouette of everything not in the background. Stage lights with red + blue filters give you the most IR in the easiest way, but watch out for filter burn: use the high-temp ones. I’ve done this method and it worked pretty well.
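The camera-to-projector homography mentioned here boils down to solving a 3x3 matrix from four point correspondences, e.g. the camera-image corners of the projection surface mapped to projector pixels. A pure-Python sketch of that solve (in a real patch you would use OpenCV’s findHomography or vvvv’s homography nodes; all names here are my own):

```python
def solve_homography(src, dst):
    """Solve the 3x3 homography (h22 fixed to 1) that maps four
    source points exactly onto four destination points."""
    # Each correspondence (x, y) -> (u, v) gives two linear equations
    # in the eight unknowns h00..h21.
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    n = 8
    # Gaussian elimination with partial pivoting on the 8x8 system
    M = [row + [rhs] for row, rhs in zip(A, b)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    h = [0.0] * n
    for r in range(n - 1, -1, -1):
        h[r] = (M[r][n] - sum(M[r][c] * h[c] for c in range(r + 1, n))) / M[r][r]
    return [h[0:3], h[3:6], h[6:8] + [1.0]]

def warp(H, x, y):
    """Apply the homography to one point (perspective divide included)."""
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)
```

For example, mapping the corners of a 640x480 camera frame onto an arbitrary projector quad gives a matrix with which `warp` lands each corner exactly on its target; the same matrix then warps the whole silhouette mask into projector space.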

Any feedback on using this product compared to traditional stage lights?

Hi @circuitb, I think stage lights are still a great way to light big scenarios; the light is also more even, diffuse and controllable than with general IR lamps, but the filters burn. Maybe modding the lamps with fans or some other technique could slow the burning of the filters.

The Raytec are quite nice in general; they diffuse light better than other IR lamps and are more powerful. There are some models with a power-level potentiometer too.

For big spaces, though, stage lights seem to be the best and cheapest option, unless you work by area, as in a boygrouping environment, where you have 4-6 metres per camera per lamp.

Then something like the Raytec seems the better solution.

I also have to say that I have not tested all Raytec models. I have seen some videos where they light big parking areas, so I presume it should be possible with them too.

That’s my humble opinion. ;D

Thx @colorsound for your feedback.
For an outdoor long-term installation the LED solution is the only way!
I found this crazy stuff:
36,000 lumens, but for $4000…
I will investigate the DIY way, since it’s just 200 x 3W IR 850 nm LEDs.
For the IR-emitter hotspot issue, I guess the easiest way is to use many emitters to cover the whole surface, with some post-FX shader to lower the brightness on the hotspots.
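That hotspot post-FX is usually a flat-field correction: divide the live frame by a reference shot of the empty stage under the same IR lighting, so hotspots are scaled down and dim corners scaled up. A pure-Python sketch of the per-pixel logic (in vvvv this would be a one-line pixel shader; the function name and target value are my own):

```python
def flatfield(frame, reference, target=128.0):
    """Normalise an 8-bit grayscale frame (nested lists) against a
    reference shot of the empty, IR-lit stage. Each pixel is scaled so
    that a reference pixel maps to `target`; output is clamped to 255."""
    return [
        [min(255, int(p * target / max(r, 1)))  # max() guards against /0
         for p, r in zip(frow, rrow)]
        for frow, rrow in zip(frame, reference)
    ]

reference = [[255, 128, 64]]   # hotspot on the left, dim on the right
frame     = [[255, 128, 64]]   # an evenly reflective subject reads like this
corrected = flatfield(frame, reference)
# corrected -> [[128, 128, 128]]: the hotspot gradient is gone
```

After this step a single global threshold works across the whole stage, even with several overlapping emitters.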

Alright… I got the Kinect2 up and running, and I recorded some nice weird dance of myself inside a blanket. Playback works like a charm. So in the next step I guess I should take the depth texture to separate myself from the rest, but how is that done? I mean, what’s inside that texture and how do I manipulate it?

Inside the depth texture there is the distance from the Kinect camera to each point in space.
You should try the Kinect nodes from the particles pack; they’re pretty awesome, and they also give you a preview of the 3D space and the Kinect, so you can start playing around with that.

The world texture contains the XYZW coordinates of the points in 3D space and stores the values in the RGBA channels. That’s the colours you see. You can manipulate the data either by editing the pixels (please don’t ;) or by writing a shader - but the easiest way to cut the background would be recording in front of black molton.

edit: I believe there is some kind of world filter in the particles pack which is able to clip the background.
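Such a world-filter / shader clip is essentially a per-pixel z test on the world texture. A pure-Python sketch of what that shader would do (function name and the z thresholds are made-up example values):

```python
def clip_background(points, z_near=0.5, z_far=4.5):
    """Shader-style depth clip on world-texture samples.
    points: list of (x, y, z, w) values as stored in the RGBA channels.
    Anything outside the z range gets w set to 0 so later stages
    (e.g. a particle emitter) can discard it."""
    return [
        (x, y, z, w if z_near <= z <= z_far else 0.0)
        for (x, y, z, w) in points
    ]

pts = [(0.1, 0.2, 1.5, 1.0),   # performer, inside the range -> kept
       (0.0, 0.0, 6.0, 1.0)]   # back wall, too far -> clipped
out = clip_background(pts)
# out -> [(0.1, 0.2, 1.5, 1.0), (0.0, 0.0, 6.0, 0.0)]
```

Since the performers stay in fabric near a known depth band, a simple near/far pair like this should already isolate the abstract actor-object from the stage.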