I would suggest compositing the animation onto the face after shooting rather than doing live face projection. Face projection at this technical level is very hard and costly to pull off; I remember reading about the process behind the reference and it was really not easy.
Hello StiX,
thank you for the reply!
I’ll wait with a decision till next week. Maybe I get more opinions.
But anyway:
Do you have a recommendation for somebody who can do the composite animation afterwards?
And do you maybe have a reference where this was done?
Would be great.
Greetings
I think StiX is pretty much on point. It’s impossible to achieve these results with normal off-the-shelf hardware. Here are two links to a commercialized version of the projector that they (most likely) used:
You can try to get a quote from them but I really doubt it will fit your budget.
From the description of another video from the same studio (emphasis mine):
The face mapping system made it possible to follow intense performances, which was impossible until now, thanks to the use of the state of the art 1000 fps projector DynaFlash1 and ultra high speed sensing. The initial dilemma of speeding up the tracking to the detriment of performance latitude was resolved by the WOW team, Professor Watanabe, and Tomoaki Teshima (EXVISION), who trimmed several milliseconds during a trial and error period that lasted approximately three months, enabling the completion of this system2. The projected image looks like it is integrated into part of the skin, and the expressions on a subject’s face, when it is distorted or transformed, are exponentially enhanced.
*1: Jointly developed by the Ishikawa Watanabe Laboratory at the University of Tokyo and Tokyo Electron Device, and commercialized by Tokyo Electron Device.
*2: Dynamic projection mapping technology, developed by the Ishikawa Watanabe Laboratory at the University of Tokyo, was used for hand tracking and projection mapping. Face mapping technology, developed by WOW Inc., was used for facial tracking.
Looking at the video you posted once more, the dancer/model/actress doesn’t seem to move at all, only the camera does.
(Also, the projection is colored, and AFAIU the DynaFlash projector is only capable of b&w.)
If this is also the case in your video, it makes things quite a bit easier, since lag won't matter that much. Still, doing it in post will be more efficient, I guess.
‘Are we talking about live texture map on top of the facetrack mocap animation?’
Sounds right, but I guess there are different ways to achieve something similar, and I am open to ideas. Aside from that, I am aware of the limitations due to the very expensive hardware as well as the time available.
(3 weeks from now)
I’ve done something like this with the face HD tracker node and a Kinect, slightly laggy but it works OK, if anyone in DE wants to give it a shot. It was just a test for a potential theatre piece, but the mesh stuck pretty well to the face and distorted with the mouth.
@catweasel
Hi, I’m a student from Germany, and we are currently trying to build something similar with vvvv gamma and a Kinect2, but we are relatively new to this and have pretty much no idea where to start or what to do.
Would you mind sharing your patch with us, or telling us how to use the data the Kinect / the FaceHD node spits out?
Hi Scroll, I used vvvv beta to do this; it comes with a Kinect mesh for the face, which means you can use it directly. You need to calibrate your Kinect to the projector, and there is a help patch for that (it might be in one of the particle libraries). In gamma, I’m afraid it will take a bit more work to gather all the parts. I’ll see if I can find my face tracking patch for you to have a look at, though.
Ok, I’ve found my patch. I was actually using homography to line up the mesh and the face; I’ll have to go through it and see what I can include. Also, it seems I wasn’t using FaceHD, which gives you facial expressions, but a straight mesh. So it would probably be worth starting from scratch, using FaceHD and trying to line it up with homography (4-point corner warping). As long as your subject is fairly static, that should be fine: you effectively calibrate to a plane.
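For anyone curious what the 4-point corner warp does under the hood: a homography is a 3x3 matrix estimated from four point correspondences (e.g. four tracked face/screen corners mapped to four projector corners). This is just an illustrative numpy sketch of the standard direct linear transform, not the actual vvvv node implementation:

```python
import numpy as np

def homography_from_points(src, dst):
    """Estimate the 3x3 homography H mapping the 4 src points to the
    4 dst points via the direct linear transform (DLT)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes two rows to the linear system A h = 0.
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    A = np.asarray(A, dtype=float)
    # h is the null space of A: the right singular vector with the
    # smallest singular value.
    _, _, vt = np.linalg.svd(A)
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]  # normalize so H[2,2] == 1

def apply_homography(H, pt):
    """Map a single 2D point through H (with homogeneous divide)."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return (x / w, y / w)

# Example: calibrate a unit square to a shifted projector quad,
# then warp an interior point.
src = [(0, 0), (1, 0), (1, 1), (0, 1)]
dst = [(2, 1), (3, 1), (3, 2), (2, 2)]
H = homography_from_points(src, dst)
print(apply_homography(H, (0.5, 0.5)))
```

This is why it only works for a fairly static subject: the homography assumes everything lives on one plane, so once the face moves out of that calibrated plane the mapping drifts.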