Real-Time Face Tracking / Projection / Animation

Hello Everybody :)

I am a filmmaker from Germany.
I am looking for somebody who can do real-time face tracking / projection.
It will be for a music video.

Here is a reference:

The job includes planning + organizing software/hardware + being on set to operate the projection

++ producing the content / animation (this could also be a separate job and would happen in close cooperation with me)

The video will be shot in Germany.
We will be able to pay travel costs within Europe.

Preproduction time is approximately 4 weeks from now.

Preproduction can happen remotely.

Feel free to contact me for more information:
fabianschuhsohle@gmail.com
Tel. +49 163 1872550

Greetings and have a nice weekend :)

I would suggest compositing the animation onto the face after shooting rather than projecting it live. Face projection at this technical level is very hard and costly to do; I remember reading about the process behind the reference, and it was really not easy.


Hello StiX,
thank you for the reply!
I’ll hold off on a decision until next week. Maybe I’ll get more opinions.

But anyway:
Do you have a recommendation for somebody who can do the composite animation afterwards?
And do you maybe have a reference where this was done?
That would be great.
Greetings

Does vvvv have to be the tool used on this project? Are we talking about a live texture map on top of the face-tracking mocap animation?

I think StiX is pretty much on point. It’s impossible to achieve these results with normal off-the-shelf hardware. Here are two links to a commercialized version of the projector that they (most likely) used:

You can try to get a quote from them but I really doubt it will fit your budget.

From the description of another video from the same studio (emphasis mine):

The face mapping system made it possible to follow intense performances, which was impossible until now, thanks to the use of the state-of-the-art 1000 fps projector DynaFlash (*1) and ultra-high-speed sensing. The initial dilemma of speeding up the tracking to the detriment of performance latitude was resolved by the WOW team, Professor Watanabe, and Tomoaki Teshima (EXVISION), who trimmed several milliseconds during a trial-and-error period that lasted approximately three months, enabling the completion of this system (*2). The projected image looks like it is integrated into part of the skin, and the expressions on a subject’s face, when it is distorted or transformed, are exponentially enhanced.

*1: Jointly developed by the Ishikawa Watanabe Laboratory at the University of Tokyo and Tokyo Electron Device, and commercialized by Tokyo Electron Device.
*2: Dynamic projection mapping technology, developed by the Ishikawa Watanabe Laboratory at the University of Tokyo, was used for hand tracking and projection mapping. Face mapping technology, developed by WOW Inc., was used for facial tracking.

[Source]


Looking at the video you posted once more, the dancer/model/actress doesn’t seem to move at all, only the camera does.
(Also, the projection is colored, and AFAIU the DynaFlash projector is only capable of b&w.)

If this will also be the case in your video, it makes things quite a bit easier, since lag won’t matter that much. Still, I guess doing it in post will be more efficient.
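To put rough numbers on that (my own back-of-envelope, not from the article): the drift of the projection is roughly head speed × end-to-end latency. An off-the-shelf chain of camera, tracking software, and a 60 Hz projector easily accumulates around 100 ms of latency, so a head moving at a modest 0.3 m/s would trail by about 30 mm, clearly visible on a face. The DynaFlash pipeline quoted above is reported to run at 1000 fps with only a few milliseconds of total latency, which keeps the drift below a millimetre; with a static subject and a moving camera, the drift is zero either way.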

@Moykul vvvv is not essential.

‘Are we talking about a live texture map on top of the face-tracking mocap animation?’
Sounds right, but I guess there are different ways to achieve something similar, and I am open to ideas. Aside from that, I am aware of the limitations due to the very expensive hardware as well as the time given
(3 weeks from now).

@bjoern Thank you very much for your answer!

  • Black and white would be ok.
    The actor actually should move (he is singing…), but if that’s too hard to achieve we’ll have to adapt.

I think getting the hardware is the biggest problem. Is there something similar to the DynaFlash that is available in Europe?

Any particular reason you want to do it in “realtime” and not in post?

I guess the lighting of the projection, plus I will shoot in different formats: VHS + 16mm.
I’d rather have a not-so-perfect projection than weeks of post.

Just remembered this project by Moment Factory.

They were using a system by Panasonic. It’s not as fast as DynaFlash but maybe still fast enough:

Contact info for Panasonic in Germany/Austria:

I’ve done something like this with the HD face tracker node and a Kinect; it’s slightly laggy but works ok, if anyone in DE wants to give it a shot. It was just a test for a potential theatre piece, but the mesh stuck pretty well to the face and distorted with the mouth.
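For anyone who wants to prototype the same idea outside vvvv, here is a minimal Python/OpenCV sketch of the core step: pinning a texture to four tracked face points with a homography. It assumes some tracker (Kinect HD face, or any landmark detector) already delivers those points each frame; `get_face_points`, the file name, and the choice of points are placeholders, not a real API.

```python
# Minimal sketch: pin a texture to four tracked face points via homography.
# Assumes a tracker (Kinect HD face, or any landmark detector) delivers the
# points each frame; get_face_points and the file name are placeholders.
import cv2
import numpy as np

texture = cv2.imread("face_texture.png")      # content to stick on the face
th, tw = texture.shape[:2]

# Corners of the texture; they get mapped onto the four tracked points.
src_pts = np.float32([[0, 0], [tw, 0], [tw, th], [0, th]])

def overlay(frame, face_pts):
    """face_pts: 4x2 float32 array, e.g. temples and jaw corners."""
    H = cv2.getPerspectiveTransform(src_pts, face_pts)
    size = (frame.shape[1], frame.shape[0])   # (width, height)
    warped = cv2.warpPerspective(texture, H, size)
    mask = cv2.warpPerspective(np.full((th, tw), 255, np.uint8), H, size)
    frame[mask > 0] = warped[mask > 0]        # composite over the camera image
    return frame

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    face_pts = get_face_points(frame)  # hypothetical: plug your tracker in here
    if face_pts is not None:
        frame = overlay(frame, face_pts)
    cv2.imshow("preview", frame)
    if cv2.waitKey(1) == 27:           # Esc quits
        break
cap.release()
```

Four correspondences are all `cv2.getPerspectiveTransform` needs, so four stable landmarks are enough to keep a flat texture stuck to the face.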


@catweasel
Hi, I’m a student from Germany and we are currently trying to build something similar with vvvv gamma and a Kinect 2, but we are relatively new to this and have pretty much no idea where to start / what to do.
Would you mind sharing your patch with us or telling us how to use the data the Kinect / the FaceHD node spits out?

Hi Scroll, I used vvvv beta to do this. It comes with a Kinect mesh for the face, which means you can use it directly; you need to calibrate your Kinect to the projector, and there is a help patch for that (it might be in one of the particle libraries). In gamma, I’m afraid it will take a bit more work to gather all the parts. I’ll see if I can find my face tracking patch for you to have a look at, though.

Ok, I’ve found my patch. I was actually using homography to line up the mesh and the face; I’ll have to go through it and see what I can include. Also, it seems I wasn’t using FaceHD, which gives you facial expressions, but a straight mesh. So it would probably be worth starting from scratch, using FaceHD, and trying to line up with homography (4-point corner warping). As long as your subject is fairly static that should be fine; you effectively calibrate to a plane.
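If plain code helps to see what that amounts to, here is a minimal Python/OpenCV sketch of such a 4-point corner warp (not catweasel’s patch; the file name, projector resolution, and corner coordinates are placeholder assumptions). You map the corners of your content onto four points in projector space, and everything on the calibrated plane lines up:

```python
# 4-point corner warp ("calibrate to a plane") as a standalone sketch.
import cv2
import numpy as np

PROJ_W, PROJ_H = 1920, 1080                  # projector resolution (assumption)

content = cv2.imread("animation_frame.png")  # placeholder content
h, w = content.shape[:2]

# Content corners, and where they should land in projector pixels.
src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
# Placeholder targets; in practice you nudge these while watching the
# projection until the quad sits on the subject's face.
dst = np.float32([[700, 300], [1150, 320], [1120, 760], [720, 740]])

H = cv2.getPerspectiveTransform(src, dst)
out = cv2.warpPerspective(content, H, (PROJ_W, PROJ_H))

# Show fullscreen on the projector output.
cv2.namedWindow("projector", cv2.WINDOW_NORMAL)
cv2.setWindowProperty("projector", cv2.WND_PROP_FULLSCREEN, cv2.WINDOW_FULLSCREEN)
cv2.imshow("projector", out)
cv2.waitKey(0)
```

Since a single homography only maps plane to plane, this holds as long as the face stays close to the calibrated plane, which is exactly the “fairly static subject” caveat above.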


Thank you very much catweasel, we have a somewhat working patch for now and we’ll see how far we can get with your suggestions.
