Greetings all… I have a question about the X,Y spread that the FaceTracker Vertices pin outputs.
I see it is outputting a spread of 198 values. I assume these are the XYZ values for the individual “dots” that get overlaid on the video texture as the tracked positions of my face. I have an advanced IOBox (3 columns by 66 rows) open, but it’s hard to make sense of all the values; it’s tough to pick out just my mouth motion, for instance.
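A quick sanity check on those numbers, as a Python sketch. The layout (one interleaved x, y, z triple per point, hinted at by the 3-column IOBox) is an assumption, not confirmed here:

```python
# Hypothetical layout check: 66 tracked points * 3 coordinates = 198 slices.
# Assuming the coordinates are interleaved per point (x, y, z, x, y, z, ...),
# the flat slice index of a coordinate would be point * 3 + axis.

POINTS = 66
COORDS = 3  # x, y, z

def slice_index(point, axis):
    """axis: 0 = x, 1 = y, 2 = z (layout assumed, not confirmed)."""
    return point * COORDS + axis

total = POINTS * COORDS  # 198, matching the spread size in the question
```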
So… How do I know which slices go with which points? Does anyone have a list?
I want to trigger MIDI with different facial movements… In the past, I had Kyle McDonald’s FaceOSC running on a Mac, feeding vvvv via OSC, and it was pretty easy to make it work. There were far fewer values coming across in OSC, if I recall. I would like the whole installation running off of one computer and to skip the OSC communication. Thanks.
the FaceTracker from alpha 28 works like FaceOSC.
you’ll find files called mesh.idx and mesh.tcx, which are the indices and vertices respectively, in the order you get them from FaceTracker. So you could visualize and number those to find out which vertices/slices are useful for you.
Thanks guys… I came up with a different tactic to figure out which point is associated with which part of the face…
I am using GetSlice/SetSlice and a + (Value) to add a value to individual slices of the big 198-value spread. This way I can visually “scrub” the translation of each point in the renderer with the mouse. It’s much easier to locate things like the right eyebrow and lower lip, etc. now.
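For anyone reading along, the GetSlice → + (Value) → SetSlice trick can be sketched in plain Python (the names here are illustrative, not vvvv nodes):

```python
# Mimic the scrubbing patch: copy the spread, add an offset to one slice,
# and then watch which tracked point moves in the renderer.

def scrub_slice(spread, index, offset):
    """Return a copy of `spread` with `offset` added to the slice at `index`."""
    out = list(spread)
    out[index] += offset
    return out

# Dummy 198-value spread (66 points * 3 coordinates), all zeros:
spread = [0.0] * 198

# Nudging one slice leaves every other value untouched:
moved = scrub_slice(spread, 72, 0.5)
```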
I must say, every new project in vvvv is another learning experience, and each time it is fun to engineer.
Thanks for the help
@joreg, thanks for mentioning the file locations. I opened them up; it seemed like it would be tough to visualize, which led me to the idea above. Thanks again.
For instance, this method made it really easy to find out that:
slice 72 is the middle of right eyebrow
slice 129 is top right eye
slice 171 is bottom center point of lower lip
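Once the useful indices are known, picking them out of the flat spread is trivial. A small Python sketch using the three slices listed above (the feature names are mine, and the flat layout is assumed):

```python
# Slice numbers found by scrubbing, as reported in this thread.
FEATURES = {
    "right_eyebrow_mid": 72,
    "right_eye_top": 129,
    "lower_lip_bottom_center": 171,
}

def pick_features(spread):
    """Return just the named coordinates from the 198-value spread."""
    return {name: spread[i] for name, i in FEATURES.items()}

# Dummy spread where each value equals its own slice index:
sample = pick_features(list(range(198)))
```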
Here is a fuller picture of how I’m using a slider IOBox to translate the points individually, to find which slices correlate with which tracked points.
The little bit with the translation slider is at the bottom right. I hope it helps.
A BinSize of 3 would join the XYZ coordinates per point.
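Conceptually, BinSize = 3 turns the flat spread into one bin per tracked point. A minimal Python analogy (not vvvv itself):

```python
# Group a flat spread into (x, y, z) tuples per point, the way BinSize = 3
# would bin the slices in vvvv: 198 values -> 66 bins of 3.

def group_xyz(spread, bin_size=3):
    assert len(spread) % bin_size == 0
    return [tuple(spread[i:i + bin_size])
            for i in range(0, len(spread), bin_size)]

points = group_xyz([0.0] * 198)  # 66 tuples of (x, y, z)
```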
Is there a way to lower the number of tracked points, for instance if you want to track only the eyes?
thanks alpa for sharing your patch!
ivandenko: not sure if the tracker can handle this, but you could try to understand the format of the face2.tracker file that the node references and modify it (leaving only the eyes in). Then see if the tracker still copes with it.
alternatively you can just leave that as is and only getslice the relevant coordinates out of the resulting spread.
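Tying that back to the MIDI goal from the start of the thread: GetSlice just the coordinate you care about and threshold it. A hedged Python sketch (the slice number comes from this thread; the threshold value is invented for illustration and would need tuning against live tracker output):

```python
# Watch one coordinate of the flat spread and report when it crosses a
# threshold, which is the moment a MIDI note could be fired.

def mouth_open(spread, slice_index=171, threshold=0.1):
    """True when the watched lower-lip coordinate exceeds the threshold."""
    return spread[slice_index] > threshold

closed = [0.0] * 198   # dummy neutral face
opened = list(closed)
opened[171] = 0.5      # simulate the lower lip moving
```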
“alternatively you can just leave that as is and only getslice the relevant coordinates out of the resulting spread.”
That is the way I am doing it.