Interesting project. Using the Kinect to detect hand positions is going to be complicated (for me at least). The Kinect works best at detecting body gestures; Near Mode lets the body come closer to the sensor, but it does not enable tracking of individual fingers.
A possible solution would be to investigate Detect Object, which can be trained to recognise hands in certain positions.
Other hardware options soon to be released include the Leap Motion (designed for hand tracking; reviews are not good at the moment, let’s see what happens on release) and Intel’s Perceptual Computing SDK.
Obvious Engine looks really cool. I’ve never tested it though, so it might be worth contacting the company behind it:
Also there’s the $P Point Cloud Recognizer, which ethermammoth ported to vvvv: p-point-cloud-recognizer. You could try routing all detected hand points (fingertips, knuckles, whatever) through it for detection, or maybe even detect the hand and route its contour.
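To give an idea of what the $P approach does with those points, here is a minimal Python sketch of the core step: normalise two point clouds, then score them with a greedy nearest-neighbour matching. All names are mine, it assumes both clouds have the same number of points, and the real $P recognizer additionally resamples each cloud to a fixed point count first.

```python
import math

def _scale(points):
    # Scale the cloud so its larger dimension becomes 1 (size-invariant).
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    s = max(max(xs) - min(xs), max(ys) - min(ys)) or 1.0
    return [(x / s, y / s) for x, y in points]

def _centre(points):
    # Translate the cloud so its centroid sits at the origin.
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    return [(x - cx, y - cy) for x, y in points]

def _cloud_distance(a, b, start):
    # Greedily match each point of a (starting at index `start`) to the
    # nearest still-unmatched point of b; earlier matches weigh more.
    matched = [False] * len(b)
    total, n = 0.0, len(a)
    for i in range(n):
        p = a[(start + i) % n]
        best, best_j = float("inf"), -1
        for j, q in enumerate(b):
            if not matched[j]:
                d = math.dist(p, q)
                if d < best:
                    best, best_j = d, j
        matched[best_j] = True
        total += (1 - i / n) * best
    return total

def greedy_cloud_match(a, b):
    # Normalise both clouds, then take the best score over a few
    # starting indices and both matching directions.
    a = _centre(_scale(a))
    b = _centre(_scale(b))
    step = max(1, int(len(a) ** 0.5))
    return min(min(_cloud_distance(a, b, s), _cloud_distance(b, a, s))
               for s in range(0, len(a), step))

def classify(points, templates):
    # templates: dict mapping a label to a template point cloud.
    return min(templates, key=lambda k: greedy_cloud_match(points, templates[k]))
```

So each detected hand pose becomes a point cloud, and `classify` returns whichever stored template cloud it most resembles.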
The Leap Motion is probably no use for you, as it works almost exclusively for detecting downward-facing gestures (because the camera is below your hands, not in front of them).
Pablo, hand tracking and finger tracking are very different things.
The demo you sent nicely shows how many fingers are being held up, but that really is its limit. If the person demoing the software instead pointed their hand directly at the Kinect and extended one or two fingers towards it, the result would still be zero.
Can you post some example photos of hands in the positions you would like to detect?
Here is an older demo of 3D tracking of the complete hand shape and its articulation:
Yes, there are many hand-recognition developments out there using the Kinect, but what you need to implement next is a system whose algorithm translates the different recognised shapes into letters, words and language. Of course you could build on already existing gesture and hand-pose recognition, for example:
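That translation layer can start very simply. Here is a minimal Python sketch, assuming some recogniser already gives you one pose label per frame; the class name, the mapping and the hold-to-commit rule (a pose must be held for a few consecutive frames before it counts as a letter) are all my own assumptions, not part of any existing toolkit.

```python
class PoseToText:
    """Turn a stream of per-frame pose labels into text.

    A letter is committed only after the same pose has been seen for
    `hold_frames` consecutive frames, which filters out single-frame
    recognition noise.
    """

    def __init__(self, pose_to_letter, hold_frames=10):
        self.pose_to_letter = pose_to_letter  # e.g. {"fist": "a", ...}
        self.hold_frames = hold_frames
        self._last = None        # pose seen on the previous frame
        self._count = 0          # how long the current pose has been held
        self._committed = None   # pose already emitted (avoid repeats)
        self.text = []

    def feed(self, pose):
        # Track how many consecutive frames showed this pose.
        if pose == self._last:
            self._count += 1
        else:
            self._last, self._count = pose, 1
            self._committed = None
        # Commit the letter once the pose has been held long enough.
        if self._count >= self.hold_frames and self._committed != pose:
            letter = self.pose_to_letter.get(pose)
            if letter:
                self.text.append(letter)
            self._committed = pose

    def result(self):
        return "".join(self.text)
```

A real sign-language system would of course need motion (not just static poses) and a language model on top, but the shape-to-letter stage can be this small.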