I would like to use the real-time mapping technique from Elliot Woods and track the skeleton at the same time.
My problem is that skeleton tracking doesn’t work in vvvv 26 (I don’t know why), while Elliot Woods’ patch throws a lot of errors in vvvv 27…
What would be the solution to make them work in the same program? I tried to open the 26 and 27 versions at the same time, but it doesn’t work.
any help would be appreciated.
i’ve never actually used skeleton tracking in vvvv to be honest
on the Kinect Hadouken demo i used ‘hand tracking’ which is a feature of OpenNI and doesn’t require a full skeleton (i.e. no calibration pose)
this is a good place to start
that’ll teach you what’s going on and what to expect
also there’s a package of older opencv plugins and modules there which were used to make the instruction videos
not sure if they’re vvvv27-friendly or not, sorry! but maybe check them out in vvvv26 otherwise, and then perhaps just save out the matrix calibrations afterwards to run in vvvv27
that’s what I did: store and recall the matrix transform; my next step is to transform the skeleton to map it onto people, ouh yeah ;)
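For illustration, here’s a minimal sketch in plain Python (not a vvvv patch) of the store-and-recall idea: dump the 16 values of a 4x4 calibration matrix to a text file in one version and read them back in the other. The file path and the matrix values are placeholders, not anything exported by the actual patches.

```python
import os
import tempfile

# Sketch of "store and recall the matrix transform": write the 16 values
# of a 4x4 calibration matrix (row-major) to a plain text file from one
# vvvv version, then read them back in the other.

def save_matrix(path, m):
    with open(path, "w") as f:
        f.write(" ".join(repr(v) for v in m))

def load_matrix(path):
    with open(path) as f:
        return [float(v) for v in f.read().split()]

calib = [1.0, 0.0, 0.0, 0.0,
         0.0, 1.0, 0.0, 0.0,
         0.0, 0.0, 1.0, 0.0,
         0.1, -0.2, 0.0, 1.0]  # example: small translation in the last row

path = os.path.join(tempfile.gettempdir(), "calibration.txt")
save_matrix(path, calib)
restored = load_matrix(path)
```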
there’s a problem adapting the patch from 26.1 to 27.1:
I use your context and image objects to calibrate the system in 26.1 and save the transform matrix.
I import this matrix into 27.1, where I use the Kinect (skeleton, rgb and depth).
first: the kinect object gives me a mirror of the context image from 26.1;
so I flip the x scale in the cameracoords node at runtime.
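As an aside, that mirror fix amounts to multiplying the stored transform by a mirror matrix. A quick sketch in plain Python (4x4 row-major, row-vector convention as in DirectX/vvvv; all values illustrative):

```python
# Sketch of the mirror fix: scaling x by -1 is multiplying by a mirror
# matrix. 4x4 row-major, row-vector (p' = p * M) convention.

MIRROR_X = [-1.0, 0.0, 0.0, 0.0,
             0.0, 1.0, 0.0, 0.0,
             0.0, 0.0, 1.0, 0.0,
             0.0, 0.0, 0.0, 1.0]

def mat_mul(a, b):
    # c = a * b for 4x4 row-major matrices
    return [sum(a[r * 4 + k] * b[k * 4 + c] for k in range(4))
            for r in range(4) for c in range(4)]

def transform_point(m, p):
    # p' = p * m with p = (x, y, z), w assumed 1
    x, y, z = p
    return tuple(x * m[c] + y * m[4 + c] + z * m[8 + c] + m[12 + c]
                 for c in range(3))

# a point at x = 2 ends up at x = -2 after mirroring
mirrored = transform_point(MIRROR_X, (2.0, 3.0, 1.0))
```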
As you can see there’s an offset between the two renders; what’s more, this offset doesn’t seem to be constant… could it be a problem linked to the final renderer resolution? I don’t know… really lost…
here are pictures from 26.1 and 27.1 to illustrate the offset
original good one from 26.1 :
which becomes this in 27.1: grrrr
hmm, if you pass your values through GetMatrix/SetMatrix they should be the same. try to set up a helper system in your 26.1 patch so you know how it should physically look; but if it looks inverted, i suspect there is some difference in the coords you receive from the Kinect in 26.1 and 27.1. sadly i’m stuck on the MS drivers, so i can’t suggest much
it’s becoming a real emergency for me to find a way out!
maybe i’ve found the reason:
The Depth node from 27 applies the Kinect FOV.
Don’t you think this step should be skipped, since the calibration done in 26b already takes care of the Kinect perspective?
How could I undo the FOV applied by the Depth node in 27b, so that the depth image fits my depth calibration from 26b? I can’t modify the values in the inspector.
would some of you have data about the depth from hierro’s plugins and from Elliot Woods’ depth context?
I can see that hierro’s depth includes the FOV; does Elliot’s depth do the same or not?
I would like to see where the difference that distorts the depth image between 26b and 27b may come from.
fov you can fake with a perspective transform; also the kinect perspective transform might be what you are looking for
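In case it helps to see it outside of vvvv, here’s a rough Python sketch of what “applying the Kinect FOV” to the depth image means, and how the perspective can be undone. The FOV angles (~57° × 43°) and the 640×480 resolution are the commonly quoted Kinect v1 figures, not values read from either Depth node:

```python
import math

# Sketch: "applying the FOV" = unprojecting depth pixels through the
# camera intrinsics; undoing it is the inverse projection. FOV values
# are the usual Kinect v1 approximations (assumptions, not node values).

FOV_H = math.radians(57.0)
FOV_V = math.radians(43.0)
W, H = 640, 480

# focal lengths in pixels, derived from the FOV
FX = (W / 2.0) / math.tan(FOV_H / 2.0)
FY = (H / 2.0) / math.tan(FOV_V / 2.0)

def depth_to_world(u, v, z):
    # unproject depth pixel (u, v) with depth z (metres) into camera space
    x = (u - W / 2.0) * z / FX
    y = (v - H / 2.0) * z / FY
    return (x, y, z)

def world_to_depth(x, y, z):
    # inverse: project a camera-space point back onto the depth image
    u = x * FX / z + W / 2.0
    v = y * FY / z + H / 2.0
    return (u, v)

# round trip: unproject then reproject returns the original pixel
p = depth_to_world(400, 300, 2.0)
u, v = world_to_depth(*p)
```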
I tried to apply the inverse Kinect perspective to the depth… no results… I would like to understand what’s going on here!
I think it’s more likely related to the renderer.