Kinect object tracking & ImagePack?

Hello again everyone.

Is it possible, using a Kinect, to track light or non-human objects?

I wish to have a dark room with a glowing ball, with the Kinect tracking the glowing object in the dark over anything else.

I guess something like this is what I’m after:

If so, how would I go about doing this? I've only really got experience of using the Skeleton nodes with the Kinect, not light or object tracking.

Thank you for any help.

Hi Modified,

it should be possible to track a glowing ball with the Kinect, as long as it does not emit too much IR light (which it probably does not, as it uses LEDs emitting in the visible spectrum). The Kinect, and especially the Kinect One, is able to track even small objects in 3D space.

Try kinect-hitboxes-dx11; box0 returns the mean XYZ coordinates of the colliding objects. See this sample from a workshop.

Will there be any other objects close to the ball (such as your hands juggling the ball, or poi)? How fast is the object going to move? The Kinect One performs much better at tracking fast-moving small objects than the first version.

If you do not need the 3D coordinates of the object, you may also want to consider using a 2D tracking technique such as contour/blob tracking.
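For the 2D case, here is a minimal sketch of what blob tracking boils down to for a single glowing ball in an otherwise dark room (pure NumPy as a stand-in for a proper contour/blob tracker; the threshold value is an assumption you would tune to your footage):

```python
import numpy as np

def track_bright_blob(frame, threshold=200):
    """Return the (x, y) centroid of pixels brighter than threshold, or None.
    frame: 2D grayscale array (e.g. the camera feed converted to luminance).
    Assumes the glowing ball is the only bright region in the dark room."""
    ys, xs = np.nonzero(frame > threshold)
    if xs.size == 0:
        return None  # ball not visible in this frame
    return float(xs.mean()), float(ys.mean())

# synthetic dark frame with a bright 5x5 "ball" centred at (30, 20)
frame = np.zeros((48, 64), dtype=np.uint8)
frame[18:23, 28:33] = 255
print(track_bright_blob(frame))  # → (30.0, 20.0)
```

A real patch would add smoothing and a minimum blob size so a single noisy pixel can't hijack the centroid, but the core idea is just this threshold-then-average step.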

Hi id144,

Thank you very much for your reply, it’s very helpful to start on this.

There will be other objects in the room, primarily people interacting with and throwing the ball. I do also need it to be 3D.

The project, in basic terms, is a dark room with a lit ball. This ball is to be bounced or thrown around the room with the Kinect tracking it.
There will be multi-channel sound around the room, with the sound moving relative to the position of the ball (Kinect data sent via OSC to Max/MSP).

Sounds like a complicated tracking setup; you will get a lot of occlusions, and the same goes for projections. You might also need a high frame rate to track the short moments when the ball bounces off the wall.
I suggest you tackle these challenges through the design of the interaction. One possibility I see is to let people throw the ball into the tracked area, but not let them enter that area.

Thank you once again for your reply :)

Hmmm, I would really like to have people enter the area. I wish there to be as much immersion as possible.
I do have multiple Kinects available, so I will be able to track from various angles, which should eliminate the occlusion issue. This is also why I wish it to be a glowing ball, so that bodies are excluded in the dark.
The sound is to go around the participants, so it wouldn’t have the same effect if they were exterior to the installation.

If the Kinect isn’t able to track a glowing ball in a dark room, would something like the EyeToy be better?
I know that’s not 3D, but a couple of them could be used at the same time to get the same effect.
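For reference, two parallel 2D cameras can recover depth from the horizontal disparity of the ball between the two views. A rough sketch, where the focal length and baseline are illustrative values you would have to calibrate for your own cameras:

```python
def stereo_depth(x_left, x_right, focal_px, baseline_m):
    """Depth of the ball from its horizontal disparity in two parallel
    cameras. focal_px: focal length in pixels; baseline_m: distance
    between the two cameras in metres. Illustrative values only."""
    disparity = x_left - x_right
    if disparity <= 0:
        return None  # point at infinity or a matching error
    return focal_px * baseline_m / disparity

# e.g. 600 px focal length, 20 cm baseline, 40 px disparity ≈ 3 m away
print(stereo_depth(350, 310, 600.0, 0.2))
```

The catch in practice is matching the same blob in both views and keeping the cameras rigidly mounted and calibrated; the Kinect gives you the Z directly and skips all of that.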

I’d rather be using the Kinect though.

There is a great implementation of OpenCV made by ElliotWoods, called the ImagePack: vvvv.packs.image

Based on the demo Kinect 3D Projectile Tracker, it should be easy to track the ball, even without the light. There is even path prediction implemented, which may help you detect collisions with the wall.
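The idea behind such path prediction can be sketched as a simple ballistic extrapolation from two recent position samples. This is a simplification (it assumes constant gravity and ignores drag and spin), not necessarily how the demo itself is implemented:

```python
def predict_position(p0, p1, dt, t_ahead, g=9.81):
    """Extrapolate the ball's position t_ahead seconds after sample p1.
    p0, p1: successive (x, y, z) samples taken dt seconds apart,
    with the y axis pointing up. Assumes pure ballistic flight."""
    # velocity estimated by finite difference between the two samples
    vx = (p1[0] - p0[0]) / dt
    vy = (p1[1] - p0[1]) / dt
    vz = (p1[2] - p0[2]) / dt
    # constant-acceleration extrapolation: gravity only affects y
    x = p1[0] + vx * t_ahead
    y = p1[1] + vy * t_ahead - 0.5 * g * t_ahead ** 2
    z = p1[2] + vz * t_ahead
    return (x, y, z)
```

Checking the predicted point against the known wall planes each frame gives you an early collision estimate, which is useful when the actual bounce happens between two camera frames.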

Hi ID144,

Thanks again for your help. I have the ImagePack, though I’m not entirely sure what’s what with it.

Which would be the best node for tracking in the manner I would like?
I can’t find anything about the Kinect 3D Projectile Tracker.

Thank you once again.

if you isolate the people from the ball, that would be easy…
if the ball is brighter than the background, that’s also no problem

Hey guys,

Thanks a lot for your replies.

I tried isolating the ball from the people while testing yesterday. I am using HSCB and Levels on the output of the Kinect to isolate the glowing ball, and then using ColorTracker to follow the isolated blob. I assume this is what’s meant by isolating the ball.
The issue with my attempt so far is that it’s not giving me a 3D image, and depth is quite important for this project. I have used two cameras for now, but it’s neither very accurate nor smooth. I’ve attached the patch.

OpenCV/ImagePack is looking to be the best way to go about this, I’m just having trouble knowing where to start with the ImagePack.

Thanks again folks.

2kinect.v4p (45.1 kB)

hi, yeah, that sounds about right, but instead of a ColorTracker you have to take a pipet compute shader and make it output the position of the bright pixels. I might be mistaken, but I know someone on the forum, or someone from the community, already has this shader…
then you have to sample the depth map at those coordinates to get the depth there… This is really not that much of a problem to do… sadly I’ll be busy for a few days…
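A CPU-side sketch of the two steps described above — find the bright pixels, then read the depth map at their centroid. This is a hypothetical stand-in for the pipet compute shader, and it assumes the colour and depth textures are already aligned pixel-for-pixel:

```python
import numpy as np

def ball_position_3d(rgb_luma, depth_mm, threshold=200):
    """Pixel position and depth of the glowing ball.
    rgb_luma: 2D grayscale array from the colour feed.
    depth_mm: 2D depth map in millimetres, aligned to rgb_luma.
    Returns (x, y, depth_mm) or None if no bright pixels are found."""
    ys, xs = np.nonzero(rgb_luma > threshold)
    if xs.size == 0:
        return None
    # centroid of the bright region, rounded to the nearest pixel
    x, y = int(round(xs.mean())), int(round(ys.mean()))
    return x, y, int(depth_mm[y, x])

# synthetic aligned frames: bright ball at (30, 20), 1.5 m from the sensor
luma = np.zeros((48, 64), dtype=np.uint8)
luma[18:23, 28:33] = 255
depth = np.full((48, 64), 1500, dtype=np.uint16)
print(ball_position_3d(luma, depth))  # → (30, 20, 1500)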

Thanks a lot for your reply Antokhio.

This might be a silly question, but does the pipet compute shader work on a 2D rendered version of what is being picked up by the Kinect, such as in my Kinect patch using the ColorTracker?

I’ve actually found about three different versions of a CP shader and I’m not sure which to use. Would it be the same as using the Pipet node, or is it a different kettle of fish?
Also, is using the Depth (Kinect, Microsoft) [DX11, texture, vux] node the correct way to get a depth map?

Thanks again and apologies for the noobness, I’m very new to all this.

Out of interest, would a Kinect 2 be better for this project?

yo… if you track the ball by RGB you have to do a depth alignment; the Kinect 2 provides a texture for seamlessly aligning RGB and depth. However, the RGB camera on the Kinect 2 is much slower in dark places…
sorry, I still have to get my head around making this shader for you, but the specs of your final setup are quite vital
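Once you have an aligned depth value at the ball's pixel, getting a full XYZ position (e.g. to send over OSC) is a standard pinhole back-projection. A sketch with roughly Kinect-2-like depth intrinsics; the numbers here are illustrative, the real values come from the device's SDK:

```python
def depth_to_xyz(px, py, depth_mm, fx, fy, cx, cy):
    """Back-project a depth pixel to camera-space metres.
    (px, py): pixel coordinates in the depth image.
    fx, fy: focal lengths in pixels; cx, cy: principal point.
    Assumes an ideal pinhole model with no lens distortion."""
    z = depth_mm / 1000.0           # depth in metres
    x = (px - cx) * z / fx          # horizontal offset from optical axis
    y = (py - cy) * z / fy          # vertical offset from optical axis
    return x, y, z

# illustrative intrinsics for a 512x424 depth image (fx = fy = 365 px)
print(depth_to_xyz(256, 212, 2000, 365.0, 365.0, 256.0, 212.0))  # → (0.0, 0.0, 2.0)
```

A point on the optical axis comes out at x = y = 0, which is a quick sanity check that the intrinsics are plugged in correctly.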

Have a look at the stuff I posted OpenCV PingPong Tracking. It’s not really a solution to your problem yet but could be a starting point.

Sounds like a fun idea!

I have not messed with the Kinect 2; maybe it’s possible to lock the exposure to get a (real) constant 30 FPS? That would also lower noise and make it easier to extract the ball from the people.

Hope you post some clips and patches once things are up and running.

Hi and thanks Meierhans :)

I bought a Kinect 2 but haven’t got a USB adapter for it at the moment, so I will try locking the exposure; that seems like it would be quite accurate. Would it give 3D position data? I only have X & Y axis data at the moment (from one camera) and really want to get the Z in as well.