Possible To Use Two Kinect 2 Sensors Now?

Are there any new solutions for getting two Kinect 2 sensors working in the same patch?

Seems like all the forum posts I read were 5+ years old and either dealt with the Kinect 1 or made it seem like this was not possible…

Any news on this front?

Thanks!

Rumor has it you can only get pointclouds from two Kinects if each is connected to a separate USB3 controller, but I haven’t tested it…


Yeah, I saw some mention of that but nothing looked confirmed. Hacking away at it, all suggestions welcome :D

I have a second PC. If I were to go that route (one Kinect per PC), how would I go about merging the point clouds? UDP?

There is Texture as Raw, then optional compression (gzip, or whatever else is available); then yes, UDP should be the best option… then Texture from Raw on the receiving end…
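To make that concrete, here is a minimal C++ sketch of the sending side, assuming POSIX sockets and zlib for the gzip-style compression; the port, peer address, and chunk size are placeholder values, and this is the underlying idea rather than the actual vvvv nodes. A full K2 depth frame (512 × 424 × 2 bytes) is larger than a single UDP datagram, so it has to be split into chunks, and a real sender would prepend a small frame/chunk header so the receiver can reassemble and decompress back to the raw texture:

```cpp
// Minimal sketch: compress a raw depth frame with zlib and send it
// over UDP in chunks. Assumes POSIX sockets and zlib; the peer
// address, port, and chunk size are placeholders.
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <zlib.h>
#include <algorithm>
#include <cstdint>
#include <vector>

void sendDepthFrame(const uint16_t* depth, size_t pixelCount,
                    const char* peerIp, uint16_t port) {
    const size_t rawBytes = pixelCount * sizeof(uint16_t);

    // zlib wants a worst-case output buffer; compressBound supplies it.
    uLongf compBytes = compressBound(rawBytes);
    std::vector<Bytef> comp(compBytes);
    compress(comp.data(), &compBytes,
             reinterpret_cast<const Bytef*>(depth), rawBytes);

    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    sockaddr_in peer{};
    peer.sin_family = AF_INET;
    peer.sin_port = htons(port);
    inet_pton(AF_INET, peerIp, &peer.sin_addr);

    // A UDP datagram tops out around 64 KB, so split the compressed
    // frame into chunks. A real sender would add a frame/chunk header
    // so the receiver can reassemble in order.
    const size_t chunk = 60000;
    for (size_t off = 0; off < compBytes; off += chunk) {
        size_t n = std::min(chunk, static_cast<size_t>(compBytes) - off);
        sendto(sock, comp.data() + off, n, 0,
               reinterpret_cast<sockaddr*>(&peer), sizeof(peer));
    }
    close(sock);
}
```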


Thanks! Starting to get some results :D

Yes, you can use more than one K2 per PC if you use the OpenNI2 library and the libfreenect2 driver. You’ll need each one on its own USB3 controller for best bandwidth. I’ve run four on one PC this way using a quad-bus USB3 adapter from Startech.
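For anyone wanting to try this, here’s a minimal sketch of the enumeration side using the standard OpenNI2 C++ API (with the libfreenect2 driver installed so K2s show up); error handling is omitted for brevity:

```cpp
// Minimal sketch: open every depth camera OpenNI2 enumerates.
// With the libfreenect2 driver in OpenNI2's driver folder, each K2
// shows up with its own URI. Error handling omitted for brevity.
#include <OpenNI.h>
#include <cstdio>
#include <vector>

int main() {
    openni::OpenNI::initialize();

    openni::Array<openni::DeviceInfo> list;
    openni::OpenNI::enumerateDevices(&list);
    std::printf("found %d device(s)\n", list.getSize());

    std::vector<openni::Device*> devices;
    std::vector<openni::VideoStream*> streams;
    for (int i = 0; i < list.getSize(); ++i) {
        auto* dev = new openni::Device();
        dev->open(list[i].getUri());  // one URI per physical camera
        auto* depth = new openni::VideoStream();
        depth->create(*dev, openni::SENSOR_DEPTH);
        depth->start();
        devices.push_back(dev);
        streams.push_back(depth);
    }

    // ... per camera: openni::VideoFrameRef f; streams[i]->readFrame(&f); ...

    for (auto* s : streams) { s->stop(); s->destroy(); delete s; }
    for (auto* d : devices) { d->close(); delete d; }
    openni::OpenNI::shutdown();
}
```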

And concerning using multiple PCs, I strongly recommend converting the depth texture to world XYZ integer millimeters and sending it in that form, as it will usually result in less data being sent. Plus, that way each camera PC does the texture-to-world-points conversion, so your master PC does not have to do it for all of them - a huge processing win; it just adds all the clouds together.
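As a rough illustration of that conversion step (a sketch only - the packed Point16 struct is my own assumption, though OpenNI2’s world units are millimeters by default), each camera PC could do something like this before sending:

```cpp
// Minimal sketch, run on each camera PC: convert the R16 depth frame
// to world XYZ in integer millimeters, skipping empty pixels, so the
// master PC only has to merge point lists. Uses OpenNI2's
// CoordinateConverter; the Point16 layout is an assumption.
#include <OpenNI.h>
#include <cstdint>
#include <vector>

struct Point16 { int16_t x, y, z; };  // world position in mm

std::vector<Point16> depthToWorldMM(const openni::VideoStream& stream,
                                    const openni::VideoFrameRef& frame) {
    const auto* depth =
        static_cast<const openni::DepthPixel*>(frame.getData());
    std::vector<Point16> cloud;
    for (int y = 0; y < frame.getHeight(); ++y) {
        for (int x = 0; x < frame.getWidth(); ++x) {
            openni::DepthPixel z = depth[y * frame.getWidth() + x];
            if (z == 0) continue;  // no reading at this pixel
            float wx, wy, wz;      // OpenNI2 world units are millimeters
            openni::CoordinateConverter::convertDepthToWorld(
                stream, x, y, z, &wx, &wy, &wz);
            cloud.push_back({static_cast<int16_t>(wx),
                             static_cast<int16_t>(wy),
                             static_cast<int16_t>(wz)});
        }
    }
    return cloud;
}
```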

I’ve used these methods to run up to eight K2s covering very large areas.


Thanks @mediadog! Sounds very cool, can’t wait to try it out (although I’m not sure if I have a multi-bus motherboard…). But nonetheless it will give me a good heading :D

A note about sending XYZ points instead of the texture: sending the pointcloud works out to be less data in my applications, as I am usually doing things like user isolation and/or background subtraction, as well as subsampling. Obviously, since each XYZ point is three values versus one depth value per pixel, if you end up using more than a third of the R16 depth image pixels, then sending the pointcloud will be more data. That said, even then it may be worth it so you can do the compute-intensive stuff on each camera PC.
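For example (a hedged sketch of the kind of per-camera reduction described above, not anything from an existing pack - the stride and tolerance values are made up), you could compare each depth pixel against a captured background frame, drop matches, and subsample, so well under a third of the pixels survive before the XYZ conversion:

```cpp
// Minimal sketch: drop pixels that match a captured background frame
// (within a tolerance) and subsample by a stride, so far fewer pixels
// get converted to XYZ points and sent. Stride and tolerance are
// made-up values.
#include <cstdint>
#include <cstdlib>
#include <vector>

std::vector<size_t> foregroundIndices(const uint16_t* depth,
                                      const uint16_t* background,
                                      int width, int height) {
    const int stride = 2;        // keep every 2nd pixel in x and y
    const int toleranceMM = 40;  // "same as background" margin
    std::vector<size_t> keep;
    for (int y = 0; y < height; y += stride) {
        for (int x = 0; x < width; x += stride) {
            size_t i = static_cast<size_t>(y) * width + x;
            if (depth[i] == 0) continue;  // no reading
            if (background[i] != 0 &&
                std::abs(int(depth[i]) - int(background[i])) < toleranceMM)
                continue;                 // matches background, drop it
            keep.push_back(i);            // only these become XYZ points
        }
    }
    return keep;
}
```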

Concerning using OpenNI2 and libfreenect2: I wrote my own OpenNI2 wrapper plugin many years ago and have been using that since, so I’m not sure of the current state of the Kinect (OpenNI) node or support in VL or the pointcloud pack. In any case, I think you should be able to use the OpenNI2 .dll directly in VL now. You just need to copy the libfreenect2 drivers over to the OpenNI2 drivers folder.

(Edit) Oh, and that Startech card just needs one x4 PCI Express slot, so it will work in any x16 slot a graphics card works in.

Oh, and one other plus of using OpenNI2 is that you can not only use multiple cameras, but mix and match them as well. So you can use K2s for overall coverage, and a short-range Primesense or Intel Realsense for specific areas for things like hands, etc. I have used K1s, K2s, all the Primesense and ASUS XTions (1 & 2), Orbbecs, and a few of the Intels. Sadly, the ZED I had to write a specific driver for.

thanks @mediadog :D

My rig needs an update so I’ll be looking into that Startech USB adaptor!
