Possible To Use Two Kinect 2 Sensors Now?

A note about sending XYZ points instead of the texture: in my applications the pointcloud works out to be less data, since I'm usually doing user isolation and/or background subtraction, plus subsampling. Obviously if you end up keeping more than about a third of the R16 depth image pixels, then sending the pointcloud becomes more data. That said, even then it may be worth it so that the compute-intensive work happens on each camera's PC.
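To make the "third of the pixels" break-even concrete, here's a back-of-envelope sketch. The frame size (Kinect v2's 512x424 depth image at 2 bytes/pixel for R16) and the packing of XYZ as three 16-bit values per point are my assumptions, not something stated above; with float32 XYZ (12 bytes/point) the break-even drops to one sixth.

```python
# Break-even: when does sending XYZ points cost more than the R16 depth texture?
# Assumptions: Kinect v2 depth frame 512x424, R16 = 2 bytes/pixel,
# XYZ packed as three 16-bit values = 6 bytes per point.

DEPTH_W, DEPTH_H = 512, 424
total_pixels = DEPTH_W * DEPTH_H
depth_bytes = total_pixels * 2            # full R16 texture

bytes_per_point_16 = 3 * 2                # 16-bit X, Y, Z
bytes_per_point_32 = 3 * 4                # float32 X, Y, Z

# Fraction of pixels you can keep before the pointcloud exceeds the texture
fraction_16 = (depth_bytes / bytes_per_point_16) / total_pixels
fraction_32 = (depth_bytes / bytes_per_point_32) / total_pixels

print(f"16-bit XYZ break-even: {fraction_16:.3f} of pixels")  # 0.333
print(f"float32 XYZ break-even: {fraction_32:.3f} of pixels")  # 0.167
```

So with heavy background subtraction and subsampling you're usually well under that fraction, which is why the pointcloud wins.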

Concerning OpenNI2 and libfreenect2: I wrote my own OpenNI2 wrapper plugin many years ago and have been using it since, so I'm not sure of the current state of the Kinect (OpenNI) node, or of support in VL or the pointcloud pack. In any case, I think you should be able to use the OpenNI2 .dll directly in VL now. You just need to copy the libfreenect2 driver over to the OpenNI2 drivers folder.
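The driver-copy step is just a file copy into OpenNI2's Drivers subfolder. Here's a minimal sketch of it in Python; the driver filename (`freenect2-openni2.dll`) and folder layout are assumptions on my part, so match them to whatever your libfreenect2 build actually produced and wherever your OpenNI2 redistributable lives.

```python
# Sketch: install the libfreenect2 OpenNI2 driver into OpenNI2's Drivers folder.
# Filenames and paths are assumptions -- check your own build output.
import shutil
from pathlib import Path

def install_freenect2_driver(freenect2_bin: Path, openni2_root: Path) -> Path:
    """Copy the libfreenect2 OpenNI2 driver DLL into <openni2_root>/Drivers."""
    drivers_dir = openni2_root / "Drivers"
    drivers_dir.mkdir(parents=True, exist_ok=True)
    # Hypothetical driver filename; substitute your build's actual DLL name.
    src = freenect2_bin / "freenect2-openni2.dll"
    dst = drivers_dir / src.name
    shutil.copy2(src, dst)
    return dst
```

OpenNI2 scans that Drivers folder at startup, so once the DLL is there your Kinect 2 should show up as an OpenNI2 device.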

(Edit) Oh, and that StarTech card only needs an x4 PCI Express slot, so it will work in any x16 slot that a graphics card would fit in.