I am busy cleaning it up into something not TOO embarrassing... and I am still fighting some library path resolution issues needed to make it portable.
Note this will not be a node-for-node replacement for the previous OpenNI nodes, as I took a different approach way back when the Kinect first came out, and my stuff depends on that legacy. So right now it is a single node that outputs a DX11 depth texture and can optionally output spreads of Z data and/or world-space 3D points.
Now, though, I have a DX11 compute shader that generates the pointcloud (the world conversion). Besides being much faster, it takes as inputs a camera transform and both include- and exclude-area transforms. Again, this was done before things like the pointcloud pack came out, so it's a bit idiosyncratic.
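To make the idea concrete, here is a rough CPU sketch (in Python/numpy) of what that conversion does: back-project each depth pixel through the camera intrinsics, move it into world space with the camera transform, then keep points inside the include area and drop points inside the exclude area, each area modeled as a transformed unit box. All the names and parameters here (`fx`, `fy`, `cx`, `cy`, the unit-box convention) are illustrative assumptions, not the plugin's actual API, and the real version runs per-pixel on the GPU.

```python
# Hypothetical CPU sketch of the depth -> world-points conversion.
# Intrinsics (fx, fy, cx, cy) and the unit-box area convention are
# assumptions for illustration, not taken from the plugin.
import numpy as np

def depth_to_world(depth, fx, fy, cx, cy, camera_xform,
                   include_inv=None, exclude_inv=None):
    """depth: (H, W) array of Z values (0 = no reading).
    camera_xform: 4x4 camera-to-world matrix.
    include_inv / exclude_inv: 4x4 world-to-box matrices; a box is the
    axis-aligned unit cube centered at its local origin."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    # Back-project pixels through pinhole intrinsics.
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z, np.ones_like(z)], axis=-1).reshape(-1, 4)
    pts = pts[pts[:, 2] > 0]        # drop pixels with no depth reading
    pts = (pts @ camera_xform.T)[:, :3]  # camera space -> world space

    def in_unit_box(p, inv_xform):
        # Transform world points into box-local space and test the cube.
        local = np.c_[p, np.ones(len(p))] @ inv_xform.T
        return np.all(np.abs(local[:, :3]) <= 0.5, axis=1)

    if include_inv is not None:
        pts = pts[in_unit_box(pts, include_inv)]   # keep inside include
    if exclude_inv is not None:
        pts = pts[~in_unit_box(pts, exclude_inv)]  # drop inside exclude
    return pts
```

The nice thing about driving the areas with transforms is that scaling, rotating, or moving a clip region is just a matrix edit, which maps directly onto how the compute shader consumes them.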
Also, my legacy OpenNI 1 plugin output an OpenNI context, so it could be used with things like the existing OpenNI RGB node; with this one, those nodes don't exist, so I need to make one of those, which I have not done yet.
On the plus side, I am working on making this plugin handle spreads for multiple cameras.
But maybe at least someone can take this code and put it into something more familiar; gimme a few more days. If you get really desperate I can email you a zip to test with — I could use the feedback!