Depthkit integration

Been fiddling with Depthkit. It's the old RGBD software they made the CLOUDS documentary with.
It's a neat piece of software for calibrating your Kinect and DSLR, filming a scene and exporting it in various formats. All of this can be done in v4, but it works quite nicely and is well worth the 100 euro artist license.
I'm trying to find a way to integrate the available export formats into a realtime v4 setup, switching between lots of recorded clips (I'm in a VR-movie type of scenario).

Solution 1: OBJ and image sequences. This could be OK, but it's a bit of a hassle when dealing with sound.

Solution 2: Depthkit can export a nice little video containing depth and texture data. This is far more optimized, discarding a lot of unnecessary data. It looks like this (sorry for the pyjama-rocker style).

It would be great to extract the 3D data from this texture, but I don't have that amount of shader juice in my fridge.
Depthkit includes some shaders that do this job in Unity, attached here in case anyone understands better what's going on.
Working with that and the HAP player, for example, would be a great way to load tons of filmed animations, with embedded sound etc.

That's all. Maybe someone would be interested in helping integrate this tool with v4.

I totally support this! I've been messing with Kinect2 Studio recordings and SuperPhysical, and it's amazing what you can do. If we could get multiple recordings with sound into the HAP player, we could create virtual stages with many people on them.

Hi Levi!

We used DepthKit in our recent virtual reality and dance project.

Also check our website, where we have our custom shader that works with hue-encoded depth and remapped RGB. Inspect the website to get the source code :D
For the installation version of the project, we use compressed but unencoded DDS sequences with RGB and Depth.

See this video from Elliotwoods on how RGB+D in DepthKit is made.

In the frame you've posted, you have the per-pixel combined texture, where the RGB is already mapped into the depth space. That is quite straightforward to use:
1 - Remap the hue-encoded depth back to a 16-bit value in the red channel, then multiply and add to the R channel to adjust for the defined depth range.
2 - Extrude the mesh with the decoded depth and apply the texture using the existing UV coordinates.
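To illustrate step 1 above, here's a minimal CPU-side sketch of the hue decode in Python. It assumes the hue maps linearly across the clip's near/far depth range; the range values below are placeholders (read the real ones from your clip's metadata), and any extra R-channel refinement is left out.

```python
import colorsys

def decode_hue_depth(r, g, b, depth_min=0.5, depth_max=4.5):
    """Decode one hue-encoded depth pixel back to metres.

    r, g, b are floats in [0, 1]. depth_min / depth_max are
    hypothetical defaults -- use the near/far range your
    Depthkit export actually defines.
    """
    hue, _, _ = colorsys.rgb_to_hsv(r, g, b)  # hue in [0, 1)
    # Linear remap of hue into the recorded depth range:
    # this is the "multiply and add" step from the post above.
    return depth_min + hue * (depth_max - depth_min)
```

A shader version is the same math per pixel: convert RGB to hue, then multiply-add with the range constants before displacing the mesh.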

In our project we eventually abandoned the combined per-pixel texture approach, as the official version of DepthKit can only remap the RGB into the depth space at the resolution of the depth camera, meaning we got a 512x424 texture instead of (almost) Full HD. In VR this was a very low resolution, around 2 DPI (two dots per inch). It also made me wonder what the point of using a DSLR is if you end up with lower image quality than the built-in camera of the Kinect v2.

So we made a patch that reuses the calibration input data and projects the texture onto the extruded mesh.

Regarding the OBJ sequence: it is easy to use, but we found a memory leak in the GeometryFile node. After a few minutes of reading the sequence in a loop, the frame rate dropped from 90 fps to 15 fps. The fidelity of the OBJ model was also not satisfying.

As a side note, if a small disk footprint for the RGBD data is not a top priority, just use the Kinect Tools to record and play back. It's easy to use and you can play back your recording session instantly. You will need an SSD, and the recordings are huge. One issue with the Kinect v2 RGB camera is that you cannot regulate the exposure, so sometimes you end up with an overexposed capture subject; in that case, adjust the brightness of the background so your subject is exposed as desired.

Hello, thanks a lot for your post, your project looks great.
Would love to see it in VR if you ever bring it near Berlin.

The good thing about the combined texture is that you can keep a normal video or sound editing workflow, which is rather hard to do with DDS, OBJ, or the Kinect Tools.
There is also a 1080p combined image, so I guess that would solve the resolution issues, with minor modifications to the shader.

I made some efforts to port the Unity shader for the per-pixel combined format.
It's working somewhat but still needs work (attached here).
Perhaps the hue extrusion would be easier, you are right. Not sure how to treat the R channel?

A geometry shader solution would actually be ideal to make noodling possible.

Couldn't find those. Could you post a link perhaps?

Here's the shader; it's far from perfect, but I hope it will help you anyway.
You would be better off using some of the Kinect shaders that use the raytable.

You can actually view the website in WebVR already :), but of course the real experience is quite different.

Good luck with the minor modification for the 1080p combined image. Note that the RGB is unaligned: you need to undistort it and project it into the depth space using the correspondence parameters.
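For reference, projecting between the depth and colour spaces boils down to standard pinhole-camera math. Here's a hedged sketch of mapping a depth pixel into the colour image; every calibration value below (intrinsics `K_depth`/`K_color`, extrinsics `R`, `t`) is a hypothetical placeholder, and lens distortion is omitted, so undistort the colour image first as noted above.

```python
import numpy as np

def depth_pixel_to_color_uv(u, v, depth_m, K_depth, K_color, R, t):
    """Map a depth-image pixel (u, v) at depth_m metres to a pixel
    position in the (higher-resolution) colour image.

    K_depth, K_color: 3x3 pinhole intrinsics; R, t: rotation and
    translation from the depth camera to the colour camera.
    """
    # Back-project the depth pixel to a 3D point in depth-camera space.
    x = (u - K_depth[0, 2]) * depth_m / K_depth[0, 0]
    y = (v - K_depth[1, 2]) * depth_m / K_depth[1, 1]
    p_depth = np.array([x, y, depth_m])
    # Transform into the colour camera's coordinate frame.
    p_color = R @ p_depth + t
    # Project with the colour intrinsics.
    u_c = K_color[0, 0] * p_color[0] / p_color[2] + K_color[0, 2]
    v_c = K_color[1, 1] * p_color[1] / p_color[2] + K_color[1, 2]
    return u_c, v_c
```

With identity extrinsics and matching intrinsics the mapping is the identity, which is a handy sanity check before plugging in real calibration data.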
