Orbbec AstraDotNet VL Implementation - Problem with texture format

Hi, we are implementing the AstraDotNet library (GitHub - bibigone/AstraDotNetDemo: Simple .Net solution to demonstrate working with Orbbec Astra (Pro) depth sensors from C#) for VL and it is working out very well, except for one problem. The Astra SDK provides the depth data, color data etc. as “streams”. Those can be listened to in an Async Region, and a “Frame” can be retrieved, which is basically an IntPtr that can be interpreted as a texture. So far so good, everything works for color and depth. However, what I really need is the “PointStream”, which gives the world position in mm and is also lens-undistorted. This stream works exactly like the others, except that the IntPtr points to Vector3 data, so everything is interpreted incorrectly.
Now the question: Is there a way to control how data from a pointer is interpreted as an image?
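For illustration, here is a minimal C# sketch of what we are after, assuming the frame exposes its buffer as an IntPtr plus a resolution (dataPtr, width and height are placeholders, not necessarily the actual AstraDotNet member names):

using System;
using System.Numerics;

// Reinterpret the raw frame buffer as world positions.
// Each pixel is one Vector3 (3 x 32-bit float = 12 bytes),
// not a 3-byte BGR triple, so the stride is completely different.
static Vector3[] ReadPointFrame(IntPtr dataPtr, int width, int height)
{
    var points = new Vector3[width * height];
    long byteCount = (long)points.Length * 12; // 12 = sizeof(Vector3)
    unsafe
    {
        fixed (Vector3* dst = points)
        {
            Buffer.MemoryCopy((void*)dataPtr, dst, byteCount, byteCount);
        }
    }
    return points;
}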

The PointStream with a DataPtr as Vector3:

And the result as an image:

Here the ColorStream, which is working correctly:

Note that the StreamToImage node is doing the exact same thing as in the above screenshot.

In C++, the same thing works something like this:

// Expand the visualizer's 3-channel RGB output into a 4-channel
// RGBA buffer so it can be uploaded as a texture.
astra_rgb_pixel_t* vizBuffer = visualizer_.get_output();
for (int i = 0; i < width * height; i++)
{
    int rgbaOffset = i * 4;
    displayBuffer_[rgbaOffset]     = vizBuffer[i].r;
    displayBuffer_[rgbaOffset + 1] = vizBuffer[i].g;
    displayBuffer_[rgbaOffset + 2] = vizBuffer[i].b;
    displayBuffer_[rgbaOffset + 3] = 255; // opaque alpha
}
texture_.update(displayBuffer_.get());

As you can see, there is a per-pixel copy going on here that reorders the data into a 4-channel layout.
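For the point stream, the comparable CPU copy in C# might look like this (a sketch, assuming the Vector3 points have already been read out of the frame as above; the RGBA float layout is our choice, not something mandated by the SDK):

using System.Numerics;

// Expand XYZ into an RGBA float buffer so it can be uploaded as a
// 32-bit 4-channel float texture (the alpha channel is just padding).
static float[] PointsToRgbaFloats(Vector3[] points)
{
    var rgba = new float[points.Length * 4];
    for (int i = 0; i < points.Length; i++)
    {
        int o = i * 4;
        rgba[o]     = points[i].X; // world X in mm
        rgba[o + 1] = points[i].Y; // world Y in mm
        rgba[o + 2] = points[i].Z; // world Z in mm
        rgba[o + 3] = 1f;          // padding / alpha
    }
    return rgba;
}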

Any hints?

So the point stream is not an 8-bit 3-channel BGR byte format, but a 32-bit 3-channel float format? If that’s the case we need to add this format on our end…

Yes, that’s what we are suspecting. Might be only 8-bit though. The documentation is unfortunately quite bad and we can’t find more info.
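One way to settle it without documentation is to divide the frame’s byte length by the pixel count (a sketch; byteLength is assumed to be readable from the frame):

// 3 bytes/pixel  -> 8-bit 3-channel BGR
// 12 bytes/pixel -> 32-bit 3-channel float (Vector3)
static int BytesPerPixel(long byteLength, int width, int height)
    => (int)(byteLength / ((long)width * height));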

just to make sure you found this to start from: https://github.com/vvvv/VL.Devices.Astra

yes, I think it’s based on that. @sebl started the project and we are extending it to work with multiple devices and are trying to implement the PointStream

the documentation says the PointStream is Vector3f, meaning each component is of type float. do you need that as a spread or rather as a texture?

no, we need this as a texture, similar to the depth stream. It should look like this:

This is from the Astra OpenNI contribution.
The thing is that this world position could in theory also be derived from the depth, but the SDK provides a much better version with the PointStream, which is lens-undistorted, aspect-corrected and in mm.
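For comparison, the naive derivation from depth would be the standard pinhole unprojection (a sketch, ignoring lens distortion; fx, fy, cx, cy are the camera intrinsics, and the distortion handling is exactly the part the PointStream does better):

using System.Numerics;

// Back-project a depth pixel (u, v) with depth z (in mm) into camera space.
// Without a distortion model this only approximates what the PointStream delivers.
static Vector3 Unproject(int u, int v, float z, float fx, float fy, float cx, float cy)
    => new Vector3((u - cx) * z / fx, (v - cy) * z / fy, z);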

You can also try to interpret it as a structured buffer of Vector3… That might be the easiest way and wouldn’t need an extra CPU copy to convert the format.

yes, thought of that as well, but can’t get it to work for some reason (screenshots attached).

the data behind the pointer might not be valid anymore when the buffer uploads… do you know how memory management of the data should work? do you have to dispose the data yourself or does the camera dispose it and you need to copy the data on receive?

well, the data seems to be valid when I upload it as a texture, so I guess that’s not the problem. Or does the ToImage node already copy the data? The ReaderFrame is disposed, as you can see in the third screenshot of my initial post. Other than that I’m not sure about memory management.

yes, the patches above all do CPU copies when converting the format and then own the memory themselves. they do the copy before you dispose the ReaderFrame. my guess would be that disposing the ReaderFrame will invalidate the memory. a cheap trick is to use a FrameDelay and dispose the ReaderFrame from the last frame. this way you make sure that the data is valid in the complete vvvv frame.

The solution was to copy the PointFrame before disposing it, and to upload the data as a buffer instead of as a texture.
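In code, the pattern is roughly this (a sketch; dataPtr and byteLength stand in for the actual frame members):

using System;
using System.Runtime.InteropServices;

// Copy the frame data into memory we own *before* disposing the
// ReaderFrame, since disposing it invalidates the native buffer.
static byte[] CopyFrameData(IntPtr dataPtr, int byteLength)
{
    var copy = new byte[byteLength];
    Marshal.Copy(dataPtr, copy, 0, byteLength);
    return copy; // safe to upload as a structured buffer later in the frame
}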
