Let’s suppose I have two PCs, a master one and a slave one.
On the slave one I have an Intel RealSense camera connected and a Python script capable of grabbing the depth image and sending it over the network via TCP (I haven’t developed it yet, but I think I will take some inspiration from this code).
On the master I’m running a Gamma patch, and I would like to read that data from the incoming TCP connection and feed it to the post-processing depth filters. I think some clues about the “crunching” of that data can be extracted from the server.py code here, but I don’t know where to start.
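For what it’s worth, the sending side of such a script can be sketched with nothing but the Python standard library. This is only a sketch under assumptions: `grab_encoded_depth_frame` is a hypothetical callable standing in for the actual camera grab and image encoding, and the 4-byte length prefix is a framing convention I’m assuming here, since TCP itself has no message boundaries.

```python
import socket
import struct

def pack_frame(frame_bytes: bytes) -> bytes:
    # TCP is a byte stream with no message boundaries, so each encoded
    # depth frame is prefixed with its length as a 4-byte big-endian int.
    return struct.pack(">I", len(frame_bytes)) + frame_bytes

def stream_depth_frames(host: str, port: int, grab_encoded_depth_frame):
    # grab_encoded_depth_frame is a hypothetical callable that returns
    # one depth frame already encoded to bytes, or None to stop.
    with socket.create_connection((host, port)) as sock:
        while True:
            frame = grab_encoded_depth_frame()
            if frame is None:
                break
            sock.sendall(pack_frame(frame))
```

The receiver then only needs to read the 4-byte prefix to know how many bytes belong to each frame.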
the problem with your approach is that what comes out of the RealSense node (and is expected by subsequent downstream nodes) is not an “image” that you can easily encode and send over the network the way the python script you reference does. it is rather a type called FrameSet that holds much more info than just the depth image.
is there a reason you want to do all the processing on the receiver and not on the sender already? because if you did it on the sender, you could then simply send the resulting depth image.
You are not gonna deal with that without a custom plugin…
You need to convert the image to a byte sequence encoded as JPG, or better DDS (on the sender)
Send the byte sequence over TCP
Receive the byte sequence over TCP
Using some DX method, convert the bytes back to an image (for instance this one Texture.FromStream(Device,Stream,Usage,Pool) | Microsoft Docs), but that should work in a Stride context…
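The “receive byte sequence over TCP” step above can be sketched in Python with just the standard library. A sketch under an assumption: that the sender prefixes each encoded frame with a 4-byte big-endian length (some such convention is needed because TCP has no message boundaries). Decoding the resulting JPG/DDS bytes back into a texture is then up to the receiving environment.

```python
import socket
import struct

def recv_exact(sock: socket.socket, n: int) -> bytes:
    # recv() may return fewer bytes than requested, so loop until
    # exactly n bytes have been read from the stream.
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("socket closed mid-frame")
        buf += chunk
    return buf

def recv_frame(sock: socket.socket) -> bytes:
    # Read the assumed 4-byte big-endian length prefix, then the payload:
    # one complete encoded depth frame, ready for image decoding.
    (length,) = struct.unpack(">I", recv_exact(sock, 4))
    return recv_exact(sock, length)
```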
if you don’t want to deal with code, you have to use Spout TCP or something similar
i don’t quite understand what the fact that the sender is on linux would have to do with the question on which end you’d do the post-processing.
doing the post-processing on the master using the RealSense nodes will be more difficult because, as pointed out above, the filter nodes operate on a datatype called FrameSet, which would be harder to transport over the network.
but yes, if you do the processing on the sender already, then you only send the final image, which you can then decode depending on the encoding, e.g. using the ImageDecoder [Advanced] node that comes with skia.
Thank you for all the suggestions provided.
I think that, related to what we are talking about, we can add one more element to the discussion here.
I’ve just found a project in the official Intel RealSense documentation which uses two different tools to share data and images from a RealSense connected to a networked PC with a second PC:
rs-server (only for Linux machines);
realsense2-net (available for different platforms).
Although transmission bandwidth issues are obvious (USB3 is much faster than Gigabit Ethernet), I was able to “stream” data from a Linux PC (with the RealSense connected) to a RealSense Viewer on Windows 10 on the same local network.