VL.IO.NDI


aye randall!

i fear NDI is not your weapon of choice for these kinds of things.
as you already assumed, NDI applies some aggressive, proprietary compression to the image to achieve low bandwidth (i.e. several full-HD streams over gigabit ethernet are possible). it is highly optimized for the CPU's vector extensions (SSE etc.) for low-latency video streaming. i'm not sure if you have a lot of choice regarding the transmitted video formats, but i'm pretty sure it prefers "visual quality" over "technical correctness" and assumes plain broadcast video images…

anyway, i would have to look into this more deeply to be sure (an update to NDI 3.8 is necessary anyway - when there is time for that i will have a look).

if you want to share on the same machine, shared GPU textures are the way to go. they have no delay, no copy and no compression. or do you need CPU pixel data?

I think he be needin to share data between multiple gpus ;)

I've used Spout to do this, and PNG compression: encoding the 16 bits of one channel into two 8-bit channels in a shader and sending that via Spout. All the other formats are too lossy to work. Not sure if it would cope with HD or not, mine were depth textures from a Kinect. (@ravazquez)
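For illustration, here's a minimal CPU-side sketch of that split/recombine step (the original was done in a shader; the function names are just for this example):

```csharp
// Split a 16-bit depth value into a high and a low byte so it survives an
// 8-bit-per-channel transport, then recombine it on the receiving side.
static (byte Hi, byte Lo) EncodeDepth(ushort depth) =>
    ((byte)(depth >> 8), (byte)(depth & 0xFF));

static ushort DecodeDepth(byte hi, byte lo) =>
    (ushort)((hi << 8) | lo);

// Round trip: a Kinect depth value in millimetres comes back unchanged.
var (hi, lo) = EncodeDepth(1234);
System.Console.WriteLine(DecodeDepth(hi, lo)); // prints 1234
```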

@ravazquez
The NDI thread is getting a bit derailed, but couldn't you do something like PixelData (DX11.Texture 2d) → Writer (Raw SharedMemory) → Reader (Raw SharedMemory) → AsTexture (DX11.Texture 2d Raw)?

It works in principle (see patch), but the combination PixelData / AsTexture is borked; something seems to be wrong with the calculation of the stride.

SharedMemTex.v4p (19.1 KB)
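Just to illustrate the idea in plain .NET (a hedged sketch, not the actual VVVV/VL nodes): one side writes the raw pixel data into a named shared-memory block, the other opens the same block by name and reads it back for a texture upload. The buffer name and size are assumptions for the example; note that the row stride has to match on both ends, which seems to be exactly what goes wrong in the PixelData / AsTexture combination above.

```csharp
using System.IO.MemoryMappedFiles;

const string Name = "SharedMemTex";   // assumed name for this example
const int Width = 1920, Height = 1080;
const int Stride = Width * 4;         // BGRA8: 4 bytes per pixel, no row padding assumed
const int Size = Stride * Height;

// "Writer" side: keeps the shared-memory block alive and copies raw pixels into it.
using var writerMap = MemoryMappedFile.CreateOrOpen(Name, Size);
using (var view = writerMap.CreateViewAccessor())
{
    byte[] pixels = new byte[Size];   // filled from PixelData in the real patch
    view.WriteArray(0, pixels, 0, pixels.Length);
}

// "Reader" side (normally another process): opens the same block by name and reads it back.
using var readerMap = MemoryMappedFile.OpenExisting(Name);
using (var view = readerMap.CreateViewAccessor())
{
    byte[] pixels = new byte[Size];
    view.ReadArray(0, pixels, 0, pixels.Length);
    // hand `pixels` to AsTexture / a DX11 texture upload, using the same Width/Height/Stride
}
```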

@catweasel - that won't work between GPUs though, right?

@bjoern - that's mega heavy, coming back to the CPU

@mrboni
Well I know. But it’s the same for NDI, isn’t it?

@mrboni you can run Spout locally, so yes, but I guess you can't share a texture between them, never tried tbh. I'd guess that if NDI needs to pull the texture into VL via the CPU, using Spout would be less taxing, as it gets the shared texture in another thread and then shares that via shared memory on the receiver. But tbh you'd have to test it to be sure, and let us know ;)

Anyone got this running with a recent gamma?

@tobyk i can confirm this needs an update. it seems some nodes have changed in some base libraries (e.g. "Pointer" in vl.imaging appears to not be there any more - i don't know if this is just a name change or something else…).
i don't have a lot of time at hand right now but i'll look into this when i can…

@motzi the image now has a property Bytes that you can pin in order to get the pointer. this is a change I did a few days ago to adapt to the new design:

(screenshot)

@tonfilm Watch out, you need to unpin! using (var handle = data.Bytes.Pin())
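In other words (a minimal sketch, assuming data.Bytes is a ReadOnlyMemory&lt;byte&gt; as the snippet above suggests; the native send call is only a placeholder):

```csharp
using System;
using System.Buffers;

static class PinExample
{
    // Hand a managed buffer to native code: Pin() keeps the GC from moving it
    // and returns a MemoryHandle, which must be disposed to unpin the memory again.
    public static unsafe void SendPinned(ReadOnlyMemory<byte> bytes)
    {
        using (MemoryHandle handle = bytes.Pin())
        {
            IntPtr ptr = (IntPtr)handle.Pointer;
            // ...pass `ptr` to the native NDI send call here (placeholder)...
        }
        // Leaving the using block unpins the buffer; don't keep `ptr` around
        // beyond this point.
    }
}
```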

@motzi I can make the necessary changes to your library but I'm not sure how - downloading and uploading a zip file seems off? Or is this the way to go here? No repository?


@Elias I actually do have this in a private repo here, because the DLL the VL implementation is based on is a modified example from the SDK. However, the license for the SDK only allows redistribution of headers and redistributable files, therefore I was hesitant at the time. (I might also just write them and ask whether it is ok to open-source the example…)

I'll split VL and the DLL into two projects, make the VL part public and give you access to the DLL as well. It seems the DLL needs an update anyway, as there were some minor changes in the naming within NDI. It just might take a day or so…

sorry for the inconvenience…


@motzi @tonfilm @elias thanks for having a look at this, appreciate your time!

@tobyk: could you try this and tell me if it works for you? it does not contain a sender node yet, but it is updated to the latest NDI version (4.5), which gives improved performance.

VL.IO.NDI_200408.7z (1.4 MB)


@motzi thanks for your quick work! The receiver is working great.

(screenshot)

a super naive sender is just these few nodes:

(screenshot)

i was just surprised that it just worked without any tinkering. of course one does want to tinker a bit to make it async, i guess.


Yeah, the Sender base nodes are still there from the last release. Since i restructured the thing internally quite a bit, i did not bother to create a Sender, as i figured that a receiver would be more important.

However, a release on GitHub is imminent, since the NewTek people responded positively regarding open-source publishing.


I spent some more time on this and have now released everything on GitHub.

Furthermore, here is a ready-to-use version of the whole thing with the latest changes:
VL.IO.NDI_200413.7z (1.8 MB)

For future releases there will be a nuget pipeline that I still have to set up.

The latest changes feature some performance improvements for the Receiver and experimental Sender nodes. Caution with those - they are not synced to any Renderer and the NDI video streams are prone to jitter.

  • Sender: Blocking version that will throttle the patch according to the set frame rate
  • ReactiveSender: This one is clocked by a BusyWaitTimer internally and therefore processes frame sending on a different thread. Beware: closing the patch while still sending video will crash VVVV (I did not figure out how to safely check whether frames/sender are already disposed in the other thread). A rough sketch of the frame-pacing and shutdown idea follows below.
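For what it's worth, here is a rough sketch of the frame-pacing and safe-shutdown idea behind such a blocking sender (plain C#, not the library's actual code; Run, sendFrame and the cancellation handling are assumptions for this example):

```csharp
using System;
using System.Diagnostics;
using System.Threading;

static class SendLoopExample
{
    // Pace frame sending to a fixed frame rate and stop cleanly via the token,
    // so the worker thread never touches an already-disposed sender.
    public static void Run(double frameRate, Action sendFrame, CancellationToken token)
    {
        var frameTime = TimeSpan.FromSeconds(1.0 / frameRate);
        var clock = Stopwatch.StartNew();
        var next = clock.Elapsed;

        while (!token.IsCancellationRequested)
        {
            sendFrame();                 // hand the current frame to NDI here
            next += frameTime;           // schedule the next frame
            var wait = next - clock.Elapsed;
            if (wait > TimeSpan.Zero)
                Thread.Sleep(wait);      // throttle to the target frame rate
            else
                next = clock.Elapsed;    // running late: resync instead of bursting
        }
    }
}
```

Cancelling the token and joining the worker thread before disposing the sender/frames would be one way to avoid the crash-on-close mentioned above.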

The whole Sending topic will need more research (and probably some help from the devs - e.g. is it possible to get sync info from the Renderer?). Also i just noticed there is an async version of the NDI send function.

Anyone interested in joining the development is welcome, as I don't always have time for this. There is still a lot to be done (proper sending, audio and metadata support, timecode, …).

Enjoy!


Now this is also available as a NuGet package.
