Shared textures: how to synchronize two vvvv instances


DX9Ex shared textures are great because they allow one vvvv instance to generate frames (I will call this the server) and another one to display them (the client). This would make it possible to switch patches in a separate instance, keeping the ‘main’ framerate fairly constant.

Now I created two simple patches: one generates frames, and the other displays the texture. The generating patch uses a DX9Texture node to turn its frames into a shared texture.

I was wondering if this is completely ‘safe’. I mean:

  • can I assume that the texture is locked as long as it is being drawn to? Or is it possible that the client displays a half-drawn frame? (I seemed to notice some strange frames, but now they seem to be gone)
  • if it’s a heavy scene, would that mean the server locks the texture for a long time, and would that slow down the client as a result?
  • if so, would we need some kind of multiple buffering mechanism if we were to build something like Syphon for Windows?
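To make the multiple-buffering question concrete, here is a minimal sketch (my own illustration, not vvvv’s or Syphon’s actual implementation) of a double-buffered handoff: the server always draws into the back slot and flips the index only once the frame is complete, so the client never reads a half-drawn frame.

```python
class DoubleBuffer:
    """Hypothetical double-buffer handoff between a frame-producing
    'server' and a frame-consuming 'client'."""

    def __init__(self):
        self.slots = [None, None]   # stand-ins for two shared textures
        self.front = 0              # index the client is allowed to read

    def server_render(self, frame):
        back = 1 - self.front
        self.slots[back] = frame    # "draw" the whole frame into the back slot
        self.front = back           # flip only after the frame is complete

    def client_read(self):
        # Always returns the last finished frame, never a partial one.
        return self.slots[self.front]


buf = DoubleBuffer()
buf.server_render("frame 1")
print(buf.client_read())   # frame 1
buf.server_render("frame 2")
print(buf.client_read())   # frame 2
```

In a real cross-process version the flip would have to be an atomic operation on shared state (or guarded by a named mutex), and a third buffer would avoid the remaining edge case where the client is still reading the front slot at the moment of a flip.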

Who can shed some light on these questions?

I have worked with shared textures for quite a long time now, and I never had this locking stuff. I guess it’s already taken care of inside DirectX, which displays only fully finished textures. If the client is faster and requests the slower server’s texture, it will simply display the last state of that texture without slowing down vvvv. So no need to worry about this stuff.

I have no further experience with this myself, but I quote:


But as stated here, there might be no problem at all if you set your renderer to ‘Discard’.

Not entirely sure, but it seems that textures are always overwritten in place, while other buffers return a new memory area (it would be superb if it worked that way for textures too).
So my guess is that the trick will be to keep the lock as short as possible, maybe by rendering a heavy scene to an intermediate texture first, and then rendering a single quad with that texture to the shared texture.
Is that what already happens when you connect a DX9Texture node to the renderer, or does it render straight to the output texture in a single pass?
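The “keep the lock short” idea above can be sketched like this (an assumption about the approach, not what vvvv actually does; all names are made up): the expensive rendering happens against a private buffer with the shared texture untouched, and the shared texture is held only for the duration of a quick blit-style copy.

```python
import threading

shared_lock = threading.Lock()       # stand-in for the shared texture's lock
shared_texture = bytearray(16)       # stand-in for the shared texture

def render_heavy_scene(private):
    # Expensive part: many "draw calls", all against the private buffer.
    # The shared texture stays unlocked the whole time.
    for i in range(len(private)):
        private[i] = i

def publish(private):
    # Cheap part: the shared texture is locked only for a fast copy,
    # so a client is blocked for as little time as possible.
    with shared_lock:
        shared_texture[:] = private

private = bytearray(16)
render_heavy_scene(private)
publish(private)
```

The lock duration is then proportional to the copy, not to the scene complexity, which is exactly why the intermediate-texture-plus-single-quad pass would help if locking were the bottleneck.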

Just trying to understand what happens, and wondering what would be a good way to make a ‘Syphon for Windows’ one day ;)