let me explain my dream. i wish that i could use a shared memory location to output video from vvvv into resolume avenue, which uses opengl for rendering. my plan is to create an ffgl source plugin that reads from shared memory and renders into a texture.
can anyone tell me if this sounds feasible? what would be the best way to convert the media sample that gets written to the shared location by VideoOut (Shared Memory) (and then read back by the plugin) into a valid opengl texture format?
if anyone else is interested in working on this, great! as you see, i am not very clear on how to pull it together ;)
that should actually be quite straightforward to do, apart from not being too performant in the end…
the VideoOut (DShow9 SharedMemory) simply writes the current DirectShow media sample to the specified named memory location. DirectShow media samples can come in different formats, like YUV, RGB24, RGB32 and many more… i don’t know opengl, but i guess it offers routines for creating textures from memory locations if you tell it what format the memory has (e.g. RGB24). so your ffgl source would only need to read the shared memory location and force its content into an opengl texture. good luck!
i actually talked about this with woei a bit ('cause i’d also like to have a way to stream vvvv to avenue).
he also guessed (iirc) that it might be better to grab the texture directly from the GPU backbuffer; that way you’d spare yourself the detour through CPU RAM and gain performance. didn’t look into it yet, though :-(
wanna join forces, although i know next to nothing about FFGL and OpenGL? :D
i’ve committed an earlier FFGL source plugin i made as a base. note that the Xcode project and linux makefile are not up to date (they should work, but include ghost projects). the vs project should be current.
glad to have you on board!