as far as i know max/msp is quite similar to vvvv (please correct me if i'm wrong), and the mxwendler website (a vj software done in max) states:
"…Why is mxwendler so fast?
MXwendler uses two key techniques: 1: exclusive image manipulation on graphics hardware. Not one single pixel is manipulated in this vj-software…"
what does this exactly mean and how is this achieved?
if i want to have 2 quads next to each other with the same video in them, do i spread the quads' x coordinate, or is it better to have 1 long quad and split the video on it using the transform texture option (scale)? or has this nothing to do with point 1? what about scaling and mirroring videos, and manipulations in general? do i use texture transform or "quad"-transform? a lot of things i don't understand here yet…
“…2: Any video footage is converted on-the-fly into a special hardware-optimized codec, which does not require any further software decompression…”
What does fast mean?
Concerning pure render performance: normal software renderers add up to three layers in 320x240 resolution at typically 20-30 frames per second. MXwendler adds up to 16 layers of this size and still does 60-90 frames per second. 2. Notebook overall performance: if you try to mix two PAL-size videoclips, conventional systems fail because their disks cannot access two DV streams simultaneously. MXwendler can handle these loads because of the special optimized codec…"
is there any way to achieve something similar in vvvv? what sort of codec is this? any ideas?
from your questions it seems that you should rather ask them in the mxw-forums.
but don’t tell them you think mxw is done in max…
"…Why is mxwendler so fast?
what does this exactly mean and how is this achieved?
they are doing everything in shaders. no big deal. jitter, gem, vvvv… also let you use shaders. in vvvv look for the EX9.Effects category.
if i want to have 2 quads next to each other with the same video in them, do i spread the quads' x coordinate, or is it better to have 1 long quad and split the video on it using the transform texture option (scale)?
either will work. depending on your setup you’ll find out which one you’d rather use.
what about scaling and mirroring videos, and manipulations in general? do i use texture transform or "quad"-transform?
there is no general way. you choose whatever suits your needs best.
“…2: Any video footage is converted on-the-fly into a special hardware-optimized codec, which does not require any further software decompression…”
this sounds essentially as if they are loading everything into video ram.
is there any way to achieve something similar in vvvv?
yes, load everything into a Queue (EX9.Texture)
what sort of codec is this? any ideas?
probably they have some more tricks to share. ask them and let us know.
if i want to have 2 quads next to each other with the same video in them, do i spread the quads' x coordinate, or is it better to have 1 long quad and split the video on it using the transform texture option (scale)? or has this nothing to do with point 1?
yes it does. once you've converted the DirectShow video stream into a texture (with the VideoTexture node) and display it on a quad, no matter how, it's already handled by the graphics card. and that is so with any DX9 and EX9 object.
@joreg: i had in mind that i'd read on their page it was done in max. oops… anyway,
is there any way to achieve something similar in vvvv?
yes, load everything into a Queue (EX9.Texture)
how can i calculate how many frames can be loaded into my 512 mb vram? might be a stupid question. can i load more videos into the vram via queue in vvvv and access them right? so what mxw seems to be doing is saving all used frames compressed in vram, right? but what would be the realtime solution? i would have to stream the video once from disk and then it's saved in vram, or could i save sets of videotextures (e.g. two videos saved as textures before) to be loaded into vram? how would i play those saved frames?
i will definitely post some questions in the mxw forum and will let you know how they do it…if they tell me ;)
yes it does. once you've converted the DirectShow video stream into a texture (with the VideoTexture node) and display it on a quad, no matter how, it's already handled by the graphics card. and that is so with any DX9 and EX9 object
so when i read video from disk and then use the VideoTexture node, i on the one hand need the cpu to read and play the video (or what for?) and on the other hand need the gpu to show it, right? but why is playing back video, or multiple videos, such a power-consuming process on the cpu side? just to play the video? i don't have that much of a programmer's background, so there might be some essential things unclear to me… and if i don't have these problems when i load my video into vram like joreg suggests, i suppose what kills my cpu is copying every frame from disk to vram again and again? sorry if these are quite stupid questions…
how can i calculate how many frames can be loaded into my 512 mb vram?
512 MB = 536870912 bytes
you get the bytes of one image like this:
width x height x depth
320 x 240 x 3 bytes = 230400 bytes
536870912 / 230400 ~ 2330 images
2330 / 25 ~ 93 seconds of 25 fps video
in fact it will be fewer because some other things like backbuffers also use some of the video memory.
Memory (DX9) gives you an estimate of currently free vram.
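a minimal sketch of that same arithmetic in python, using the example values from above (512 MB of vram, uncompressed 24-bit RGB frames):

```python
# rough estimate of how many uncompressed 320x240 RGB frames fit into 512 MB of vram
vram_bytes = 512 * 1024 * 1024                  # 536870912 bytes
width, height, bytes_per_pixel = 320, 240, 3    # 24-bit RGB

frame_bytes = width * height * bytes_per_pixel  # 230400 bytes per frame
frames = vram_bytes // frame_bytes              # ~2330 frames
seconds = frames / 25                           # ~93 s of 25 fps video

print(frame_bytes, frames, round(seconds))      # 230400 2330 93
```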
can i load more videos into the vram via queue in vvvv and access them right?
right
so what mxw seems to be doing is saving all used frames compressed in vram, right?
right, and that is what vvvv can't do. you won't get the texture queue to compress your textures. but you could load single frames with FileTexture and set its texture format to one of the compressed formats (DXT1 - DXT5). like this you should get more than 90s of video into your 512mb.
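a quick sanity check of that claim, assuming DXT1's nominal 4 bits (0.5 bytes) per pixel:

```python
# how much 320x240 video fits into 512 MB of vram as DXT1-compressed textures
vram_bytes = 512 * 1024 * 1024
frame_bytes = 320 * 240 * 0.5       # DXT1 ~ 4 bits per pixel -> 38400 bytes per frame
frames = vram_bytes // frame_bytes  # ~13980 frames
print(frames / 25)                  # ~559 s, i.e. well over 90 s of 25 fps video
```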
or could i save sets of videotextures (e.g. two videos saved as textures before) to be loaded into vram?
no. on every vvvv-start you’d have to stream the video once to a queue.
how would i play those saved frames?
use GetSlice (Node) to access individual slices of a queue
once you have videos as textures there is no more cpu-overhead with playback. only gpu, but that will not be a problem. you can easily access hundreds of images of the queue and display them at the same time.
but why is playing back video, or multiple videos, such a power-consuming process on the cpu side? just to play the video?
the eye needs 24 or more frames per second to see smooth movement. one frame of video has, for instance, 640x480 = 307200 pixels. multiplied by 24 that is 7372800 pixels per second; this multiplied by 24 bit (RGB) is 176947200 bit, and that is about 21 MB per second for an uncompressed movie that just has to be moved from the HD to the screen. an HD can provide around 50 MB per second.
so the usual way is to store this large amount of data in a compressed format to reduce the hd load. but this needs quite complex algorithms, which decode the compressed data on the fly during playback. so you can imagine how many operations it takes to calculate those 7 million pixels per second.
two videos simply double that cost. but doubling the video size to 1280x960 means 4 times more pixels! so video size is the critical variable.
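the same bandwidth math as a small python sketch (the 640x480 / 24 fps figures are the example values from the post above):

```python
# bandwidth needed to move one uncompressed 640x480, 24 fps, 24-bit clip from disk to screen
width, height, fps = 640, 480, 24
bytes_per_pixel = 3                                      # 24-bit RGB

pixels_per_second = width * height * fps                 # 7372800 pixels/s
bytes_per_second = pixels_per_second * bytes_per_pixel   # 22118400 bytes/s

print(round(bytes_per_second / 1024**2))                 # ~21 MB/s, vs. roughly 50 MB/s from a typical hd
```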
…and if i don't have these problems when i load my video into vram like joreg suggests, i suppose what kills my cpu is copying every frame from disk to vram again and again?
the idea is to load all your material into memory. if you are using the FileTexture node for this, you can use your ram too. the trick is how, when, and which data you load into the ram… see also: HowTo Prepare Textures
ok, one more number i just calculated. if you have 2 GB ram, using the DXT1 format with FileTexture, you can load around 30 minutes of 320x240 video into 1.5 GB of your ram. this means really random access to 40,000 frames with no delay.
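roughly how that 30-minute / 40,000-frame figure comes about, again assuming DXT1's nominal 4 bits per pixel:

```python
# how much 320x240 DXT1 video fits into 1.5 GB of ram
ram_bytes = 1.5 * 1024**3             # 1.5 GB
frame_bytes = 320 * 240 * 0.5         # DXT1 ~ 4 bits per pixel -> 38400 bytes per frame

frames = ram_bytes // frame_bytes     # ~41900 frames
minutes = frames / 25 / 60            # ~28 minutes at 25 fps

print(int(frames), round(minutes))    # 41943 28
```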
so according to your posts and the link for preparing textures, the best thing for fast multiple video playback would be preparing all videos as a sequence of jpgs and loading them into ram/vram as needed…?
how do i save from FileTextures to ram then, instead of vram? and i guess i could even do both? yippie. this sounds quite good for me - as mixing several videos has always been ram intensive with any software i know, this could solve some of my problems concerning fps and the number of videos/layers…
but i didn’t understand why loading to ram works with filetexture but not with videotexture…?
i tried to export one jpg via Write from a VideoTexture node connected to a 640x480 videostream. the jpeg resolution exported was 1024x512 pixels. so i suppose it would be better to convert my 640x480 footage to 1024x512 jpgs, or should i do it at 512x512 (as i originally meant to do after reading the link above)?
thanks for the help; i'm still not sure if i should start to convert my videos to sequences (as it's a hell of a lot of work), but this is quite an interesting option. cheers.
so according to your posts and the link for preparing textures, the best thing for fast multiple video playback would be preparing all videos as a sequence of jpgs
if you need to jump around in the videos and not only need plain playback, then yes
how do i save from FileTextures to ram then instead of vram?
you cannot choose. directx does that for you. textures visible on screen are in vram, others are only preloaded in ram. make sure you use beta13.1, there was a little bug in beta13 concerning this.
but i didn’t understand why loading to ram works with filetexture but not with videotexture…?
videotexture can only be loaded to vram via queue (ex9.texture). loading videotextures to ram is theoretically possible, but not currently with vvvv.
i tried to export one jpg via Write from a VideoTexture node connected to a 640x480 videostream. the jpeg resolution exported was 1024x512 pixels
VideoTexture has a pin that controls texture size. try the different settings there.
there's only one thing still unclear to me, concerning texture size. if i set the pin you refer to to nonpow2, the output file size is 640x480 (same as the footage). my question is whether i should prepare my 640x480 clips as either 512x512 jpgs (according to How to prepare your textures…) or as 1024x512 (as the FileTexture node would output the jpeg pow2-stretched…), or would both be fine?
for me 512x512 is obviously better because less ram is needed, but i fear losing too much horizontal information from the original clip, compared to 1024x512. any experiences?
i’d love to save all jpgs in 1024x1024, as my renderer will always be on a projector with at least 1024x768 res, but i simply don’t have enough ram for that…wannabewannaram ;)
i already thought it should be fine as i'm normally using 640x480 footage to be scaled up live, and i agree it's enough - but it's not that easy to adapt to the idea of square footage after having it rectangular for so long. and as i've already written, i've been surprised by the FileTexture behavior, which actually scaled my image to 1024x512!?
oh, and hd projectors might be everywhere sooner than i have all my footage converted…
but i need to start trying anyway; will go for 512x512 first and then i'll see. will also have a look into dds, thanks.
i've been surprised by the FileTexture behavior, which actually scaled my image to 1024x512!?
under certain circumstances filetexture will scale your texture to the next power of two in both dimensions independently. so 640 becomes 1024 and 480 becomes 512.
note that content is then stretched within that format. but if you put that texture on a quad with aspect 640:480 (ie. 4:3) your image will look correct.
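for illustration, the rounding-up that FileTexture does can be sketched like this (not vvvv's actual code, just the idea):

```python
def next_pow2(n):
    """smallest power of two that is >= n"""
    p = 1
    while p < n:
        p *= 2
    return p

# a 640x480 image gets stretched to the next power of two in each dimension
print(next_pow2(640), next_pow2(480))   # 1024 512
```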