I’m trying to output a sequence of PNG files using the Stride TextureWriter node, triggered by a frame counter, but I keep running into a massive FPS drop once I start writing the images. Is there any way to work around this?
I’ve tried using the async nodes around different parts of my patch, but they either do nothing or slow things to a crawl. Tweaking the mainloop has little to no effect. The slowdown happens no matter what my input texture source is, from the VideoPlayer using SceneTexture to the Kinect2 pointcloud node using SkiaTexture. I’ve tried writing the files to my internal drive and to my external drive over USB 3.1, both with onboard graphics and my eGPU, and everything gives a similar result.
Here is an example of how I’ve been rigging it; any tips or pointers on how to improve the efficiency would be greatly appreciated! Sequential Texture Writer Test.vl (18.2 KB)
Thanks for the tip/reference! Definitely getting there; at least now I can get it to write only the frames it actually counts. From the produced filenames I can tell it’s skipping about 13 frames when the mainloop is set to 30 fps. In any case, it’s helping me understand async a bit better; hopefully I’ll have time to dig deeper in the coming weeks.
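For anyone following along, here’s the pattern I understand the async approach to be approximating, sketched in Python rather than VL (so purely illustrative; the names and timings are made up, not actual VL/Stride API). The render loop pushes frames into a bounded queue and a background thread does the slow encode/write; when the writer can’t keep up, the queue fills and frames get dropped, which would explain the gaps in my filenames:

```python
import queue
import threading
import time

def run_capture(total_frames, queue_size=4, encode_time=0.005, frame_time=0.001):
    """Simulate a render loop feeding a background image writer.

    encode_time > frame_time means the writer can't keep up, so the
    bounded queue fills and some frames are skipped -- the same kind
    of gaps that show up in the output filenames.
    """
    frames = queue.Queue(maxsize=queue_size)
    written, skipped = [], []

    def writer():
        # Background thread: disk I/O happens here, never in the render loop.
        while True:
            item = frames.get()
            if item is None:          # sentinel: shut down
                break
            time.sleep(encode_time)   # stand-in for the slow PNG encode/write
            written.append(item)

    t = threading.Thread(target=writer)
    t.start()
    for i in range(total_frames):
        time.sleep(frame_time)        # stand-in for rendering one frame
        try:
            frames.put_nowait(i)      # never block the render loop
        except queue.Full:
            skipped.append(i)         # frame dropped instead of stalling FPS
    frames.put(None)
    t.join()
    return written, skipped

written, skipped = run_capture(100)
print(f"wrote {len(written)} frames, skipped {len(skipped)}")
```

The trade-off is exactly what I’m seeing: FPS stays up because the render loop never waits on disk, but any frame that arrives while the queue is full is simply lost.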
Mainly, when compositing and editing videos in vvvv, I’m used to converting everything to sequential images and using the Player (DX11.Texture) node. That was a significantly smaller CPU hit than using the video player nodes, so I could run many videos/image sequences without hitting a bottleneck and with no noticeable latency. I even had a super-stable VJ crossfade/scratcher patch going a few years back using my DJ2GO, and it always worked without a hitch! In addition, the capture process was a lot more deterministic: if I’m using a screen recorder and there’s a stutter, it’s baked into the final print and I have to run the project all over again, but if something writes every frame from the buffer in a non-real-time manner, I don’t have to worry about smoothness or dropped frames.
Also, I do like the idea of keeping maximum quality at every stage, because I know that by the end it will go through a few more passes of compression (lossier than PNG). At the very end I will usually use OBS, especially for sound-reactive work, since I can’t sync audio playback with the writer, but I don’t like to start with OBS encoding… maybe that’s an old way of thinking, as H.264 can produce really good quality as is.
But yeah, for now I did settle on OBS. Stride makes it super easy, since OBS recognizes it as its own independent window (no more full-screening on a second monitor!). I did have to adjust the color space, as YUV420 was messing with the saturation from VL Gamma’s renderer and scene windows; that’s still a bit of a mystery I’m working through, but I must say I’m pleased with the results. Plus, Gamma’s video playback seems much more stable (although it’s been a while since I’ve ventured there with vvvv), and it can at least handle a few videos and some light compositing without issue (well, maybe the occasional phantom crash, but that’s par for the course in most editing software).
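On the YUV420 mystery: my rough understanding (sketched below with the standard BT.601 coefficients; whether OBS applies exactly this pipeline here is my assumption, not something I’ve verified) is that 4:2:0 keeps per-pixel luma but stores only one chroma sample per 2×2 pixel block. Where neighboring pixels have opposing chroma, averaging them cancels color out, which reads as lost saturation on fine detail and hard edges:

```python
# BT.601 RGB -> Y'UV conversion (full-range, approximate coefficients)
def rgb_to_yuv(r, g, b):
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = 0.492 * (b - y)   # blue-difference chroma
    v = 0.877 * (r - y)   # red-difference chroma
    return y, u, v

# A 2x2 block of alternating saturated red and blue pixels,
# i.e. a hard edge or fine detail in the renderer's output.
block = [(255, 0, 0), (0, 0, 255), (0, 0, 255), (255, 0, 0)]
yuv = [rgb_to_yuv(*px) for px in block]

# 4:2:0 stores ONE chroma pair per 2x2 block (here: the average),
# so the opposing red/blue chroma values largely cancel out.
u_avg = sum(u for _, u, _ in yuv) / 4
v_avg = sum(v for _, _, v in yuv) / 4

print("per-pixel chroma:", [(round(u, 1), round(v, 1)) for _, u, v in yuv])
print("stored 4:2:0 chroma:", (round(u_avg, 1), round(v_avg, 1)))
```

That would also explain why switching the color format in OBS (e.g. to a 4:4:4 option) brings the saturation back: full chroma resolution means nothing gets averaged away.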
For now, I’m going to say it’s not a priority for me, but if in the future I can somehow figure out a way to replicate a good NRT Writer and Player (DX11.Texture) node in VL Gamma, that would be just fine by me!