ImagePlayer (frame-based) external sync

Here we go, it's as simple as it gets :)

  • create the correct image sequence by starting the batch file
  • you need a very fast SSD with 3-4 GB per second read speed
  • you need a very fast PC (GPU and CPU)
  • you need vvvv beta and gamma to run the demo
  • a beta patch sends a frame number to localhost
  • the gamma patch receives the data and runs the ImagePlayer
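The beta-to-gamma link described above is essentially just a frame number sent over UDP each render frame. A minimal sketch of that link, assuming plain UDP with a 4-byte integer payload (the actual patches use OSC, and the port here is made up):

```python
import socket
import struct

# "gamma" side: listen for frame numbers on localhost (ephemeral port for the sketch)
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))
port = receiver.getsockname()[1]

# "beta" side: send the current frame number once per render frame
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
frame_number = 42
sender.sendto(struct.pack("!i", frame_number), ("127.0.0.1", port))

# gamma receives the index that would be fed into the ImagePlayer
data, _ = receiver.recvfrom(1024)
received = struct.unpack("!i", data)[0]
print(received)
```

This is just the transport; all the timing questions discussed below happen around it.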

Connect either the internal frame counter in gamma or the frame counter arriving via OSC/UDP. The internal frame counter should result in smooth playback, while the frame indices from UDP will break the system.

My question is, how will you ever get the two numbers to align?


EDIT: it's also very difficult to see potential stutter when all frames are the same.

It's not about the alignment of the two numbers. It is about external sync indices killing the performance.

Regarding identical-looking frames: just look at the vvvv FPS. When the error occurs, it goes down to hell and the ImagePlayer shows very, very high ticks.

oof, yea, now I see what you mean

I have been playing a bit with the mainloops of the sending and receiving patch. I think it is due to a framerate mismatch.

If I up the receiving patch to 60 fps and the sending one to 45 fps, I get a solid 60 fps.
[screenshot]

It seems to be the limit on my machine.

I would try to put the OSC receiver in an async region. I tried it, but I have never done it before and don't have time to figure out how they work, so I didn't get it running.
I have on other occasions experienced improved performance when tweaking mainloop framerates.
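The async-region idea can be sketched outside of vvvv as a background thread that drains the socket, while the render loop only ever reads the most recent index and never blocks on the network (ports, payload format, and timing here are made up for illustration):

```python
import socket
import struct
import threading
import time

latest_frame = -1
lock = threading.Lock()

# the "async region": a socket drained on its own thread
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("127.0.0.1", 0))
port = sock.getsockname()[1]

def receiver():
    global latest_frame
    while True:
        data, _ = sock.recvfrom(1024)
        with lock:
            latest_frame = struct.unpack("!i", data)[0]

threading.Thread(target=receiver, daemon=True).start()

# sender fires three frame indices in quick succession
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for n in (1, 2, 3):
    sender.sendto(struct.pack("!i", n), ("127.0.0.1", port))

# "render loop": poll the shared value with a timeout instead of blocking
deadline = time.time() + 2.0
while time.time() < deadline:
    with lock:
        if latest_frame == 3:
            break
    time.sleep(0.01)

print(latest_frame)
```

The point of the pattern: if two packets arrive within one render frame, the loop simply sees the newest one instead of stalling or processing both.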


Weirdly enough, running both patches at 45 fps is stable.
I would look into async

I still think you don't get the problem. I also used a filter node and an offset to make sure that the sequence 0 1 2 3 4 5 6 from the internal frame counter and the int coming from UDP, LTC… whatever, are exactly the same at the moment in time when they are fed into the ImagePlayer. When filtering values from whatever source, you make them dependent on the internal mainloop.

Björn did a test on his machine and can't reproduce it. That calls for more tests on another machine.

What I did not mention: the problem appears on a dual Quadro A6000 setup with Mosaic (8x 4K) and a Quadro Sync card, which might have an impact on this. I'm currently testing on a similar machine.

It's an edge case and I'm aware that we are pushing the limits. I just want to know what's going wrong here. :)

Two cents (not sure it's relevant for you, but someone might read this in a moment of distress in the future): we had issues in the past reading DDS stacks with M.2 NVMe drives that had poor Random Read performance.

Looks like when reading a big file, the hard drive uses Sequential Read, but when reading multiple “small” files (like a DDS stack), it uses Random Read.

Turns out that despite being an M.2 NVMe (which we assumed to be supercrazyfast), this drive had very poor Random Read performance, which led to it being maxed out 100% of the time (you could see the graph capping at 100% constantly in the performance monitor), resulting in our patch randomly dropping to 2-3 FPS.

Changing the drive to one with good Random Read performance magically solved the issue.

Related article on HowToGeek

Actually, you are onto something in terms of a solution. I tried sending at 30 fps and receiving at 60 fps; that seems to work quite well. I tried interpolating the frame number back up to 60 fps, and that worked.
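The "interpolate back up to 60 fps" step can be as simple as doubling the incoming 30 fps index on the receiver. A toy version of what I assume is meant here (not the actual patch logic):

```python
def upsampled_index(received_index: int, odd_frame: bool) -> int:
    """Map a 30 fps frame index to the matching 60 fps index.

    odd_frame flips on every other render frame of the 60 fps receiver,
    so each received index covers two local frames.
    """
    return received_index * 2 + (1 if odd_frame else 0)

# received indices 0, 1, 2 expand to the local 60 fps sequence 0..5
local = [upsampled_index(n, odd) for n in range(3) for odd in (False, True)]
print(local)
```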

Thanks for plan B!

Regarding the original problem, it is still not clear why this happens in the first place.
I just added a video; please ignore the queue recording in it, there was a glitch. The video is just for demonstration purposes.

I think it might be because the two instances need to be perfectly synchronised when running at the same framerate, to avoid receiving two integers per frame. At least that is what I think.
It is also why I think it might work if you receive the integers in an async region.
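The two-integers-per-frame suspicion is easy to check numerically: with the sender strictly slower than the receiver, each receive frame can only ever see zero or one new index. A toy simulation with idealized, jitter-free clocks (my own illustration, not measured data):

```python
# one second of sender ticks at 45 Hz, bucketed into 60 Hz receive frames
send_times = [n / 45 for n in range(45)]
counts = [0] * 60
for t in send_times:
    counts[min(int(t * 60), 59)] += 1

# every receive frame sees either zero or one packet, never two
print(sorted(set(counts)))
```

At equal rates the sender interval matches the receiver interval exactly, so the slightest jitter flips frames between zero and two packets, which would match the observed instability.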

The vvvv UI tells me the values are perfectly synchronized when looking at the queue output. Under the hood things might look different, and vvvv might have choked on the fast UDP input. But then I tried LTC, which was just 30 fps by default, and I experienced the same behaviour.

I still suspect the execution order of vvvv, over which we only have limited control via patching.

Without having wrapped my head around your setup…what strikes me in your screenshots is that you’re not using a FramePlayer node. Have you read this? Video Synchronization | vvvv gamma documentation

I wonder if what you're missing is a custom implementation of the FramePlayer that's using your custom sync mechanism.

I have briefly tried the built-in sync of the FramePlayer. It was running out of sync immediately and was constantly re-adapting to the server. That also meant huge spikes in the mainloop. I really thought it was just broken.

Heh, that would have been interesting info, because then maybe it just needs a fix. Or maybe it is not broken but just needs specific handling.

It seems quite straightforward in terms of setup. Just tried again with the default values and it behaves the same. It is very unrealistic how fast the frame gap grows between server and client. I would be very happy to use it if it worked.

Would you suggest different settings? I still think the offset growing so fast is weird.

Edit: Sorry, my mistake about the offset. The server was not up to 60 fps. When the server runs at a smooth 60, the offset does not grow so fast. I see it now at a constant positive value, which is also weird. And adapts still trigger pretty frequently and spike big in the mainloop.

My guess: one of the frames simply takes too long. Imagine in the 1st render frame the server tells us to load frame 1. With a preload count of two we will start loading frames 1, 2 and 3. But (and that's maybe not expected here) we will also wait/block until frame 1 is fully loaded. Depending on the wait time there are now two scenarios how the game continues:
a) It was short enough, so in the 2nd render frame the server will ask for frame 2 or 3. Since we had both of them in our preload list already, there's a good chance we will fetch the requested frame quickly enough and start loading frame 4 (or 5).
b) It took ages to grab frame 1, and since the server doesn't wait on the client(s), it already requests a frame much higher than we expected. We'll not have that frame in our preload list, and therefore there's a high chance we end up in the same situation as in the 1st render frame.
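The two scenarios can be condensed into a toy model (my own simplification of the described logic; the preload count and frame numbers are assumptions from this post, not the actual node code):

```python
def preload_set(requested: int, preload_count: int = 2) -> set:
    # requesting frame N kicks off loads for N .. N + preload_count
    return set(range(requested, requested + preload_count + 1))

loaded = preload_set(1)  # server asked for frame 1 -> loads 1, 2, 3 started

# scenario a: the blocking load was fast, server asks for 2 or 3 next -> cache hit
print(2 in loaded and 3 in loaded)

# scenario b: the load blocked for ages and the server raced ahead to e.g.
# frame 9 -> cache miss, we block again, and we are back where we started
print(9 in loaded)
```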

That the ImagePlayer waits on the result it should display was a change last year in March. Can’t remember exactly what the test setup was, but for sure it wasn’t done just for fun. Here is the relevant commit message:

Adds blocking method to TextureReader (Async) which in turn is used by image player to wait for the display frame

  • No white flashes, because the frame we want to see will always be there - we wait on it
  • Better resource utilization - the default pre-load count of 2 now leads to correct results
  • A pre-load count of zero means we always wait on the frame, we see exactly how long it takes to load one video frame. With a count of one we already do most of the work async, but might still see some higher numbers whenever the video frame synchronizes with the render frame.
  • Increases the read buffer size from 128 KB to 8 MB (vs 2 MB in vvvv beta) thereby reducing overall disk usage

Now this part could easily be undone on your end by making a copy of the node, naming it MyImagePlayer, and modifying it to this:
[screenshot]

I'd imagine doing so will lead to white flashes in your playback, but at least it gives your client a chance to catch up. The white flashes could probably be compensated for by displaying the last frame, but it will still lead to visual glitches.

If all of that helps your case, we should probably add a config option to the node for whether or not it should wait on the display frame.

Getting rid of the white flashes was on us last year. It would be pretty ironic if this particular change, which we were happy about, led to this sync issue one year later. Anyway, very interesting, and we will try that for sure. Thanks for the detailed answer.

I tried it this way; it obviously results in white flashes and, surprisingly, in even worse performance. The FPS goes down to even lower numbers than before.

We'll call it a day for now and will sync from the internal frame counter to everything else. Maybe we'll find the time in Berlin to look at this together in detail.