Finally, I'll try to summarize the findings from our session in Berlin.
To briefly outline the problem again: on our last project, the patch graph consumed 10 ms (10k ticks) in idle state, i.e. rendering and computing nothing but itself, which reduced the net computation time available for content to about 5-8 ms.
The question was, and is: how can this be improved without creating a hell of Evaluates, S+H's and the like that makes the patch hard to read and work on?
Spoiler: The sad news is, there isn’t an easy way out. But VL can help.
I have to go a little bit into detail here. If you look at our root patch:
Basically everything above and around the line at the top is just framework patches: reading settings, switching scenes, computing timelines, computing and verifying time, etc. Nothing fancy, but necessary. No big slice counts either, mostly flat logic. And it still adds up to 2.5k ticks.
What to do about that?
Usually you would say that you write plugins to make this kind of computation cheaper. Yet we already have a lot of custom plugins. Look at these boring patches; they don't do anything special:
What makes these patches still slow?
Normally we don't write monolithic plugins that serve one special case; we write generalized node sets, so we can use them in a variety of ways. But in order to use them in a specific scenario, we always need certain helper nodes from v4, like "+", "Sift", "Select", "Switch", "GetSlice", whatever, to feed the plugins the specific data for the specific use case. So we end up with a lot of extra v4 nodes just to make the plugins usable.
This incurs two kinds of overhead.
The first is the overhead every v4 node brings with it: it needs to compute every frame (even if it only has to check whether it has to compute), and even single values are internally handled as spreads. So every v4 node has a cost. The second is even more expensive: the overhead of the plugin interface. The more you break the functionality of your plugins down into smaller, reusable nodes, the more often you have to go in and out of plugin space. That in itself is quite a cost, as I learned; I wasn't really aware of it before.
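Purely as an illustration of where that cost sits (this is not one of our actual plugins; node, pin and value names are invented), even a trivial helper node written against the standard C# plugin interface looks roughly like this:

```csharp
// Minimal sketch of a trivial helper node (invented names, for illustration).
// Evaluate runs every frame, whether or not anything upstream changed, and
// even a single value crosses the plugin boundary wrapped in a spread.
using VVVV.PluginInterfaces.V2;

[PluginInfo(Name = "AddOffset", Category = "Value")]
public class AddOffsetNode : IPluginEvaluate
{
    [Input("Input")]
    public ISpread<double> FInput;

    [Input("Offset", DefaultValue = 1.0)]
    public ISpread<double> FOffset;

    [Output("Output")]
    public ISpread<double> FOutput;

    // Called once per frame by the host.
    public void Evaluate(int spreadMax)
    {
        FOutput.SliceCount = spreadMax;
        for (int i = 0; i < spreadMax; i++)
            FOutput[i] = FInput[i] + FOffset[i];
    }
}
```

Chain ten such helpers around a plugin and you pay the per-node cost and the boundary crossing ten times per frame, even when none of the values ever change.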
This is where VL comes in, and this is also my first big learning from that day in Berlin.
VL is not just another way of writing plugins in the sense of small, abstracted functions. It allows you (as I now know) not only to write plugins as you did in C# - with the same overhead of continuously crossing in and out of plugin space - but it also makes it very easy to move the special cases into plugin space. Just as you would patch this or that specific scenario in v4, you can patch away in VL and let the VL patch, i.e. the compiled code, also do all the special work needed - in this case computing camera switching state, scene switching, clock verification etc. - without leaving plugin space, and return only the finished value to v4.
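To sketch the idea in C# (a hypothetical node with made-up pins and helpers; in practice this logic is patched in VL rather than typed, but the shape of the result is similar):

```csharp
// Hypothetical sketch: the case-specific glue that would otherwise be dozens
// of v4 helper nodes (Select, Sift, Switch, GetSlice ...) happens inside one
// compiled method, and only the finished value goes back to v4.
using VVVV.PluginInterfaces.V2;

[PluginInfo(Name = "SceneState", Category = "ShowControl")]
public class SceneStateNode : IPluginEvaluate
{
    [Input("Tracking Raw")]
    public ISpread<string> FTrackingRaw;

    [Input("Clock")]
    public ISpread<double> FClock;

    [Output("Active Scene")]
    public ISpread<int> FActiveScene;

    public void Evaluate(int spreadMax)
    {
        // One crossing of the plugin boundary per frame instead of one per helper node.
        var clock = VerifyClock(FClock[0]);
        FActiveScene.SliceCount = 1;
        FActiveScene[0] = ComputeSceneSwitch(clock, FTrackingRaw);
    }

    // Stand-ins for the special-case logic that would be patched in VL.
    static double VerifyClock(double clock) => clock < 0 ? 0 : clock;

    static int ComputeSceneSwitch(double clock, ISpread<string> tracking)
        => tracking.SliceCount > 0 && tracking[0] == "camera2" ? 2 : 1;
}
```

The point is not the C# itself, but that in VL you can patch exactly this kind of case-specific logic and still end up with compiled code, so you no longer have to choose between generality and performance.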
This is essentially new for me, as someone who is not so familiar with written code. Plus, for more complex stuff you don't need an external IDE; you can continuously work on and compile your code. It's this 'on the fly' compilation that really makes it possible to get closer to your specific use cases with compiled code.
To put this into figures: after the holidays I started porting the first patches and functionality to VL.
Our 'Clock' patch, previously averaging 250 ticks, is now down to 40.
A specialized patch for receiving and parsing tracking data is also down to 40, from previously over 300.
And I can now see how this pattern extends to many of the other tasks in our projects.
Of course this doesn't apply to all problems, especially not to large-spread number crunching, where the differences get smaller. But it does apply to many of the framework problems of large patches, which was the original question of this thread.
Further up in this thread I said that a feature update for v4 is necessary because VL is not yet ready to take over. Now that I understand VL better, I must say that this is only partly true. VL isn't ready for design work; the problems there have been highlighted and are on the agenda. But in other ways it is very ready to take over. And it is also (quite) usable after two weeks of occasionally investing time in it. It's still not perfect, some things are cumbersome, and it crashes from time to time. But it's mature enough to open the door to taking it into production more and more.
That’s it from me on the VL side of things. I’m hooked now. I can see how it integrates with v4, and how it is superior to it.
But why can’t everybody see that? More on that further down, in an observation about communicating VL.
But before that - what about the other issues of this thread? How can v4 be improved to perform better?
One thing we found is that v4 is actually bad at identifying static branches of the patch graph.
If you look at this patch:
You obviously see the blend, the rasterizer, the transform, the pillow etc. - why do they consume ticks at all? They don't change. It should be possible to detect, from looking at the patch graph alone, that these nodes can't change and therefore don't need to evaluate every frame.
So I hope for better evaluation strategies based on smarter graph parsing. This can be implemented and would save a lot of CPU.
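Just to sketch what I mean (this is not how vvvv's evaluator works internally, only an illustration of a dirty-flag strategy over the node graph):

```csharp
// Illustrative dirty-flag evaluation: a node only evaluates if it was marked
// dirty (e.g. a pin was edited) or one of its inputs changed this frame.
// Static branches are evaluated once and then skipped entirely.
using System.Collections.Generic;
using System.Linq;

public abstract class Node
{
    public readonly List<Node> Inputs = new List<Node>();
    bool _dirty = true;                                 // evaluate at least once
    public bool ChangedThisFrame { get; private set; }

    public void MarkDirty() => _dirty = true;           // e.g. the user edits a pin

    // Called once per frame, in topological order (upstream nodes first).
    public void Update()
    {
        ChangedThisFrame = false;
        if (_dirty || Inputs.Any(n => n.ChangedThisFrame))
        {
            Evaluate();
            _dirty = false;
            ChangedThisFrame = true;
        }
    }

    protected abstract void Evaluate();
}

public class Graph
{
    // Assumed to be sorted topologically, so inputs update before consumers.
    readonly List<Node> _nodes;

    public Graph(IEnumerable<Node> nodesInTopologicalOrder)
    {
        _nodes = new List<Node>(nodesInTopologicalOrder);
    }

    public void Frame()
    {
        foreach (var node in _nodes)
            node.Update();
    }
}
```

With something like this, a constant Transform or Blend would evaluate once at startup and then never again, instead of burning ticks every frame just to find out that nothing has changed.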
The other ideas … about the easy way out. About just sending parts of the graph to another thread. About a multithreaded v4 …
What I’ve learned about these ideas leads to the really important question of communication for the development of VL and v4.
After a really long day (2 a.m.) of looking at our project, trying to understand VL and taking first steps, Sebastian pulled me over to his desk, saying "hey, I've got to show you something". What I saw was a prototype of v4 running on multiple evaluator processes. I was baffled.
After that, Elias took over and showed me a v4 where the interface was running in a separate thread: rendering unaffected by UI interactions.
We talked for a very long time. For both prototypes there are very good reasons - really severe problems - why they are not a reality yet, and probably never will be (one more than the other). I won't give too much away here; I think the devs should explain the details at some point.
But what really struck me is that behind the v4 group's facade of ... silence? ignorance? ... there is actually something going on. The proposals and requirements discussed in this forum are taken seriously and are being investigated. This is not visible from the 'outside', from the user base. And I think this is a crucial thing for the v4 community to understand.
I would describe it as a kind of elitist split between the v4 users and the devs, one that must have grown slowly over the last few years.
I remember times when we marveled at the devs, culminating in celebrated keynote presentations on new features and techniques, with a feeling that they were ahead of their time and knew the right steps to take for the future.
With the focus on the rewrite of v4, known as VL, this confidence faded. Other contributions carried the day for the v4 users. And with an unspoken disappointment about this emptiness, the trust that the devs know what to do also faded - the trust that the right decisions would be made for 'us'.
The visit to Berlin changed this perspective for me; it showed me that the spirit is still there. The development is smart and necessary. It just has a communication problem. How to do that better would be a discussion for a separate thread; I can't cover it here in detail.
One thing would be to disclose experiments, like the ones on a multithreaded v4, and let the community participate in your reasoning about why to take this path and not another. That strengthens confidence.
The other might be to meet people where they are: to show that you can do the same things in VL, rather than showing how VL can solve problems I didn't have before I knew about it.
And last - even though I’m confident again about the development and the steps taken by the v4 group, I still think it would be necessary to broaden the field of development with new people, new forces that focus on specific user requirements and needs, like timeline, render pipeline, asset handling etc. That can happen in parallel, and can probably also be reused in a VL world.
So long. Thanks for reading.
Eno