How to make adaptive process nodes work

I want to implement an adaptive node. As far as I can tell I followed the instructions in the gray book, but the nodes don’t get resolved. The only difference I know of: the gray book only talks about operations, whereas I used processes. Are process nodes not supported?

Tested with 5.0 stable.


AdaptiveTest.vl (18.9 KB)

OK, seems to be the case. Works when using operations.

AdaptiveOperations.vl (14.0 KB)

Adaptive processes are not supported; I recently ran into the same issue.
See @gregsn’s extensive answer here

1 Like

@gregsn

Is it this way just because it was decided so, or is there a technical reason for it?


Let’s say I want to build something like a “TextureFX chain” in Fuse.
I am pretty new to Fuse but afaict once data enters the Fuse world it’s passed around as ShaderNode<T>.

When for example a texture gets sampled the result is ShaderNode<Vector4>.
Now one can pass around that result and operate on it, let’s say invert the colors, so far so good.
But some (a lot) of effects like a Blur need to (multi)sample from a texture, so they won’t work with a ShaderNode input.

Thus one way to build a chain of such effects would be to have them all take a texture input from which they sample to ShaderNode<Vector4>, operate on that value and then output it again as a texture using RenderTexture [Fuse]. But my guess is that going back and forth like this is going to have a performance impact.

Another approach: an implementation for every input/output type combination,
so the user can still make “arbitrary” chains (see the C# sketch after the lists below):

In case of an Invert (which doesn’t need to sample):

  • ShaderNode<Vector4> → ShaderNode<Vector4>
  • Texture → Texture
  • ShaderNode<Vector4> → Texture
  • Texture → ShaderNode<Vector4>

In case of a Blur (which needs to sample from a texture):

  • Texture → Texture
  • Texture → ShaderNode<Vector4>
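
To make that matrix concrete, here is a rough C# sketch of the signatures (type and method names are just stand-ins, not actual Fuse or Stride API):

```csharp
using System;

// Stand-in stubs so the sketch compiles; the real types live in Stride/Fuse.
public class Texture { }
public class ShaderNode<T> { }
public struct Vector4 { }

public static class InvertOverloads
{
    // Invert works per value, so every input/output combination is possible.
    public static ShaderNode<Vector4> InvertShaderToShader(ShaderNode<Vector4> input) => throw new NotImplementedException();
    public static Texture             InvertTextureToTexture(Texture input)            => throw new NotImplementedException();
    public static Texture             InvertShaderToTexture(ShaderNode<Vector4> input) => throw new NotImplementedException();
    public static ShaderNode<Vector4> InvertTextureToShader(Texture input)             => throw new NotImplementedException();

    // Blur must (multi)sample its source, so only the Texture-input variants exist.
    public static Texture             BlurTextureToTexture(Texture input) => throw new NotImplementedException();
    public static ShaderNode<Vector4> BlurTextureToShader(Texture input)  => throw new NotImplementedException();
}
```

Note the distinct method names: plain C# could not even declare InvertTextureToTexture and InvertTextureToShader as overloads of a single Invert, since they differ only in return type.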

Of course this will clutter the node browser and also bring some mental overhead (which version do I need again now?). Hence the idea to use adaptive nodes. But nearly all the Fuse nodes I used during my preliminary experiments were process nodes, i.e. the adaptive approach won’t work, since it requires operations.

So if I understood the thread in the chat correctly, TypeSwitch would be the way to go for different input types. But what about having different types as output?

hey bjoern,

yes, it hasn’t been an arbitrary decision, but a technically motivated one.
It’s been quite challenging technically to design the adaptive node system, and later on, make it possible to even precompile generic adaptive patches into dlls. We were basically ahead of C# with regard to this way of expression. With the 5.0 release, we went one step further and made it possible to instantiate generic types with reflection. All these steps are challenging to realize. But it’s very nice to see that we manage to improve the adaptive system further and further. So, I can imagine that we’ll have adaptive process nodes at some point. Actually, there were already some ideas popping up lately.
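
For context, “instantiate generic types with reflection” refers to the standard .NET mechanism sketched below; this is plain C# to illustrate the capability, not VL internals:

```csharp
using System;
using System.Collections.Generic;

// Close an open generic type at runtime and create an instance of it.
Type open   = typeof(List<>);
Type closed = open.MakeGenericType(typeof(int));        // List<int>
var list    = (List<int>)Activator.CreateInstance(closed)!;
list.Add(42);
Console.WriteLine(list.Count);                          // 1
```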

On the way, there are also new design patterns popping up on how to use the latest feature: Generic interfaces - #7 by Elias
This DynamicModifier node has a very adaptive feel to it already, even though we don’t have adaptive process nodes. The trick is to abstract over the construction of the modifier via the adaptive system. There is CreateModifier (Adaptive)<TModel, T1, T2> and two different implementations for it…

This might be interesting to you, or it might just trick you into some path that doesn’t really work out for you. I just wanted to point to the somewhat related work, so that you can evaluate the options as long as we don’t have adaptive process nodes.

But actually here is a question: Imagine you have adaptive process nodes and you have a chain
Blur → Invert → Blur
The system would ideally figure out that on the second link, we have only Texture as an option. So only two Invert candidates are left: the ones that output a Texture. But how would such a system decide between those two? We would need to tell the system to always favor ShaderNode<Vector4> over Texture. Just for the record: this is yet another feature request. Up to now, the adaptive system would just not be able to decide. You as a user would need to put type annotations in between, and suddenly it would get more complicated compared to not using the adaptive system at all and leaving it to the user to pick the right version.
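
To illustrate with the hypothetical signatures from the sketch further up: both of these chains type-check, which is exactly why the system cannot decide on its own:

```csharp
var source = new Texture();

// Chain: Blur -> Invert -> Blur. The final Blur needs a Texture, and both
// variants satisfy that; only a priority (favor ShaderNode<Vector4> links)
// would make the choice unique.
var viaTexture = InvertOverloads.BlurTextureToTexture(
                     InvertOverloads.InvertTextureToTexture(
                         InvertOverloads.BlurTextureToTexture(source)));
var viaShader  = InvertOverloads.BlurTextureToTexture(
                     InvertOverloads.InvertShaderToTexture(
                         InvertOverloads.BlurTextureToShader(source)));  // no extra render pass
```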

I am not sure how I would put this feature request into words. But certainly, there is some missing part to make your idea work.

Using TypeSwitch at first sounds like a more realistic option. First of all: you know the candidates. And then also: a TypeSwitch lets you prioritize one option over the other. Using a type switch normally means that you use object and decide what to do depending on the actual type of the object flowing at runtime. So in the user patch, you’d have object as the compile-time datatype in order to communicate with the downstream node.
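
In C# terms, such a TypeSwitch boils down to something like this (reusing the type stubs from the earlier sketch; the per-type implementations are placeholders):

```csharp
using System;

public static class InvertNode
{
    // The link carries `object` at compile time; the concrete behavior is
    // picked at runtime from the actual type flowing through.
    public static object Invert(object input) => input switch
    {
        Texture t             => InvertTexture(t),
        ShaderNode<Vector4> s => InvertShaderNode(s),
        _ => throw new NotSupportedException($"Cannot invert {input?.GetType()}")
    };

    // Placeholder implementations.
    static Texture InvertTexture(Texture t) => t;
    static ShaderNode<Vector4> InvertShaderNode(ShaderNode<Vector4> s) => s;
}
```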

However, as you pointed out, you also need to be able to communicate upstream. This is where typically the adaptive system is better. The question is if there is a way to combine those systems.

Just thinking out loud: So we’d have object, Texture and ShaderNode<Vector4> as potential candidates at compile time.

Blur -(object)–> Invert -(Texture)–> Blur

Blur could be adaptive: Texture → T, with implementations
Texture → Texture,
Texture → ShaderNode<Vector4>,
Texture → object.
Its input is always Texture. You want to communicate this upstream.

Invert could be adaptive: object → T, with implementations
object → Texture,
object → ShaderNode<Vector4>,
object → object.
The first candidate would get picked, as it needs to output Texture.

This should determine the overload for the upstream Blur (Texture → object). The implementation for this one would make use of ShaderNode<Vector4>, since this doesn’t require an extra render pass and the downstream doesn’t seem to care.
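
Modeling that resolution as a toy program makes the walk explicit (candidates as plain input/output pairs; this has nothing to do with the real solver):

```csharp
using System;
using System.Linq;

var blurCandidates = new[]
{
    (In: "Texture", Out: "Texture"),
    (In: "Texture", Out: "ShaderNode<Vector4>"),
    (In: "Texture", Out: "object"),
};
var invertCandidates = new[]
{
    (In: "object", Out: "Texture"),
    (In: "object", Out: "ShaderNode<Vector4>"),
    (In: "object", Out: "object"),
};

// The downstream Blur always takes Texture, which fixes Invert's output...
var invert = invertCandidates.Single(c => c.Out == "Texture");
// ...and Invert's input in turn fixes the upstream Blur's output.
var blur = blurCandidates.Single(c => c.Out == invert.In);

Console.WriteLine($"Blur: {blur.In} -> {blur.Out}");        // Texture -> object
Console.WriteLine($"Invert: {invert.In} -> {invert.Out}");  // object -> Texture
```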

Not completely sure, but it sounds like the combination of

  • compile-time type inference + adaptive node picking in order to communicate upstream and
  • the type switch in order to select the right approach at runtime depending on the input

could somehow work out.

And maybe the “adaptive process nodes” that we used here could somehow be realized by what I mentioned above, but it sounds like a lot of work. No matter how you put it, and no matter which features we have or don’t have, it’s going to be tough.

So maybe it’s doable right now. But it would be a bit easier if there were

  • adaptive process nodes
  • type priorities when in doubt

Let’s digest this.


Last but not least: you could of course also invent your own type which flows over all the links and allows the downstream nodes to talk to the nodes upstream in order to tell them what is acceptable or not.

This would be a classic approach where the library doesn’t heavily depend on the language features but comes up with the (runtime) logic itself.

The only annoyance then is typically the conversion nodes needed to enter this world and to leave it again.
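
Just as an illustration, such a hand-rolled type could take a shape like this (all names hypothetical):

```csharp
using System;

public enum FxPayloadKind { Texture, ShaderNode }

// The payload travels together with its kind, and a downstream node can
// request the representation it accepts, so conversions (an extra render
// pass or a sampling step) only happen where unavoidable.
public sealed class FxLink
{
    public object Payload { get; }        // a Texture or a ShaderNode<Vector4>
    public FxPayloadKind Kind { get; }

    public FxLink(object payload, FxPayloadKind kind) => (Payload, Kind) = (payload, kind);

    // The "talking upstream" part: downstream states what it accepts.
    public FxLink As(FxPayloadKind wanted) => Kind == wanted ? this : Convert(wanted);

    FxLink Convert(FxPayloadKind wanted) =>
        // here one would render the ShaderNode to a Texture, or sample the
        // Texture into a ShaderNode: the conversion nodes mentioned above
        throw new NotImplementedException();
}
```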

4 Likes

Thanks a lot.

1 Like

proof of concept:
FXChain.vl (76.2 KB)
It would be great to see some more meaningful values in the tooltips. Can someone come up with a better implementation?

2 Likes

Super thanks

1 Like

Hey @gregsn thanks again.
One more question.
The way it is now, the operations defined in Blur and Invert somewhat “pollute” the node browser.

I tried setting them to Internal.


But unfortunately then they are no longer picked up by the system.


Any way around that?

FXChain.7z (13.4 KB)

1 Like

That’s something which is a bit ugly, true.
So here is another way of doing it, as long as adaptive implementations cannot be hidden:

FXChain.zip (18.6 KB)

1 Like

One more :)

Let’s say I want to use my own data type instead of object; let’s call it TFX. It contains an object (Texture or ShaderNode<Vector4>) and some additional data, for example the “original” texture size (since this info gets lost after being sampled to ShaderNode<Vector4>). I tried to modify your example but I just can’t wrap my head around it…

FXChain.7z (21.6 KB)

In order to make bottom-up type inference work I had to go for a covariant interface, which can only be declared via C#.
This way a consuming node can “talk upstream” and announce that it is OK with any type by using ITFX<object>, which is assignment-compatible with the upstream TFX-providing node, which may expose a more concrete type like ITFX<ShaderNode<Vector4>>.

FXChain_CoVariantInterface.zip (82.8 KB)
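
For reference, a minimal sketch of the shape such a covariant interface can have (assumed; the actual one ships in the attached zip):

```csharp
// `out T` makes the interface covariant: ITFX<ShaderNode<Vector4>> is then
// assignable to ITFX<object>, which is what lets consumers talk upstream.
public interface ITFX<out T>
{
    T Value { get; }
    (int Width, int Height) TextureSize { get; }  // metadata that survives sampling
}

// Usage: a consumer declaring ITFX<object> accepts any concrete provider.
// ITFX<ShaderNode<Vector4>> concrete = ...;
// ITFX<object> anything = concrete;   // OK thanks to covariance
```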

But there are so many topics that get touched on here. I’m not sure if it helps to go on like this and go into all the details; it blurs the topic of this forum thread. For sure the whole topic can be tackled top-down as well, but also bottom-up, with one thread per topic. Would it make sense to make this discussion more structured and more helpful for others by opening new threads?

3 Likes

Thanks a bunch again, and sorry for having kept pestering you.
Imho it’s nice to be able to read through one thread and follow along like that, at least when dealing with (what I’d consider) such an advanced topic.
I wouldn’t even have known how to name the follow-up thread…

Maybe we can change the thread title to:
“Gregsn explains how to build a TextureFX chain in fuse”
That way it becomes kinda like a step-by-step tutorial.

2 Likes

Could such explanations/examples be condensed into a blog post, a Gray Book page or a help patch? Kinda miss the in-depth technical/behind-the-curtains articles :)

You weren’t pestering me. I just felt it was getting a little bit out of hand. But maybe that’s just my perspective. Probably I just want to make you take everything with a grain of salt.

Writing up solutions or patching towards a certain very concrete idea can for sure look as if this were the only or the suggested way to go. While I’d like to argue: I am not really certain about this. I just tried some ideas and wanted to show you some tricks that you can use and combine. Probably it would be more didactic to isolate every trick so it becomes clearer which other solutions exist.

Certainly, at some point, you want to compare the different approaches. And compare the pros and cons of the one or other solution.

Can I ask for a favor? For me to be able to develop a stance on this from a top-down perspective, could one of you try to implement all four Inverts and the two Blurs in a serious attempt to make this functional and maybe even elegant? Make it beautiful, so that we could imagine having those two as a blueprint for every other effect. You surely don’t want to rethink four different implementations for each of your effects.

That would be great in order to be able to evaluate further. Thanks a lot!

@sebescudie probably a good idea. The tricky part is to isolate a technical topic that is also beautiful and fun to dig into. Open to suggestions. But it looks like more and more people are getting into building bigger systems, so those topics seem to be popping up more often nowadays.

1 Like

¡Warning: Rambling ahead!

Maybe it’s time to add a little (= really condensed) backstory. @lecloneur and I are currently looking into adding the iirc 100+ TextureFX he made to VL.Addons, since it doesn’t seem likely that they will all make it into VL.Stride in a timely fashion. We’ve been discussing the how for about 10 days as of now, and it’s not really a straightforward process.

Initially we just wanted to add them as “normal” TextureFX. While exploring this we encountered some issues, for example [1], [2].
Some needed ShaderFX were also missing, and after some more debating we decided to explore patching everything in Fuse instead.
There we encountered the “issue” that we had to go back and forth between Texture and ShaderNode. That’s when I thought about the adaptive approach, which I couldn’t get to work.
You were so kind to supply a solution for that.

Doodling with that, the “polluted node browser” issue came up, which again you solved for us. Finally there?

The next obstacle we hadn’t thought of before: when going to ShaderNode, some info is lost (TextureSize, Format and maybe Mips). So either every node that can output a texture needs some inputs to be able to (re)set that info, or we add some kind of “metadata” (ITFX<object>).
Once again we couldn’t figure it out by ourselves. @gregsn to the rescue.

Now we are here and don’t know how to best deal with other input parameters of type ShaderNode. Meaning that if we want to connect a texture, we have to Sample before (which is what ValueMap/ColorMap is for in Stride). Or we use our ITFX<object>, but then we can’t have any Fuse operators in between.
@lecloneur proposed an approach earlier today but I am not quite sure if he has already implemented something.

Long story short, I don’t know yet where this is going.
But I’ll try to implement the BuildNode and CoVariantInterface approach for Blur and Invert soonish.

3 Likes

You forgot the rambling. But ok.

1 Like