Bug with adaptive math nodes

This is a super weird one:

When connecting an Integer32 to a “-” math node, the latter turns into a “-” (Primitive.Integer32) node:

[screenshot]

However, when the source is a Length (System.IHasMemory) node (which outputs an Integer32), the “-” node assumes Float32 for some reason:

[screenshot]

AdaptiveBug.vl (8.5 KB)

Just a guess, but which Length node is that exactly? Was the Length node adaptive before? I ask because the Adaptive definition of Length is T → Float32. Try connecting a Length node by selecting it directly from its category…

Okay, I once again got caught up in the “Count” vs. “Length” confusion. Of course the proper node to use here would be “Count (Spread)”.

Still, what got me so confused is that the “Length [Math]” node

  • is adaptive
  • outputs the proper spread count as an Integer32 (when connected to a spread)
  • when its output is connected to another adaptive math node (like -/+/…), that node resolves to Float32 instead of Integer32.

So something is quite off here…

EDIT: I changed the topic name to something more fitting.

Interesting one. It looks to me like the collision of two features led to this situation. The first is that the adaptive lookup is written in a rather relaxed way: in this case, the adaptive definition says the output must be a Float32, while the selected implementation outputs an Integer32. If you think of the implementation as a node inside the adaptive hull connected to the hull’s output, this is fine, since we can connect an Integer32 to a Float32.
Now the other feature comes from the way default values are handled. Say you have a node with a default value on one of its inputs. You might expect that the default value also gets fed when the node is selected by the adaptive mechanism. Currently this is achieved by replacing the adaptive node with its implementation once it turns out that, after type unification, all types on it are closed.
It looks like the way this last step is implemented should get re-evaluated.

Let me write it down once more for your specific case: the output of the Length node is Integer32 because the node gets replaced with the chosen implementation (the second feature above). But this replacement happens after unification, so when the system looks for an implementation of the downstream “-” node, it calculates with Float32 rather than Integer32. Swapping out the node after unification is the troublemaker here, leading to these inconsistencies.
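To make the interaction of the two features concrete, here is a minimal Python sketch. This is not the actual VL compiler; the class, the two-phase split, and all names are hypothetical simplifications, but it reproduces the mismatch described above:

```python
# Minimal sketch of the two-phase behavior described above. NOT the actual
# VL compiler -- all names and the phase split are hypothetical.

class AdaptiveNode:
    """An adaptive node: a declared signature plus concrete implementations."""
    def __init__(self, name, declared_output, implementations):
        self.name = name
        self.declared_output = declared_output   # what unification sees
        self.implementations = implementations   # category -> actual output type

# Feature 1: the lookup is relaxed. The adaptive definition of Length
# promises Float32, but the Spread implementation outputs Integer32 --
# fine in isolation, since Integer32 can be connected to a Float32 input.
length = AdaptiveNode("Length", declared_output="Float32",
                      implementations={"Spread": "Integer32"})

def unify(upstream):
    """Phase 1: the downstream adaptive "-" node resolves its type from
    the DECLARED output of the upstream node."""
    return upstream.declared_output

def replace_with_implementation(node, category):
    """Phase 2 (feature 2): once all types on the node are closed, the
    adaptive node is swapped for its implementation, whose actual output
    type may differ from the declared one."""
    return node.implementations[category]

minus_type = unify(length)                                 # "Float32"
wire_type = replace_with_implementation(length, "Spread")  # "Integer32"

print(f'"-" node resolved to: {minus_type}')    # Float32
print(f"Length actually outputs: {wire_type}")  # Integer32
```

Running it prints Float32 for the “-” node but Integer32 for the wire, which is exactly the inconsistency reported in the patch above: the downstream node was typed against the declared output during unification, while the actual value arrives with the implementation’s type after the post-unification replacement.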

We’ll need to discuss this internally. Thanks for bringing it up!
