Editing Framework and DX11: Cursor issue

I’ve recently found some nice use cases where VVVV’s EditingFramework was a real help in improving the user experience when developing applications with VVVV.

However, I found some inconsistencies between the DX9 and DX11 versions that can be solved with workarounds, but a clean out-of-the-box solution would be highly appreciated. Here’s the thing:

When using the Cursor (DX11), it appears not where it should be over the window but rather at the normalized (-1 to 1) position. This makes it really awkward to use when the Renderer has non-square proportions (see attached patch).
cursor_differences_EX9-DX11.v4p (9.1 KB)

Looking inside the node it appears that the Coordinates from the Mouse node behave differently with EX9 and DX11 Renderers (one outputs normalized coordinates, the other one does not).

Of course I could make my own version of the Mouse node incorporating an ApplyTransform but I’d rather ask for a clean (consistent) solution that would work for everybody…
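
For illustration, this is roughly what that workaround boils down to, written as a plain C# sketch (not vvvv API; the method names and the axis convention are my assumptions and depend on how the aspect-ratio transform in the scene is set up):

```csharp
// Plain C# sketch, not vvvv API: map the normalized (-1..1) mouse position
// from square projection space into a scene whose aspect-ratio transform
// squeezes X by height/width (a common convention - the affected axis may
// differ in your setup).
public static class CursorMath
{
    public static (double X, double Y) ToSceneSpace(
        double mouseX, double mouseY,  // normalized -1..1 from the Mouse node
        double width, double height)   // renderer size in pixels
    {
        double aspect = width / height;
        // undo the X squeeze so a quad placed here sits under the pointer
        return (mouseX * aspect, mouseY);
    }
}
```

In a patch this would of course be an ApplyTransform rather than code; the point is just that the correction is a single per-axis scale by the renderer’s aspect ratio.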

Am I overlooking something here?

hey motzi,

thanks for bringing this up again. The comments in the patch suggest that you favor the current ex9 behavior. Note that in earlier vvvversions ex9 behaved much the same as dx11 still does.
Since you normally work with non-square renderers, having the mouse report values in a square coordinate system indeed feels awkward and leads to issues like this. There are for sure different hacks to patch around it, but in the best case the user just doesn’t have to.
That’s the reason why we put some work into getting this right: https://vvvv.org/blog/aspect-ratio-and-projection-space
Since beta36 we have this behavior, with ex9 (as a reference implementation that hardly anybody uses anymore) demoing how it could work out and how things could get simplified.
So now that you can compare the behaviors side by side, it would be interesting to know if we can reach an agreement in the community on whether this issue should be addressed in the dx11 implementation. I once started to code in that direction for dx11, as a proof of concept, but didn’t clean it up properly yet and didn’t make a proper pull request, because it was unclear if it would get accepted in the end. https://github.com/mrvux/dx11-vvvv/issues/332
So i guess the roadmap would be:

  • get an understanding of what the community wants, and depending on the outcome
  • get an understanding of how this could be integrated properly into dx11 (the node set is a bit different, with aspect-ratio nodes …), and discuss the details with vux and other pro users
  • i could then try to port my insights from the dx9 implementation and come up with a proper pull request

So first of all we’d need to collect opinions here. What is your take on it?

This may somehow reflect the proposal I applied with for the LINK Summercamp:
how the editing framework and other “thirdparty extensions” could be of actual help when patching, being integrated more naturally into the software suite without the user having to set up their own framework.
Of course, sometimes this flexibility is quite cool, but you still have to wire everything up to a timeliner, come up with your own 3D scene graph management, point/line editors, binsize everything and make sure the usability is in place … and then disable everything for the production runtime.

So there’s quite a lot of overhead users still have to deal with, as opposed to other software offering more seamlessly integrated tools - while vvvv only ships with the inspektor, which you use to inspect elements and work on your patch.
This is quite cumbersome, as even a sophisticated framework may turn out to be very specific to a single project, so you can’t just reuse it for anything else in the future without customizing it again and again and again …

Maybe it’s a little offtopic, but everyone’s welcome to join the camp to talk about this.

so this probably goes to @vux, with the question: do you see any chance of dx11 adopting this new dx9 way? if not by default, then optionally? e.g. via a separate/new Renderer (DX11 AutoAspect) that behaves differently, or simply an option input on the existing renderers?

@readme:
yes I think it is a bit offtopic so i moved it to this thread: Thirdparty extensions

let’s keep this a thread about aspect ratio related issues

i’d like to bump this up one more time as i find this very important.

tl;dr
from my point of view, the rendering frameworks (EX9 and DX11) should stay as consistent as possible, and my personal preference is the newly proposed auto-aspect-ratio handling in EX9. i’d be very happy if we could see this in DX11 too.

here’s a longer explanation why:
i’ve been teaching VVVV for many years now and also love using it to explain how 3D rendering engines actually work inside. of course, the relevant nodes in VVVV are also abstractions of what really happens inside. but to get an idea of the inner workings of a 3D pipeline, VVVV is a great tool, as it does not distract beginners the way the textual programming languages you would usually need to implement 3D applications do (i’m talking about beginners here who don’t have years of programming experience and might not be as fluent in c++/c#/whatever). imho the VVVV nodes strike a good balance between staying close to the actual concepts and not being too hard to use or understand (which does not mean that beginners have to learn a whole bunch of new stuff).

at some point during my courses, different possibilities of interaction with patches come up, including classic stuff like mouse/touch/… . surely it is not hard to explain why input coordinates are normalized in the -1/1 range and therefore have to be transformed according to the renderer’s aspect ratio (see the small sketch below). i usually argue that this is what you need to do in any 3D engine anyway, because that’s just the way stuff works.
on the other hand, this situation introduces additional challenges that you have to take care of and that keep you away from what you actually want to achieve (building an application using basic input coordinates that behave as you would expect).
so to me it boils down to a question of usability vs. staying as close to “the real thing” as possible, and for this issue i’d vote for usability. i’d argue that moving towards auto-aspect-ratio removes some complexity without hurting anyone. as a comparison, i’m thinking of the (wonderful) DX11 texture filter nodes that actually do a render pass inside without you having to create a Renderer node (like it was with EX9 texture effects - i guess nobody misses that). so here this abstraction hurts no one.
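
to make the “transform by aspect ratio” step concrete, here is a small standalone sketch (plain C# for illustration only, no vvvv API; the Y flip and the axis that gets stretched are assumptions that depend on your conventions):

```csharp
// Illustration only: pixel position -> normalized (-1..1) projection space
// -> aspect-corrected position for a non-square renderer.
public static class InputMapping
{
    public static (double X, double Y) PixelToAspectCorrected(
        double px, double py, double width, double height)
    {
        // pixel -> -1..1, independent of the window shape (square space),
        // assuming Y points up in projection space
        double ndcX = px / width * 2.0 - 1.0;
        double ndcY = 1.0 - py / height * 2.0;

        // stretch X by width/height so on-screen proportions are preserved
        double aspect = width / height;
        return (ndcX * aspect, ndcY);
    }
}
```

as far as i understand it, the auto-aspect-ratio behavior discussed here essentially means the renderer/mouse pair does this correction for you, so the patch never has to.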

though, from an implementation point of view i have no idea what other parts of the DX11 framework would be affected and would have to be refactored by these changes. there might be many arguments for keeping it the old way, which is why i’d be interested in opinions from others as well.
