A post from readme:
This is a tough one.
First of all, I think we still have the chance to improve the editing framework so that it consumes less power, since it mostly only needs to act at rare moments in time. This, of course, is something we need to do anyway. The next step I see would be to make it easier to set up these tools without having to worry about keyboard and mouse devices, for example.
But your main point is that you’re proposing two modes: edit and run. And tools that work with an always available cast system, scene graph, timeline, automata. In the best case, the system would also allow plugging in third-party extensions that add to the set of always available data sets, which users can edit in edit mode and just access in run mode.
I believe there is a chance of coming up with something like that.
If done the wrong way, however, we’d break the openness that we know and value so much (having the option to start with a blank patch).
We also need to make sure that we don’t break modularity by having just one static scene graph, one main automaton (…). So there will be a set of scene graphs, automata (…) that are always there and get managed by the respective editors, which we would then need to refer to from within the patch (to express which scene graph we are interested in). In that scenario, we’d just have a point cloud accessing node in our patch, but the actual editor(s) would be instantiated at some other place.
At first, setting up something like this could get a bit trickier, but it’s probably worth trying.
To integrate things naturally I would say we shouldn’t invent everything from scratch, but try this idea:
- user has two root patches:
- one for editing the project. It includes the globally available editors; they might come with their own windows, or they just render themselves as layers into renderers that are available in both modes. Whatever the flavor: these editors only run when the project is started via this root.
- one for running the project. This root comes with counterparts of the respective tools that access the same persistent data, load it on startup, and provide it in the same way the editor counterparts did.
- other patches in the system ask for specific data sets. Each tool comes with special nodes for that. These could look like: choose a scene with a scene node and an enum; choose a point cloud with a node taking a scene and a string. Or just one node that does both jobs. I guess each tool would do it in the way that fits best.
Having a setup like this would also allow one editor to edit different 3d positions of different elements at the same time (a point cloud here, beziers there). All of these tools would come with the idea that you can freely add casts (like bezier curves, automata states, timeline tracks…) via the UI of the editors, but access the data via some key wherever you need it.
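The "add casts via the editor UI, access them via a key" idea above can be sketched in a few lines. This is a Python stand-in, not vvvv code; all names (DataRegistry, publish, query, the scene and key strings) are invented for illustration.

```python
# Sketch of the key-based access idea: editors publish data sets (casts like
# point clouds, bezier curves, ...) into a shared registry under string keys;
# consumer patches only look the data up by scene + key, they never own it.

class DataRegistry:
    """Central store shared by edit-mode editors and run-mode consumers."""
    def __init__(self):
        self._scenes = {}  # scene name -> {data set key -> data}

    def publish(self, scene, key, data):
        # called by an editor (edit mode) or by a loader (run mode)
        self._scenes.setdefault(scene, {})[key] = data

    def query(self, scene, key):
        # a "point cloud accessing node" in a patch boils down to this lookup
        return self._scenes[scene][key]

registry = DataRegistry()

# an editor adds a cast via its UI ...
registry.publish("MainScene", "PointCloud/Stars",
                 [(0.0, 1.0, 2.0), (3.0, 4.0, 5.0)])

# ... and a consumer node elsewhere picks it up by scene + key
stars = registry.query("MainScene", "PointCloud/Stars")
```

The point of the indirection is that the editor instance and the consuming patch never reference each other directly, only the shared keys.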
What do you think? Would that specification be a solution? Or would the proposed setup make you scream: Thank you! Now it’s even more complicated!
We kind of already have edit and run modes, at least for installations. I run almost all installations I make in /shutup mode, and as such I have to restart my patch to edit it or to see how it runs in /shutup mode (sometimes stuff works a bit differently in the two modes).
So making custom UIs for different tasks would be cool.
Talking VL editors and types: what about inheritance? I know, it’s not something you can just implement quickly, no worries. And I’ve heard @gregsn mention a couple of times that encapsulating the base type does the trick.
Not sure about the specifics @readme is referring to when talking about reuse and customizing. My issues when extending, for example, the editing framework have been that I had to decide between
- changing/adding to the existing types and methods, which always resulted in touching the ‘core’, so after upgrading to another version the changes are gone.
- encapsulating the core nodes, which results in encapsulating literally every node anew, since the types don’t match up (think of a bezier state of type Tuple<bezierknot,color> or something alike).
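The second point, the wrapper-type mismatch, can be illustrated with a tiny Python stand-in (the real framework is VL/C#; CoreKnot, ColoredKnot and the function names here are made up):

```python
# Why encapsulating a core type forces you to re-wrap every core node:
# a function expecting the core type won't accept the wrapper.

class CoreKnot:
    """Stand-in for a 'core' type shipped by the framework."""
    def __init__(self, x, y):
        self.x, self.y = x, y

def core_move(knot, dx, dy):
    """Stand-in for a 'core' node operating on CoreKnot."""
    return CoreKnot(knot.x + dx, knot.y + dy)

class ColoredKnot:
    """User extension: wraps the core type to add a color field."""
    def __init__(self, knot, color):
        self.knot, self.color = knot, color

# core_move can't consume a ColoredKnot directly, so every single core
# node needs a hand-written wrapper like this one:
def colored_move(ck, dx, dy):
    return ColoredKnot(core_move(ck.knot, dx, dy), ck.color)
```

With dozens of core nodes, that per-node wrapping is exactly the busywork the post complains about, and it is what inheritance (or interfaces) would avoid.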
regarding the two root patches approach:
I guess most people already do something similar, trying to evaluate the GUI part only if necessary. A more rigorous decoupling is often not feasible for the workflow: it’s really rare that development of vvvv projects can be cleanly split into application engineering vs content creation. Thus one would end up switching between the two root patches all the time, and with content-heavy productions this means waiting something like 10 minutes every time until vvvv, all the packs and all the assets have (re)loaded.
I also think that reworking these frameworks with respect to extensibility and a cleaner separation between model/state, control (split, read, simple write) and view (fully user-friendly GUI) can already help a lot.
I see… Since you mention the command line argument /shutup: maybe the idea above can be simplified quite a bit.
Let’s say you have a tool Foo that comes with two modes, edit and run. Let’s say it listens to command line arguments, and if it finds /shutup or /FooRunOnly it just loads the user data from disk and makes it available to consumer nodes. Only if it doesn’t find such a command line parameter does it start up in edit mode. As a result you wouldn’t need different roots.
So basically you would place one Foo node in your main patch. It comes with a UI and a configuration of where to store the data to be edited. On startup (no matter the mode) it loads the data and makes it available to other nodes in your subpatches, nodes that just query into the data set of tool Foo. The main Foo node also comes with “Show UI” and “Hide UI” pins, which are just bang pins. Of course you also have a close button on your window. Anyway: this way you don’t need to start up into a different mode. It’s all about the question: is the UI open or not? And even if it is open, it shouldn’t consume your CPU if you don’t actually use it in an editing kind of way.
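A minimal sketch of that single-root Foo tool, in Python as a stand-in for a vvvv node. The flag names mirror the ones discussed (/shutup, /FooRunOnly); the class and its members are invented for illustration.

```python
# The Foo node: decides between edit and run mode from the command line,
# always loads its data, and the only real difference is UI visibility.

class FooTool:
    RUN_FLAGS = {"/shutup", "/FooRunOnly"}

    def __init__(self, argv):
        # run-only if any known flag appears on the command line
        self.run_only = any(a in self.RUN_FLAGS for a in argv)
        self.ui_visible = not self.run_only
        # data is loaded on startup in BOTH modes
        self.data = self.load_data()

    def load_data(self):
        # stand-in for loading the persisted user data from disk
        return {"PointClouds": {}}

    def show_ui(self):
        """The 'Show UI' bang pin."""
        self.ui_visible = True

    def hide_ui(self):
        """The 'Hide UI' bang pin / the window's close button."""
        self.ui_visible = False

# started with /shutup: same data, no editor UI
tool = FooTool(["app.exe", "/shutup"])
```

Note that mode is no longer a property of which root patch you launched, only of the argv the one root was started with.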
Applying this to the editing framework:
- One could probably already patch that (even in pure vvvv). But I can see that you wouldn’t want to do it yourself but have it readily available. You’d also want a slightly more performant set of nodes to build upon if you went for that task. Still, the central idea of having one renderer with a small UI that lets you manage some point clouds, give them a name, edit them and store them in one file could just be a patch that makes the data available via one sender. E.g. it could just send one dictionary under a fixed name “PointClouds”, and somewhere in the subpatches you receive the dictionary and use it to pick up whatever you need.
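The sender/receiver variant boils down to very little code. Here is a Python stand-in where `send`/`receive` play the role of vvvv’s S/R nodes; the channel dict and the data set names are invented for illustration.

```python
# One sender publishes a whole dictionary under the fixed name "PointClouds";
# receivers in subpatches pick out individual entries by name.

channels = {}  # stand-in for vvvv's global S/R channel namespace

def send(name, value):
    channels[name] = value

def receive(name):
    return channels.get(name)  # None if nobody sent under that name

# the editor patch manages all point clouds and sends them at once ...
send("PointClouds", {
    "Stars": [(0, 1, 2)],
    "Grid": [(0, 0, 0), (1, 0, 0)],
})

# ... while a subpatch receives the dictionary and picks one by name
grid = receive("PointClouds")["Grid"]
```

Publishing the whole dictionary under one fixed name keeps the contract between editor and consumers down to a single agreed-on string.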
So, yes, there is room for improvement, but I can see some light at the end of the tunnel. Also, if we got the aspect ratio issue out of the way I’d be thrilled, or at least even more positive about the editing framework and easy-to-use UIs in DX11.
Concerning inheritance in VL… the current idea still is that interfaces are the next thing we want to get right. The abstraction they provide is the key concept, a concept that might help us in the editing framework. We need to have a review session there. Inheriting behavior from a base type in VL is not something we’re planning to do in the near future. Anyway, I made an issue to have a review session on how to use or extend the editing framework.
This topic was automatically closed 365 days after the last reply. New replies are no longer allowed.