So after @schlonzo asked for it in the other thread, and since I am very curious about it as well (as are probably lots of others), I would like to ask for some community effort. There are literally no documentation, tutorials or other resources about Interfaces in VL, and I guess that’s because their use cases can be quite diverse.
Would some of you vvvvery experienced VL programmers be willing to shed some light on this feature? What are typical use cases for using interfaces when structuring an application? Do you have some clear and precise examples when using an interface could be beneficial to a project?
Maybe some of you have already developed a routine of creating an interface for a very specific problem and would be willing to document it in here, including a little patch? My theory is that very interesting insights and discussions could evolve out of this, complementing the thread Community Coding : Design Patterns
For interfaces it’s probably better to read C#-related web pages, or anything about polymorphism, inheritance etc., and probably about the SOLID principles too.
But here is a vvvvery basic example of where it could be useful:
Suppose you have a Box and a Light. Those are two different classes, but they could have something in common: a PositionXYZ property.
In your system there is a place where you want to move things around in 3D space. It doesn’t really matter what those things are as long as they have at least a PositionXYZ. So you create an interface called IPositionable with one property definition called PositionXYZ.
Every object you want to move around, or which you would consider IPositionable, should implement the interface. Your system can then use their PositionXYZ property no matter their original concrete type (Box, Cone or Light…).
Now the part responsible for animating just needs to know and work with IPositionable objects, not Box or Light specifically. Later, when you add new positionable objects such as Cone, Cylinder or Sphere, which all implement the interface, the animation system is not affected because it still only deals with IPositionable.
On one side you can create many IPositionable objects; on the other you can define how they move around. Those two sides are not “tightly coupled”, because there is an interface in between.
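Since a VL patch doesn’t paste well into a forum post, here is the same idea as a minimal Python sketch. The `abc` module stands in for VL’s interface definitions, and `get_position`/`set_position` approximate the PositionXYZ property; only the names IPositionable, Box and Light come from the example above.

```python
from abc import ABC, abstractmethod


class IPositionable(ABC):
    """Anything that has a PositionXYZ, regardless of what else it is."""

    @abstractmethod
    def get_position(self) -> tuple[float, float, float]: ...

    @abstractmethod
    def set_position(self, pos: tuple[float, float, float]) -> None: ...


class Box(IPositionable):
    def __init__(self):
        self._pos = (0.0, 0.0, 0.0)

    def get_position(self):
        return self._pos

    def set_position(self, pos):
        self._pos = pos


class Light(IPositionable):
    def __init__(self):
        self._pos = (0.0, 0.0, 0.0)

    def get_position(self):
        return self._pos

    def set_position(self, pos):
        self._pos = pos


def move_up(things: list[IPositionable], dy: float) -> None:
    """The moving side only ever sees IPositionable, never Box or Light."""
    for t in things:
        x, y, z = t.get_position()
        t.set_position((x, y + dy, z))


scene: list[IPositionable] = [Box(), Light()]
move_up(scene, 1.0)
print([t.get_position() for t in scene])  # [(0.0, 1.0, 0.0), (0.0, 1.0, 0.0)]
```

Adding a Cone later just means writing another class that implements the interface; `move_up` never changes.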
InterfaceExample.vl (15.5 KB)
With interfaces I have basically three use cases:
First is where I want a bunch of objects with different behaviours to be accessed through (literally) a common interface.
@Lecloneur gives a good example, I’ll give another one.
- For example imagine a 2D video game where your character can run over objects.
- The objects have very different behaviours when your character collides with them: some deal damage, some explode, some boost health, some are terrain, like stairs going between levels.
- The player agent runs the collision logic and should always check whether there is anything at the current player position to collide with.
- So all the different objects share an interface iCollidable and I can provide a spread of iCollidables for the player agent to check.
- iCollidable can have several operations relevant for collision checking, e.g. GetPosition, needed for checking the position, and DoCollision, which actually runs the collision.
- Not every pin of the interface operations is actually used by every implementor. For example, the health-boost and damage pickups need to act on the player, so DoCollision has an input pin for the player instance. But pickups that affect the world (run over this button to open the door) operate on the world object; although they still have a playerInstance input pin, they don’t use it.
- Some of these objects are used by multiple systems and so have multiple interfaces, e.g. objects that respawn also implement the interface iSpawnable. That’s tracked by a different process that runs timers to respawn the objects, or respawns them all if the player respawns.
Some examples from an unfinished personal project. Pickup_Pellet and Usable_Door are both interactables.
(They share one interface because in this case some objects react both to being stood on and to being used.)
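The bullet points above can be sketched in textual code. In this hedged Python version only the operation names GetPosition and DoCollision come from the post; Player, World, HealthPickup and DoorButton are invented stand-ins for the game objects described:

```python
from abc import ABC, abstractmethod


class Player:
    def __init__(self):
        self.health = 100
        self.position = (0, 0)


class World:
    def __init__(self):
        self.door_open = False


class ICollidable(ABC):
    @abstractmethod
    def get_position(self) -> tuple[int, int]: ...

    @abstractmethod
    def do_collision(self, player: Player, world: World) -> None:
        """Every implementor gets both pins; not all of them use both."""


class HealthPickup(ICollidable):
    def __init__(self, pos):
        self._pos = pos

    def get_position(self):
        return self._pos

    def do_collision(self, player, world):
        player.health += 25          # acts on the player pin only


class DoorButton(ICollidable):
    def __init__(self, pos):
        self._pos = pos

    def get_position(self):
        return self._pos

    def do_collision(self, player, world):
        world.door_open = True       # ignores the player pin entirely


def run_collisions(player, world, collidables):
    """The player agent only knows the interface, not the concrete types."""
    for c in collidables:
        if c.get_position() == player.position:
            c.do_collision(player, world)


player, world = Player(), World()
run_collisions(player, world, [HealthPickup((0, 0)), DoorButton((5, 3))])
print(player.health, world.door_open)  # 125 False
```

The collision loop stays unchanged no matter how many new iCollidable types are added later.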
Second use case is making a more specific type for casting, when I would otherwise cast to Object.
- For this, imagine an editing UI where I click on objects and they open in an attributes window, e.g. Photoshop/Unity/CAD software.
- There are different kinds of objects to click: vectors, images, primitives and so on.
- The selectedObject property could be of type Object; then it could take anything.
- But my code is more readable if I have an iSelectable interface and the selectedObject property can only take iSelectables.
- And theoretically I’m more protected from making errors and getting run-time exceptions from accidentally putting the wrong kind of object in there, though on most medium-sized single-developer projects it’s unlikely that I’d mix that up.
(The next step would be to implement use case 1 for common attribute widgets: iPositionable means a position UI appears in the attributes window, iScalable means a scale UI also appears, etc.)
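A quick sketch of this second use case in Python, where only the names ISelectable and selectedObject follow the post; Editor and VectorShape are invented stand-ins for the editing UI:

```python
from abc import ABC, abstractmethod


class ISelectable(ABC):
    """A narrower type than Object for things that can be selected."""

    @abstractmethod
    def describe(self) -> str: ...


class VectorShape(ISelectable):
    def describe(self):
        return "vector shape"


class Editor:
    def __init__(self):
        self.selected_object: ISelectable | None = None

    def select(self, obj: ISelectable) -> None:
        # With an Object-typed slot this check would not exist, and a wrong
        # object would only blow up later, at the point of use.
        if not isinstance(obj, ISelectable):
            raise TypeError("only ISelectable objects can be selected")
        self.selected_object = obj


editor = Editor()
editor.select(VectorShape())
print(editor.selected_object.describe())  # vector shape
```

The point is readability and an early, clear error rather than a late run-time exception.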
Third use case is wrapping common behaviour of objects in an external library, either because the library blocks you from referencing the common type, or because you think the higher-type variations could usefully be accessed through a single interface.
I only had this once so far.
VVVV gamma doesn’t have inheritance but it’s fairly common in an external library that a bunch of similar objects are made through inheritance.
For example SFML audio has “Music” and “Sound” classes that are both players.
They inherit from SoundSource and both have all operations of SoundSource. But SoundSource is not a public class that I can create in vvvv.
Plus, the Sound and Music classes have unique operations that take unique input types to load data. For these I CastAs back to their concrete types.
So I made my own interface iSFMLAudioPlayer that covered their common behaviours.
You can see this in VL.GameAudioPlayer
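The wrapping idea can be sketched like this. Note that LibSound and LibMusic below are invented stand-ins, not the real SFML classes, and IAudioPlayer only loosely follows the iSFMLAudioPlayer described above; the point is hiding two similar-but-different library APIs behind one interface:

```python
from abc import ABC, abstractmethod


class LibSound:
    """Stand-in for a library class that plays a loaded sound buffer."""
    def __init__(self):
        self.playing = False

    def play(self):
        self.playing = True


class LibMusic:
    """Stand-in for a library class that streams from a file."""
    def __init__(self):
        self.playing = False

    def start_stream(self):        # note: a different method name than LibSound
        self.playing = True


class IAudioPlayer(ABC):
    """Common behaviour, in the spirit of iSFMLAudioPlayer."""

    @abstractmethod
    def play(self) -> None: ...

    @abstractmethod
    def is_playing(self) -> bool: ...


class SoundPlayer(IAudioPlayer):
    def __init__(self, sound: LibSound):
        self._sound = sound

    def play(self):
        self._sound.play()

    def is_playing(self):
        return self._sound.playing


class MusicPlayer(IAudioPlayer):
    def __init__(self, music: LibMusic):
        self._music = music

    def play(self):
        self._music.start_stream()   # adapt the differing API here

    def is_playing(self):
        return self._music.playing


players: list[IAudioPlayer] = [SoundPlayer(LibSound()), MusicPlayer(LibMusic())]
for p in players:
    p.play()                         # one call site for both library types
```

Loading data would still go through the concrete wrapper types, which matches the CastAs step mentioned above.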
And one last thing:
I often run into a small UI bug with the interface list. Sometimes it won’t update with a recently created interface until you pick something in the list.
One more thought:
One should also consider the negative side.
- An interface is an extra conceptual abstraction that makes programming your software more complicated, with no direct benefit to end users.
- Currently the UX of interfaces is quite unpolished in gamma. Once an object is being passed around as an iType, it’s a step harder to peer into what’s going on inside. For example, if you middle-click to open iType.Operation you see the interface definition, not the actual running node.
- You’ve got to cast objects from their iType back to their original type, and that introduces the potential for run-time exceptions. (Simple fix: use CastAs and only execute downstream code on Success.)
Given the negatives, for a simple case it’s always worth considering not using an interface, and instead just making one class that has a Mode boolean or index that indicates it should behave in different ways. (Or you can use delegates, but … IMHO they are more difficult to deal with than some stacked IF regions that would work the same for most use cases.)
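That interface-free alternative could look like this. Mover and its two modes are invented purely for illustration; the branch plays the role of the stacked IF regions mentioned above:

```python
import math


class Mover:
    """One class with a mode index instead of several classes behind an interface."""

    def __init__(self, mode: int):
        self.mode = mode  # 0 = linear, 1 = circular

    def step(self, t: float) -> tuple[float, float]:
        # In VL this branch would be some stacked IF regions.
        if self.mode == 0:
            return (t, 0.0)
        if self.mode == 1:
            return (math.cos(t), math.sin(t))
        raise ValueError("unknown mode")


print(Mover(0).step(2.0))  # (2.0, 0.0)
```

The trade-off: no casting and fewer moving parts, but every new behaviour means editing this one class rather than adding a new implementor.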
If I share a particle system and build it solely on classes/records, you would have to alter my nodes to add the custom things you need.
If my particle classes implement IParticle, and the operations are patched with type IParticle instead of the class MyParticle, you could just create YourParticle implementing IParticle and plug it into the same system.
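The sharing argument above in sketch form; MyParticle and YourParticle are the names from the post, while `simulate` is an invented stand-in for the shared system’s operations:

```python
from abc import ABC, abstractmethod


class IParticle(ABC):
    @abstractmethod
    def update(self, dt: float) -> None: ...


class MyParticle(IParticle):
    """Ships with the shared system."""
    def __init__(self):
        self.age = 0.0

    def update(self, dt):
        self.age += dt


class YourParticle(IParticle):
    """Added later by someone else, without touching the system."""
    def __init__(self):
        self.age = 0.0
        self.spin = 0.0

    def update(self, dt):
        self.age += dt
        self.spin += 10 * dt     # custom behaviour the original never knew about


def simulate(particles: list[IParticle], dt: float) -> None:
    for p in particles:
        p.update(dt)             # only IParticle is referenced here


particles: list[IParticle] = [MyParticle(), YourParticle()]
simulate(particles, 0.1)
```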
In a collaborative setting, interfaces can act as contracts/blueprints, allowing patches to stay compatible without getting in each other’s way.
A class can implement multiple interfaces, therefore you can act on different instances grouped by their interfaces.
e.g. MyFiletexture: ITexture, MyVideo: ITexture, IPlayable & MyAudio: IPlayable, ISound
You can have a list with all the types mixed, and in one loop draw all images & videos, and in another play/pause all videos/audios.
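A sketch of that multi-interface grouping, following the class names in the example above (ISound is omitted to keep it short, and the draw/play bodies are invented placeholders):

```python
from abc import ABC, abstractmethod


class ITexture(ABC):
    @abstractmethod
    def draw(self) -> str: ...


class IPlayable(ABC):
    @abstractmethod
    def play(self) -> str: ...


class MyFiletexture(ITexture):
    def draw(self):
        return "draw image"


class MyVideo(ITexture, IPlayable):      # implements both interfaces
    def draw(self):
        return "draw video frame"

    def play(self):
        return "play video"


class MyAudio(IPlayable):
    def play(self):
        return "play audio"


everything = [MyFiletexture(), MyVideo(), MyAudio()]

# one loop draws all images & videos ...
drawn = [x.draw() for x in everything if isinstance(x, ITexture)]
# ... another plays/pauses all videos & audios
played = [x.play() for x in everything if isinstance(x, IPlayable)]

print(drawn)   # ['draw image', 'draw video frame']
print(played)  # ['play video', 'play audio']
```

MyVideo shows up in both loops because it implements both interfaces, exactly as in the example above.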