VVVV Particles Library

What about joining our vvvvser skills and building a proper particle plugins/modules library to get an advanced new way to manage particles in vvvv?

i have a dream… ahahah

no joke, i think we could make giant steps in making particle systems easier to patch in vvvv.

Here i’ll explain my ideas of how this library could be. please feel free to join the discussion! any idea is welcome.

If you are interested in this “project”, i highly recommend reading these previous posts where this idea comes from:


As you can see looking into the ParticlesGPU library, i’m interested in using the GPU for particle systems;
the great point of a particle system on the GPU is that you can have a huge amount of particles.
the bad point of the GPU is that it is not as flexible and versatile as the CPU.
this means you will not easily be able to integrate a GPU particle system in any vvvv scenario and use it for purposes other than rendering particles on the screen.
instead, using a CPU particle system you have spreads of data and you can use them in any way, controlling any other thing in your patch (whatever you can do with a spread of numbers).
that’s why both CPU particles and GPU particles are interesting.

What to achieve:

  • a complex particle simulation is the result of many different functions working together and generating a unique complex behaviour for each particle. The main idea is to build specific plugins/modules that can work together and can be easily combined in order to obtain complex behaviours.
  • In order to allow these plugins to work together, we need to decide a system of rules, a PROTOCOL, to keep in mind while writing/patching plugins/modules in order to be sure the new plugin/module will be perfectly integrated with the other features.

Main Ideas:

Shared sources

In order to allow multiple behaviour plugins to work together they need to have access to all the data needed. This means that all the “feedback” information (i mean the data from the last frame. can anybody tell me a proper term for these data?) must be stored in the patch, using holy framedelay nodes to retrieve the data of the last frame evaluation. an example:
here the initial current position of particles comes from that framedelay on top and all the plugins on bottom can read these data.
In a complex system we could have more than xyz position data for each particle. there could be also XYZ Velocity, Color, LifeTime, XYZ Scale, … in this case we just need to create extra framedelay loops to store all the data we need.
all the behaviour plugins will retrieve from these cycles just what they need for evaluation.

here i built separate framedelay buffers but we could combine them into a unique nD vector buffer.
i pointed out in green the plugins/modules of the system; each one retrieves just the data it needs.
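The cycle idea above can be sketched in ordinary code (all names here are made up for illustration): the per-particle state from the last frame plays the role of the framedelay buffers, and each behaviour module reads only the fields it needs.

```python
# Sketch of the "framedelay cycle" idea: per-particle state (position,
# lifetime, ...) is kept from the previous frame and every behaviour
# module reads only the fields it cares about.

def step(state, behaviours):
    """Advance the particle system one frame.

    state      -- dict of per-particle attribute lists from the last frame
                  (plays the role of the framedelay buffers)
    behaviours -- functions that read parts of the state and write updates
    """
    new_state = {k: list(v) for k, v in state.items()}  # copy last frame
    for behave in behaviours:
        behave(state, new_state)  # each module touches only what it needs
    return new_state

def drift(old, new):
    # a module that only needs "position": move every particle right
    new["position"] = [x + 0.1 for x in old["position"]]

def fade(old, new):
    # a module that only needs "lifetime": age every particle
    new["lifetime"] = [t - 1 for t in old["lifetime"]]

state = {"position": [0.0, 1.0], "lifetime": [10, 5]}
state = step(state, [drift, fade])
print(state)  # {'position': [0.1, 1.1], 'lifetime': [9, 4]}
```

Adding another cycle (Color, Scale, …) is just another key in the state dict; modules that don’t need it never look at it.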

Plugin Modularity

  • each plugin must be “minimalistic”, in the sense that it does just specific tasks. they can be integrated in the “framework” patch of the particle system, where all the data needed for evaluation is stored.

  • Input/Output of plugins/modules must refer to a standard syntax.
    eg. “Position”, “Velocity”, “Scale”, “Color”, “LifeTime”, …
    devs must try to use just these common data types in order to increase connectivity.
    as you can see in this example, all the behaviour plugins read the same Position data and output Velocity data.
    i didn’t label it “XYZ Position” or “XY Position” because the number of dimensions is just a configuration of our system. Ideally plugins should work in both 2D/3D scenarios.
    eg. look at the ForceField plugin in this example: it has a 2D/3D toggle that tells the plugin if we are in 2 or 3 dimensions.

In this particle system there’s just the Position Cycle. In an advanced simulation we’ll probably also need a Velocity Cycle to control accelerated movement. As i showed before, we’ll just add a Velocity Cycle to our patch and all the plugins that require a Velocity input will get it from there.
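A rough sketch of that protocol in plain code (function names are invented): every behaviour reads the shared Position data and returns a Velocity contribution, and a final integrator applies newPosition = position + sum of velocities.

```python
def attract_to_origin(positions):
    # behaviour: velocity pointing back toward the origin
    return [-0.25 * p for p in positions]

def constant_wind(positions):
    # behaviour: a uniform push, independent of position
    return [0.5 for _ in positions]

def integrate(positions, behaviours):
    # the integrator: sum all Velocity contributions per particle,
    # then add them to the Position from the cycle
    velocities = [behave(positions) for behave in behaviours]
    return [p + sum(v[i] for v in velocities)
            for i, p in enumerate(positions)]

positions = [1.0, -2.0]
print(integrate(positions, [attract_to_origin, constant_wind]))  # [1.25, -1.0]
```

Any new behaviour that respects the same Position-in/Velocity-out convention can be dropped into the list without touching the rest.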

damn… i’ve to go… i’ll continue this post later

in the meantime feel free to comment and share your ideas/critics

why can i only dream stupid things?

your dreams come …get hacked, If Natan’s graphic card stays alive

I report the post gregsn made in this thread forum/perfect-plugins-integration-into-vvvv-workflow.
it’s useful to have it here. look there for working links.


i know, one single brilliant node can be perfect on many sides (also performance), but in a super customizable and evolving environment like vvvv, modularity is the key to keep things growing… :)

the namings that you exposed completely make sense!
…finally i learn some terms!!! i was tired of inventing improper descriptive names… :D

in these ideal plugins i used a Velocity output and not Position because Velocity seems to me the most raw and significant evaluation data from those algorithms.
eg. the Predator plugin just outputs the 2D/3D vector the boids should follow. the new position data is just a consequence of applying this Velocity vector to the previous position. My intent was to reduce the number of plugin inputs/outputs as much as possible, so that it’s clear what they actually do.

in the end it’s up to us to decide the approach:
-make plugins/modules more userfriendly: they expose the output in many different forms (velocity, position, …).
not the perfect solution on the performance side (the plugin has to prepare all the outputs even if they are not used)
-make plugins/modules more minimalistic: they expose just the raw data coming from the main algorithm; it will be up to the user, depending on his needs, to elaborate the output (in patch) and get other kinds of data from it (like getting the newPosition from velocity: statePosition+velocity=newPosition using a “+” node as in the examples i did before).
better on the perf side: they evaluate just what’s necessary…

i’m for the second approach, because i like the idea of having behaviour operators as minimal/clean as possible (as if they were simple vvvv nodes), but i’m open to your considerations

I think it’s time to start with something: in the next days i’ll prepare some example patches and modules, so everyone can see better what we are talking about and can join the discussion/development.

thanks to all and looking forward to hearing your ideas :)

hey dottore!

well you could also only work with positions skipping the velocities.

Brother Behaviours

when all behaviours should contribute in the same way you could just blend them at the end (add them up and divide by the count of positions). the result would be a bit different from adding up the velocities.

newpos = (pos1 + pos2 + pos3)/3 = (oldpos + vel1 + oldpos + vel2 + oldpos + vel3)/3 = oldpos + vel1/3 + vel2/3 + vel3/3

your velocity based approach would however result in
newpos = oldpos + vel1 + vel2 + vel3
which might be more correct?
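The difference between the two schemes can be checked numerically (arbitrary example values):

```python
oldpos = 1.0
vel1, vel2, vel3 = 0.25, -0.5, 0.75
pos1, pos2, pos3 = oldpos + vel1, oldpos + vel2, oldpos + vel3

# position blending: each behaviour's contribution is divided by the count
averaged = (pos1 + pos2 + pos3) / 3

# velocity accumulation: contributions add up at full strength
summed = oldpos + vel1 + vel2 + vel3

print(summed)    # 1.5
print(averaged)  # oldpos + (vel1 + vel2 + vel3)/3, a weaker total push
```

So averaging positions effectively scales every velocity by 1/count, which is why the two results drift apart as more behaviours are added.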

Hierarchic Behaviours

i am not so deep into particle animations, but i think that sometimes you want a hierarchical behaviour to be able to prevent particles running into obstacles. when dealing with positions only it is easier to patch those scenarios.

switching between both scenarios can be done faster. you don’t have to resort your + nodes all the time. you typically wouldn’t need them at all since input and output are compatible.

if we also added a new node for the blend operation we could for sure implement the same behaviour as in the velocity approach. we would just need to also input the oldpos.

newpos = oldpos + a*(pos1 - oldpos) + b*(pos2 - oldpos) + c*(pos3 - oldpos)

this might end up as a brother to Mixpose (Skeleton).
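A minimal sketch of such a blend node (hypothetical, not an existing vvvv node): with all weights set to 1 it reproduces the velocity approach, since pos_i - oldpos is exactly vel_i.

```python
def blend(oldpos, positions, weights):
    # newpos = oldpos + sum of weighted offsets from oldpos
    return oldpos + sum(w * (p - oldpos) for p, w in zip(positions, weights))

oldpos = 1.0
positions = [1.25, 0.5, 1.75]   # i.e. oldpos + each behaviour's velocity

print(blend(oldpos, positions, [1, 1, 1]))        # 1.5, same as summing velocities
print(blend(oldpos, positions, [1/3, 1/3, 1/3]))  # plain position averaging
```

So the weights interpolate continuously between the two approaches discussed above.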

sorry for that. i am not even sure what i like better. i just wanted to show that the position approach has its advantages as well.

and thanks for your great work!!!

just another nasty question: how about a 1D case? we might get some nice framebased filters with that. (not timebased like damper…)

and i just need to add my excitement that we will be able to filter out old particles that lived too long. all just by doing the state handling in the patch…
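That culling could look roughly like this (a made-up sketch; in the patch it would be a filter on the state spreads): particles whose age passed a maximum are simply dropped before the state is fed back into the cycle.

```python
MAX_AGE = 3  # frames a particle is allowed to live (arbitrary limit)

def cull(positions, ages):
    # keep only the slices whose age is still within the limit
    survivors = [(p, a) for p, a in zip(positions, ages) if a <= MAX_AGE]
    if not survivors:
        return [], []
    pos, age = zip(*survivors)
    return list(pos), list(age)

print(cull([0.0, 1.0, 2.0], [1, 5, 3]))  # ([0.0, 2.0], [1, 3])
```

Because the whole state lives in the patch, killing a particle is nothing more than removing its slice from every cycle at once.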

just for the reference: daves (dead) https://discourse.vvvv.org/t/6430

haven’t had a look at the code. but i’d also say that particle behaviours are hierarchical. and this is exactly the point, where coding everything into one plugin is easier than patching it. which sometimes even appears impossible, since the whole patch is calculated within one frame.

you have a swarm (particles), predators and obstacles. i’d guess neither particles nor predators should crash into obstacles.
in one frame
first the predators must avoid the obstacles
then the particles must avoid the obstacles, then predators then neighbours.

my guess: calculating these 3 things independently can produce wrong results.
one particle is on top of an obstacle (nearly touching), which results in an upwards vector; the neighbour calculation results in a downwards (slightly left) pointing vector; the predator calculation results in a downwards (slightly right) pointing vector.
down left + down right + up -> down.
so the particle will crash.

don’t get me wrong. i completely agree, that contributions should be as modular and functional as possible.
but i guess some things will not be possible until the states and some of the things about the loop described in vvvv-as-a-language are implemented.

I don’t see the problem of having hierarchical behaviours in the patch.

as gregsn also said, you can just connect them in a chain instead of in parallel and you will get priorities. it really depends on how you connect things.
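A tiny numeric illustration (invented behaviours) of how the wiring alone changes priorities: in parallel the contributions are blended, while in a chain the last module gets the final word.

```python
def avoid_obstacle(p):
    # hypothetical hard constraint: never go below 0
    return max(p, 0.0)

def seek_target(p, target=-1.0):
    # move halfway toward a target that lies below the obstacle
    return p + 0.5 * (target - p)

p = 0.2

# parallel: both read the same input and the results are blended
parallel = (avoid_obstacle(p) + seek_target(p)) / 2

# chain: seek first, then the obstacle rule corrects the result
chained = avoid_obstacle(seek_target(p))

print(parallel)  # negative: the blend violated the obstacle constraint
print(chained)   # 0.0: the constraint always wins
```

This is exactly the crash scenario described above: blending independent contributions can push the particle through the obstacle, while chaining gives the obstacle rule priority.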

Hello All,

In general I agree with your opinion about writing modular plugins.
I think that the modular construction of a complex algorithm in different nodes/plugins would be the best way to prioritize the advantages of vvvv.

But in some cases it could get very complex for the user and also very cpu-intensive.
Especially the flocking behavior is very CPU intensive. The pre-version of the Flocking-behavior plugin runs at 60 fps as long as you don’t top the count of 500 objects. With some sorting
algorithm ( …implementing it at the moment ) I think you can get 1000 - 2000 objects. For this plugin every loop I can save is very important!!!


  • When you have a modular multi-plugin flocking behavior you have loops for each plugin node which loop through ALL flocking objects. In ONE plugin you can handle this in one major loop.
  • Maybe another CPU stress is to input and output all the data. I think these are also more loops for each plugin node. (please correct me if I’m wrong)

For algorithms like the flocking behavior, there will be cases where you have to enlarge the simple multi-plugin construction with a lot of Switch, Equals … nodes. Or you have to arrange them in a hierarchical chain and transfer state data.

For example:

  • Each flocking object decides in different situations to calculate the separation force with a different weight factor. When a predator is around, the object flees as fast as it can and has to choose a different weight factor for the separation to avoid collisions.
  • Each flocking object has to decide whether it is a flocking member or the leader of the flock. Based on this decision the object follows the force field or not.
  • To get multiple flocks you have to transfer flock IDs (or a global Bin Size) for all objects.

–> Because of all this confusing stuff (and there will be more when you have to program it…) the modular multi-plugin construction will maybe get too complex (and maybe too tedious) for some users. I think in this case it would be a greater benefit for the users to have ONE fast and simple node.

Please tell me what you think.

Greets PVC

In order to do something like this, I had already modified “monodata” to take a “mod” input so the value could be modified each frame. This allows me to make x,y,z,(n),… particle systems where not only the position of each particle changes, but other qualities such as hue, size, etc. can dynamically change as well.

This is just vvvv code, and very low-level, but it’s very simple and relatively fast. I’ll post an update to the monobuf/monodata package soon if anyone is interested.

Hello guys,
just made a first proof of concept patch that shows how this modular particle system could work.
find attached.

let me describe the nodes:


  • get the position from the position cycle, get the velocity from the velocity cycle and apply newPosition=position+velocity.
  • maintain stability on spread count (with a getSpread); not a necessary feature if everything is built well :)
  • prepare all the data to be used outside the particle system (eg. data to transforms…)
  • send data via s (value) to the client for feedback.


(with server-client all the cycle wires are hidden for clearer patching. unc and i used this approach for a GPU particle system…)


  • overwrite data in the cycle
  • state initialization


  • subdivide the occupied space (the bounding box of the whole group of particles) into a 3d grid (you choose the cell’s size)
  • assign to each cell the list of indices (binsized spread) of the contained particles.
  • create a list (binsized spread), for each particle, containing the indices of the particles that stay in the same cell and in the 26 neighbouring ones (3x3x3 cells minus the central occupied one)
  • outputs a Mates Index spread and a Bin Size spread.
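In code, the MateFinder idea amounts to spatial hashing; a simplified sketch (my own naming, not the module’s actual implementation):

```python
# Hash every particle into a grid cell, then collect as "mates" the
# particles from the same cell and the 26 neighbouring cells (3x3x3 block).
from collections import defaultdict
from itertools import product

def find_mates(positions, cell_size):
    cells = defaultdict(list)
    for i, (x, y, z) in enumerate(positions):
        cells[(int(x // cell_size), int(y // cell_size),
               int(z // cell_size))].append(i)
    mates = []
    for i, (x, y, z) in enumerate(positions):
        cx, cy, cz = (int(x // cell_size), int(y // cell_size),
                      int(z // cell_size))
        found = []
        for dx, dy, dz in product((-1, 0, 1), repeat=3):
            found += [j for j in cells[(cx + dx, cy + dy, cz + dz)] if j != i]
        mates.append(found)
    return mates  # per particle: indices of its potential flock mates

pts = [(0.1, 0.1, 0.1), (0.9, 0.2, 0.3), (5.0, 5.0, 5.0)]
print(find_mates(pts, cell_size=1.0))  # [[1], [0], []]
```

The flocking module then only compares each particle against its mates instead of against all N particles, which is exactly the loop saving PVC asked for.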


  • gets index and binsize data from MateFinder3Dgrid and applies the flocking behaviour just between mates.
  • outputs the original position and a “flocking velocity”
this is a stupid algorithm, not as advanced and refined as PVC’s one; it’s just a fast proof of concept. it shows anyway how it could be integrated with the other plugins.


  • check the distance from a Predator and compute the Escape velocity vector
    again, really stupid module… just to give the idea.


  • checks boids position and keeps them inside a spherical space.
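One simple way such a containment module could work (an assumption on my side; reflecting or wrapping would be alternatives): if a boid has left a sphere of radius R around the origin, output a velocity pushing it back inside.

```python
import math

def contain_in_sphere(pos, radius):
    # distance of the boid from the sphere's center (the origin)
    d = math.sqrt(sum(c * c for c in pos))
    if d <= radius or d == 0.0:
        return (0.0, 0.0, 0.0)              # inside: no correction needed
    overshoot = d - radius
    # velocity pointing back toward the center, sized to the overshoot
    return tuple(-c / d * overshoot for c in pos)

print(contain_in_sphere((0.0, 2.0, 0.0), radius=1.0))  # pushes back along -y
print(contain_in_sphere((0.1, 0.0, 0.0), radius=1.0))  # (0.0, 0.0, 0.0)
```

Like the other modules it only outputs a Velocity contribution, so it slots into the same Position-in/Velocity-out chain.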


i’m not sure i get your point, but i can’t see why you couldn’t write an “optimization plugin” in your modular particle system that provides information to all modules/plugins in order to save loops and useless iterations… as i did in my patch.

I’m not talking about breaking a complex plugin into hundreds of modular plugins;
I think it would be great to share, between different plugins, just the few pieces of information that allow the plugins to interact.
the key point is (gregsn, correct me if i’m talking bullshit :D …) to share the same states between different plugins so that they can communicate.

let’s see if i understand how plugins work eheheh :
in your flocking plugin:

  • there’s an initial state
  • when the algorithm is executed it reads from the initial state and evaluates the results.
  • then the plugin writes the result somewhere… over the rainbow… in some buffer, for feedback
    does what i say make any sense? :D
    if yes, i just say:
  • instead of reading from the internal buffer, the plugin will read states from input (from the patch)
  • instead of writing the final computation in the internal buffer, the plugin will output these data to the patch.

please, all of you coders, let me know if i’m completely wrong or i’m missing something :)

Let me also know what you think about the attached patch.


VVVV Particle Library Proof.zip (41.1 kB)

how would we attribute a unique secondary color, a certain weight, a specific symbol, a special texture etc. to every particle? could this be integrated as a plugin? or maybe each particle can get an id, so additional attributes besides position can be looked up in a Dictionary?

edit: one thought i had when reading this: maybe it is time for a new general data type: Particle. that way we would have a way to code general behaviours, while still being able to extend any attributes in an object oriented fashion

I agree with you that we’ll need more attributes in a complex particle system.

  • Position
  • Velocity
  • Age
  • ID
  • custom (tag for choosing texture or other stuff…)
  • Scale
  • Mass
  • Damping

I thought about a new data type too, also looking at daves’ ParticleSuite suggested by joreg in this thread.
in the end it’s up to you programmers to choose the best way to manage all these particle attributes.
possible scenario:

  • new data type “particles”.
  • plugins use this particles data type => cleaner patches (just a single connection between plugins of a chain)
  • there could be specific nodes that read data from this particles data type and convert this information to values/transforms/colors/strings; like “getPosition”/“getTransform”/“getVelocity”/“getColor”/“getStaFava”/…
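Sketched in code, such a particle data type plus its accessor nodes might look like this (all names hypothetical):

```python
from dataclasses import dataclass, field

@dataclass
class Particles:
    # one object travels along the plugin chain instead of many spreads
    position: list = field(default_factory=list)
    velocity: list = field(default_factory=list)
    color:    list = field(default_factory=list)

# "getPosition"-style accessor nodes: back to ordinary value spreads
def get_position(p: Particles) -> list:
    return p.position

def get_velocity(p: Particles) -> list:
    return p.velocity

p = Particles(position=[0.0, 1.0], velocity=[0.5, -0.5])
print(get_position(p))  # [0.0, 1.0]
```

The chain then carries a single Particles connection, and only the accessor nodes unpack it where plain values/transforms/colors are needed.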

I can’t stop pointing out the great benefits of having a modular particle system.

  • you could turn specific plugins of the chain ON/OFF, affecting the final behaviour; in this way you could create an evolving particle system.
  • you could have an “infinite” number of different plugins in the same particle system: if they are not all turned ON at the same time, they don’t affect performance; when you turn them ON they start acting on the current state of the particle system!
  • finally a way to port all the particle behaviours out there on the web, without the need to build the “entire” final behaviour in a single node each time; we could focus on specific algorithms and easily insert them into a working particle engine.
  • a super nice way to discover new behaviours by combining plugins in different ways.
    and so on…

dunno if it makes sense
re: new data type,
particle attributes: dottore list

  • randomness variation for all attributes
  • over life control for all attributes
    emitter plugin: range, angle
    shading plugin: ambient, diffuse, specular, reflection
    physics plugin fluid: gravity, physic time factor (over life), spin, turbulence /fractal fields, wind
    physics plugin bounce: bounce, collision, sliding, sticking, killing
    obstacle plugin:…
    predator plugin:…
    and so on
    render mode: preview/full render

a new datatype would help keep patches lightweight, clean and logical and would ease the high level construction of such a system, since you would only need to (re)connect a few links.

also, you would only need to work with one FrameDelay (Particle); note to self: plugin API however would need to add the possibility to code nodes that allow short cut graphs.

and: you could also benefit in several re-factoring scenarios, provided we follow some rules when designing nodes that interact with the new datatype. i will try to develop those afterwards.

however the beauty of a new datatype also comes with drawbacks or challenges…

Performance vs. Staying multi purpose vs. Usability without Coding

As seen above, the list of possible parameters gets longer and longer and still some users will badly miss some particle property.
As soon as you want to add a parameter you need to go into c# code and therefore exclude those users that don’t want to go that far. This is the main drawback. You could handle the one or 2 “strange” parameters outside, but it would feel pretty unnatural to bundle some particle properties in a certain particle data type and work with some others outside the data type.

The other option of just adding “all” parameters to satisfy everybody will not satisfy those, who want a fast and light engine. So we are stuck and need to find a way that allows the definition of custom particle datatypes without much code duplication and without reinventing the wheel. However, still one of the many possible particle data types should be defined as a standard for a certain type of animation to allow users without c# coding experience to use the feature.

Still sticking with it

I attached a limited, but working technical sketch of how to separate between

  • a particle library working on interfaces and generic types and
  • a particle struct that implements the interfaces and “emits” the actual nodes.

It shows how to add interfaces and implement general purpose (generic) nodes, that will work for any future particle class that implements the interfaces needed to achieve your effect.
By keeping the interfaces very small you can create a datatype that only has the fields you need.
By providing getters and setters you can patch object oriented and are never forced to feed data into a pin that you currently don’t care about. no need to feed data from a split to a join. just set the one property you need to set…
There is no complex plugin realized in this system so far. But it would be easily possible. Just like the setter nodes, one would have an ISpread as input and output and then set some constraints on T. (where T: IRotatable, ILocatable, …)

(note that the serialization -> framedelay (string) -> deserialization) is the most hungry part of the patch and also required some c# coding that could vanish as soon as framedelays are codable in c#. designing own particle datatypes then is only a matter of little c# plug&play.)
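Stepping outside of c# for a moment, the interface idea can be sketched in Python (hypothetical names; the actual demo uses c# interfaces and generics): a behaviour is generic over any particle type that implements the one small interface it needs.

```python
from typing import Protocol, TypeVar, List

class ILocatable(Protocol):
    # the minimal interface a "drift" behaviour needs
    x: float
    def moved(self, dx: float) -> "ILocatable": ...

T = TypeVar("T", bound=ILocatable)

def drift_all(particles: List[T], dx: float) -> List[T]:
    # works for ANY particle class exposing the ILocatable interface,
    # regardless of what other fields it carries
    return [p.moved(dx) for p in particles]

class MyParticle:
    # a custom particle type with only the fields it actually needs
    def __init__(self, x: float):
        self.x = x
    def moved(self, dx: float) -> "MyParticle":
        return MyParticle(self.x + dx)

print([p.x for p in drift_all([MyParticle(0.0), MyParticle(1.0)], 0.5)])
# [0.5, 1.5]
```

Keeping the interfaces this small is what lets users define lean custom particle datatypes without touching the generic behaviour code.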

There is maybe more to say about it. But i don’t want to bore ppl, so just ask if you want to contribute to that approach/take it from there or just want to discuss the theoretical stuff…

Particle Demo.zip (21.1 kB)

please note that all the small effect patches are purely functional. only that way is it easy to throw away particles at any time from anywhere within the spread…

@dottore: i like the idea of having no cyclic patch link. and the visual outcome! hope that someone will go from here and translate your modules into c#!

just wow!!! :D
i was starting to think nobody was interested anymore in this idea…
great gregsn!!!

love your sketch patch.
i really like the idea of letting every vvvv (non-coder) user be able to “interact” with particles, using simple nodes like getPos/vel to build modules.
I think this could also be an easy and fast way to prototype more complex coded particle plugins.
this is my (non-coder user) point of view, but i hope all coders will enjoy this new approach to vvvv particles and the possibilities it brings.
i’ll play a bit with your sketch patch and let you know anything that comes to my mind.
ciao! :)

massively interested, but way over my head :)