I was thinking of using the GPU particles with boygrouping. Could I get the same behaviour running on two different computers? For example, field attraction. I don't know if it has any random behaviour inside, besides the random generation of the particles, which I think I could control with the gaussian spread.
I tried this a bit and wasn't able to come to a very satisfactory solution. The whole point of GPU particles is to do the simulation on the GPU to get complex behavior at a decent frame rate. Once you have to send an entire texture across the network (which, to my knowledge, you can't even do with boygrouped nodes), you lose all the benefit of doing the computation on the GPU in the first place.
If you really did want to do this and your simulation was complex enough to warrant it, I think you could write a custom plugin (C#) that iterated over your simulation texture and either sent it over UDP or translated it into positions (that were sent via boygrouping). Using the Pipet node would be waaaay too slow.
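The sending side could look something like this (just a sketch; it assumes you've already read the simulation texture back into a float[] of interleaved XYZ positions, which is the framework-specific part, and ParticleSender is a made-up name, not an existing node or class):

```csharp
using System;
using System.Net.Sockets;

// Sketch: pack particle positions into a datagram and send them over UDP.
// The float[] is assumed to hold interleaved XYZ triples already read back
// from the simulation texture inside your plugin.
class ParticleSender : IDisposable
{
    readonly UdpClient client = new UdpClient();
    readonly string host;
    readonly int port;

    public ParticleSender(string host, int port)
    {
        this.host = host;
        this.port = port;
    }

    public void Send(float[] positions)
    {
        // 3 floats x 4 bytes per particle; a big simulation texture would
        // need to be split across several datagrams (~64 KB UDP limit).
        var bytes = new byte[positions.Length * sizeof(float)];
        Buffer.BlockCopy(positions, 0, bytes, 0, bytes.Length);
        client.Send(bytes, bytes.Length, host, port);
    }

    public void Dispose()
    {
        client.Close();
    }
}
```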
I agree with you, but I was thinking of only sending basic commands. For example, imagine using Field 2d and banging the update of the texture. The thing is: can both GPU particle systems work exactly the same with the attraction fields, given the same parameters, obviously…
The issue with that is that the two machines are not going to be rendering the exact same frame at the same time. Frame rate is highly variable, which means that one machine could be quite a bit ahead of the other in the simulation. This will cause the simulations to go out of sync quite quickly.
The key to success is to make all movement dependent on one value which the server sends, e.g. a time value. The clients may filter the value a bit and use it as the only source of animation for the particles. If all random seeds and so on are then the same, the clients will all render the same animation and stay in sync…
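In code, the idea looks roughly like this (a C# sketch only; in practice this would live in a shader, and the hash and orbit formula are just placeholders for whatever the actual animation is):

```csharp
using System;

// Every particle position is a pure function of the server's time value
// and a shared seed. Identical inputs on every client produce identical
// frames, so there is no per-frame state that can drift apart.
static class DeterministicParticles
{
    // Hash-based pseudo-random value in [0,1), stable for a given seed/index.
    static float Rand(int seed, int index)
    {
        unchecked
        {
            uint h = (uint)(seed * 374761393 + index * 668265263);
            h = (h ^ (h >> 13)) * 1274126177u;
            return (h ^ (h >> 16)) / 4294967296f;
        }
    }

    // Position depends only on (seed, index, time), never on the last frame.
    public static void Position(int seed, int index, float time,
                                out float x, out float y, out float z)
    {
        float phase = Rand(seed, index) * 6.2831853f;
        float radius = 1f + Rand(seed, index + 1000000);
        x = radius * (float)Math.Cos(time + phase);
        y = radius * (float)Math.Sin(time + phase);
        z = Rand(seed, index + 2000000) * 2f - 1f;
    }
}
```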
I'm still not sure - even if one client only renders one extra frame once in a while, you'll end up with different positions on the different clients. Once in different positions, the particles will never come back together; it'll just get worse.
I was considering this a while back.
I wanted a volumetric 3D fluid simulation 'boygrouped'.
The plan was to have one machine running the simulation, output it over DVI, and capture that DVI signal on all the clients.
This only makes sense if the bulk of your calculation can be output as a 'texture' (and therefore rendered/reloaded on the clients).
Major negatives:
you have to expect some gamma-type issues; some calibration may be necessary
you can only transmit 8-bit uint RGB (i.e. no float values)
you need to pull all the data on the clients into the CPU and then onto the GPU (this isn't as slow as it sounds, though)
So this isn't going to work with your existing works off the bat (e.g. the ParticlesGPU library); you'd need to make alterations (encode/decode your floats into 8-bit RGB; see the sketch at the end of this post).
Advantages:
can perform a lot of the processing on the server
your clients should be very much in sync if you’re using the same capture device on each
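Here's a rough sketch of that encode/decode alteration (it assumes your floats are first remapped into [0,1], e.g. via a known bounding box; in practice this would run in a shader on both ends, but the arithmetic is the same):

```csharp
// Pack a float in [0,1] into three 8-bit channels and back, giving
// ~24 bits of precision through an RGB-only DVI link. Values outside
// [0,1] must be remapped into that range first.
static class Rgb8Codec
{
    public static void Encode(float v, out byte r, out byte g, out byte b)
    {
        // Spread the value across three bytes, most significant first.
        uint bits = (uint)(v * 16777215f); // 2^24 - 1 steps
        r = (byte)(bits >> 16);
        g = (byte)(bits >> 8);
        b = (byte)bits;
    }

    public static float Decode(byte r, byte g, byte b)
    {
        uint bits = ((uint)r << 16) | ((uint)g << 8) | b;
        return bits / 16777215f;
    }
}
```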
sugokuGenki: That is actually a very, very clever idea. A DVI cable can transfer a TON of information quickly, with virtually no overhead for network traffic, protocol encoding or buffering. You would need a very fast capture card, and I still think the whole capture -> CPU -> GPU process would be quite slow, though.
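(Back-of-envelope: single-link DVI runs up to a 165 MHz pixel clock at 24 bits per pixel, roughly 4 Gbit/s of raw pixel data. Even a 1024x1024 simulation texture at 60 fps is about 1.5 Gbit/s, which would already saturate gigabit Ethernet but fits easily over DVI.)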
I was going to try something similar to what tonfilm suggested to make a fluid-like simulation based entirely on Perlin noise. The key is that you would have to make your simulation entirely deterministic. Start both machines off with a boygrouped seed, then generate all the particle positions either from a lerp value between a set of textures denoting particle states and/or from a set of Perlin noise values. This has the limitation that your simulation would be fairly predictable (other than the lerp value sent to clients), but it should stay pretty much perfectly in sync. Roughly like the sketch below.
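(A sketch only; stateA/stateB stand in for the particle-state textures, and the names are illustrative, not actual vvvv nodes:)

```csharp
// Each particle position is blended between two precomputed state arrays
// by a single lerp value the server boygroups to all clients. With
// identical states and seed everywhere, only t travels over the network.
static class LerpedStates
{
    static float Lerp(float a, float b, float t)
    {
        return a + (b - a) * t;
    }

    // stateA/stateB hold interleaved XYZ triples (the "state textures").
    // Any extra noise added on top must be a pure function of
    // (seed, index, t) to stay deterministic across machines.
    public static float[] Blend(float[] stateA, float[] stateB, float t)
    {
        var result = new float[stateA.Length];
        for (int i = 0; i < stateA.Length; i++)
            result[i] = Lerp(stateA[i], stateB[i], t);
        return result;
    }
}
```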
Interesting discussion and very relevant to what I am working on now.