DX11.Particles: emit random 3D objects

hi all,

I want my particle system to emit random 3D objects from a pool of 30 pieces. What is the best way to approach this? My particles already have IDs, and the objects are low-poly with UVs and textures. I do not want to group 30 Gouraud shaders and render each object separately. Is there a better way? Any hints appreciated.

cheers

Just put all your 30 pieces as a spread into the instancer, and then where you sample the buffer do something like [particleId % instance count] on it.
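A minimal sketch of what that lookup could look like in the vertex shader; buffer and parameter names (PosBuffer, GeomCount) are placeholders, not the actual DX11.Particles pins:

```hlsl
// Hypothetical sketch: one instance per particle, modulo picks the piece.
StructuredBuffer<float3> PosBuffer;   // particle positions from the system
uint GeomCount = 30;                  // number of pieces in the spread

struct VS_IN
{
    float4 pos : POSITION;
    uint iid   : SV_InstanceID;       // one instance per particle
};

float4 VS(VS_IN input) : SV_Position
{
    uint particleId = input.iid;
    uint pieceIndex = particleId % GeomCount;   // which of the 30 objects
    // ...use pieceIndex to pick the subset, then offset by the particle position
    float3 p = PosBuffer[particleId];
    return float4(input.pos.xyz + p, 1.0);
}
```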

Thx, I had a look at your “GeometryBuffer (DX11.Particles Instanced)”. Did you mean that? The “Geometry Buffer Debug” draws a pointlist, but which other shader takes that data? I can’t get it to work. I also had a look at the instancing help patch, but when I feed a spread of geometry into it, it just draws the first one.

GeometryBuffer (DX11.Particles Instanced) is actually a different approach…
I did a few tests and it seems it’s not that easy to do… Especially since Cons Geometry doesn’t work with the IndexIndirect instancer.

Need to do a few more tests, don’t have time atm…

@tmp told me he is too busy to help, so maybe you can make my 2 cents worth your while, even though I am not as fluent in HLSL as anthokio or others.

If you want truly random object assignment, you have to add another field to the particle and a pin to your emitter (like geometryIndex), and provide a randomized ValueBuffer to your emitter to define the field at the particle’s birth.
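As a rough sketch, that could look like the following; the struct layout, buffer names, and thread count are illustrative only, not the actual DX11.Particles definitions:

```hlsl
// Hypothetical sketch of the extended particle struct and emit step.
struct Particle
{
    float3 position;
    float3 velocity;
    float  lifespan;
    uint   geometryIndex;   // new field: which of the 30 objects to draw
};

RWStructuredBuffer<Particle> ParticleBuffer;
StructuredBuffer<float> RandomBuffer;   // randomized ValueBuffer (0..1) from the patch
uint GeomCount = 30;

[numthreads(64, 1, 1)]
void CS_Emit(uint3 dtid : SV_DispatchThreadID)
{
    uint i = dtid.x;
    // assign a random object at the particle's birth
    ParticleBuffer[i].geometryIndex = (uint)(RandomBuffer[i] * GeomCount) % GeomCount;
}
```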

To be able to do what you want, you have to put all 30 geometries into one big one, and put the information where the data for each individual geometry starts and ends into a buffer (pretty much what instancenoodles from @everyoneishappy is doing with its IID mechanism).

Then you can use this information in your Gouraud shader (again, probably very similar to instancenoodles, plus the access via iid[geometryIndex]).
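The offset buffer for such a merged geometry could be sketched like this; the names are illustrative, similar in spirit to the IID mechanism mentioned above:

```hlsl
// Hypothetical sketch: per-geometry vertex ranges within one merged buffer.
StructuredBuffer<uint2> GeomRanges;   // per geometry: (firstVertex, vertexCount)

bool VertexBelongsToGeometry(uint vertexId, uint geometryIndex)
{
    uint2 range = GeomRanges[geometryIndex];
    return vertexId >= range.x && vertexId < range.x + range.y;
}
```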

I’m quite new to shaders… this is getting more low level than I am used to :)
Did I get it right?

  • Make *.dae with all objects as separate subsets
  • Get the vertex count for each object
  • Feed the whole spread of geometry into the shader
  • Have the vertex shader process only the vertices that belong to one of the models, based on vertex count and particle iid

Textures will work the same way: I pick them out of an array by iid. But what about the UV coordinates of each model?

Not urgent, just want to improve a running project.
Thx for your time, guys!

@velcrome, i think that should be possible with some small modification:

I guess your geometry index is the way to go…
Don’t think there is much else, from what I see.
Basically, right now I use a file with 3 subsets, so what it will do is sample all buffers against the first subset, then against the second, then against the third. So on the screen they will be merged…

All we have to do is somehow split incoming buffer by amount of subsets…

Test model, if you wanna play around with it: particle.zip (6.1 KB)

There we go ;)


MultiGeometryGouraudPoint.zip (16.1 KB)

Very cool, thx!
Also works great with textures when sampling from an array.

As I understand it, the whole shader is computed once for each geometry in the incoming spread, and if the particle id does not match the geometry index of the present draw call, it gets discarded. Is that correct?

	if ((particleIndex % GeomCount) == GeomIndex)
		Out.Draw = true;
	else
		Out.Draw = false;

So basically one draw call per geometry subset? Would it be possible to do this with a geometry shader in just a single draw call? Or what performance advantage would discarding in the GS give me? On my laptop I’m just GPU-bottlenecked.

Would the GS approach mean feeding in the geometry in one piece and sampling the subsets by vertex count? And concerning UVs… would I have to lay out the UVs of all subsets within one large texture?

Would be nice if you could elaborate on that.

Cheers

Yea, that’s totally right, but the way it’s done right now is not the most performant at all.
So let’s refactor a bit:

Instead, I would do a line like this in the vertex shader:
Out.GeometryIndex = particleIndex % GeomCount;
Then instead of having
if (!Out.Draw) discard;
in the pixel shader (which does not prevent the geometry from being drawn),
you need to do the “discard” in the GS instead; that would be faster…
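A rough sketch of what that GS-level discard could look like; the struct, semantic names, and loop are illustrative, not the actual shader from the patch:

```hlsl
// Hypothetical sketch: drop whole primitives in the geometry shader
// instead of per-pixel, so the rasterizer never touches those pixels.
struct GS_IN
{
    float4 pos         : SV_Position;
    uint geometryIndex : TEXCOORD0;   // set in the VS: particleIndex % GeomCount
};

uint GeomIndex;   // subset index of the current draw call

[maxvertexcount(3)]
void GS(triangle GS_IN input[3], inout TriangleStream<GS_IN> stream)
{
    // emit the triangle only when the indices match; otherwise emit nothing
    if (input[0].geometryIndex == GeomIndex)
    {
        for (uint i = 0; i < 3; i++)
            stream.Append(input[i]);
        stream.RestartStrip();
    }
}
```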

About the geometry buffer and all that: I think it will overcomplicate stuff quite a bit, but maybe it would be faster…
Well, we merged the subsets into one and now we have subsetId in the input semantics; we still have to test (particleIndex % GeomCount) == In.subsetId and then discard in the GS.
In the end, what it will do is test each vertex of your merged geometry against each buffer slice…

This topic was automatically closed 365 days after the last reply. New replies are no longer allowed.