First, it would probably help people help you if you can make an example patch that does not depend on having a kinect2 plugged in (unless you need it for your question).
You may need to rethink your workflow a little if you want to use noodles in this way. Those particle filter nodes use indirect dispatch under the hood, as far as I know. This means that your spreadcounts/buffer sizes change from frame to frame and are managed out of sight on the GPU. This is not currently supported in noodles (the hint is in the name, actually: ‘attributebufferfixedsize’).
So you have a few options:
-prototype your effect with noodles, then rewrite the modules you need with indirect dispatch
-simply make more particles than you will need and, for example, scale the ones that don’t meet your criteria to 0
-wait for noodles to support indirect dispatch. No promises: I’ve actually got that working, but I won’t add it unless I can do it in a way that doesn’t adversely affect the pack’s overall performance/ease of use
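The second option can be sketched outside of vvvv. Here is a minimal NumPy illustration of the idea (hypothetical names and criterion; in practice this would live in a compute shader): the buffer stays at a fixed, over-allocated size, and particles that fail the criterion are simply scaled to 0 so they render invisibly instead of being removed.

```python
import numpy as np

rng = np.random.default_rng(0)

MAX_PARTICLES = 8  # fixed buffer size, chosen larger than ever needed
positions = rng.uniform(-1.0, 1.0, size=(MAX_PARTICLES, 3))
scales = np.ones(MAX_PARTICLES)

# hypothetical criterion: keep only particles inside the unit sphere
keep = np.linalg.norm(positions, axis=1) < 1.0

# instead of shrinking the buffer (impossible with a fixed size),
# zero the scale of rejected particles so they draw as nothing
scales[~keep] = 0.0
```

The buffer size never changes, so a fixed-size system like noodles stays happy; the cost is that you still pay for the dead particles on the GPU.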
Found kinect2gs, so there’s no more need for the particles part.
To state it more clearly: the goal is a simplified and stable mesh of the Kinect user.
Found out how to reduce the size of the point buffer with getslice, but as far as stability goes:
why is there no consistency in the value order between frames of the Kinect point position buffer?
If this is a hardware condition, can we have the rows sorted every frame for stability?
Like in this example.
Or maybe something from fieldtrip can be applied to the Kinect depth?
Not sure how fieldtrip would help in this case? Or maybe I don’t get what you want?
In that case kinect2gs could probably be marked as your answer? Append buffers do indeed scramble the order at the hardware level. It’s actually kind of a pain to deal with them, and not as simple as just applying a sort.
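To illustrate why a plain sort isn’t enough: an append buffer hands you the elements in an arbitrary order each frame, so you can only restore a stable order if every element also carries a persistent key (for example its source pixel index) that you can sort on. A small NumPy sketch of that idea, with hypothetical names and the scrambling simulated by a shuffle:

```python
import numpy as np

rng = np.random.default_rng(1)

# pretend each appended element carries (source_index, x, y, z)
source_index = np.arange(6)
points = rng.normal(size=(6, 3))

# the GPU appends in a nondeterministic order: simulate with shuffles
frame_a = np.column_stack([source_index, points])[rng.permutation(6)]
frame_b = np.column_stack([source_index, points])[rng.permutation(6)]

# sorting by the carried key restores the same order every frame
stable_a = frame_a[np.argsort(frame_a[:, 0])]
stable_b = frame_b[np.argsort(frame_b[:, 0])]

assert np.allclose(stable_a, stable_b)
```

The catch is that the append buffer doesn’t give you such a key for free: you have to write it into each element yourself at append time, and then pay for a GPU sort every frame.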
The goal is a stylized / low-poly version of the Kinect mesh.
I can only resample random points on the mesh if they stay consistent from frame to frame.
The count and position of the hardware laser dots seem constant,
so I’m thinking they get scrambled because they are not being fed in the same order every frame.
So a buffer containing all these points should render in a similar order every frame, if we could apply sorting / former-index selection on the GPU.
If raytrace can act like a pipette for the Kinect depth texture, maybe there are fieldtrip tricks to alter the mesh afterwards (i.e. without re-indexing).
I have done something like this by effectively deforming a grid by the depth map and then emitting from the vertices of that (and culling by distance). It’s not as effective, as there is no movement in the emission positions…
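The grid-deformation approach above can be sketched in NumPy (toy depth map and hypothetical threshold values; in vvvv this would happen in a vertex or compute shader). The key property is that the grid vertices keep the same index from frame to frame, so emission positions are stable by construction:

```python
import numpy as np

H, W = 4, 4
# toy depth map in metres; in practice this comes from the Kinect depth texture
depth = np.full((H, W), 2.0)
depth[1:3, 1:3] = 1.0  # a nearer blob in the middle

# a regular grid of vertices; their order never changes between frames
ys, xs = np.meshgrid(np.linspace(-1, 1, H), np.linspace(-1, 1, W), indexing="ij")

# displace each grid vertex along z by the sampled depth
vertices = np.stack([xs, ys, depth], axis=-1).reshape(-1, 3)

# cull emission positions farther than a distance threshold
MAX_DIST = 1.5
emitters = vertices[vertices[:, 2] < MAX_DIST]

print(len(emitters))  # → 4, only the near vertices remain
```

The trade-off mentioned above shows up here too: vertices only move along the depth axis, so the emission positions themselves never drift around the way free particles would.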