Kinect Particles to Noodles

[edit: the first three posts are no longer relevant due to a strategy change]

looking at sending a pointcloud from the kinect to noodles

  • attributebuffer
  • edit
  • stabilize

[first question was why this patch is blank (solved: it was missing a spreadcount)]
kinect to noodles.v4p (12.7 KB)


where along the way between the emitter and noodles should the pointcloud be reduced?

  • via box intersection (crop away the room)
  • via resampling every n points (rarefy; see the sketch below)
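
for the rarefy option, a minimal compute sketch (hypothetical buffer and parameter names, not a node from any pack) could copy every nth point into a smaller fixed-size buffer:

```hlsl
// Sketch of 'every n points' rarefying - hypothetical names throughout.
StructuredBuffer<float3>   FullCloud; // full kinect pointcloud
RWStructuredBuffer<float3> Rarefied;  // reduced, fixed-size output

cbuffer Params
{
    uint Stride;   // keep 1 point out of every Stride
    uint OutCount; // size of the Rarefied buffer
};

[numthreads(64, 1, 1)]
void CS(uint3 dtid : SV_DispatchThreadID)
{
    if (dtid.x >= OutCount) return;
    Rarefied[dtid.x] = FullCloud[dtid.x * Stride];
}
```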

atm,

  • worldfilter does not reduce the actual point count when passed through attributebufferfixedsize
  • particles seem to age indefinitely in the noodles part

edit kinect pointcloud.v4p (22.4 KB)

First, it would probably help people help you if you made an example patch that does not depend on having a kinect2 plugged in (unless you need it for your question).

You may need to rethink your workflow a little if you want to use noodles in this way. Those particle filter nodes are using indirect dispatch under the hood, as far as I know. This means that your spreadcounts/buffersizes are changing frame to frame and are managed out of sight on the GPU. This is not currently supported in noodles (the hint is actually in the name: ‘attributebufferfixedsize’).
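
To illustrate what ‘managed out of sight on the GPU’ means, here is a minimal sketch (hypothetical names, not the actual DX11.Particles source) of a filter pass built on an append buffer:

```hlsl
// Sketch of an append-buffer filter pass - hypothetical names throughout.
// Surviving particles are appended and the element count lands in a hidden
// GPU-side counter; the CPU patch never sees the new size without a readback.
StructuredBuffer<float3>       InputPos;    // all particle positions
AppendStructuredBuffer<float3> FilteredPos; // survivors, GPU-counted

cbuffer Params
{
    float3 BoxMin; // crop box, world space
    float3 BoxMax;
};

[numthreads(64, 1, 1)]
void CS(uint3 dtid : SV_DispatchThreadID)
{
    float3 p = InputPos[dtid.x];
    // keep only points inside the crop box
    if (all(p >= BoxMin) && all(p <= BoxMax))
        FilteredPos.Append(p); // counter incremented on the GPU
}
```

Getting that count back to the CPU takes a readback (e.g. ID3D11DeviceContext::CopyStructureCount plus a staging buffer), which is what indirect dispatch sidesteps - and what a fixed-size buffer never has to do at all.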

So you have a few options:
-prototype your effect with noodles, then rewrite the modules you need with indirect dispatch
-simply make more particles than you will need, and for example scale the ones that don’t meet your criteria to 0 (see the sketch below)
-wait for noodles to support indirect dispatch. No promises- I’ve actually got that working, but won’t add it unless I can do it in a way that doesn’t adversely affect the pack’s overall performance/ease of use
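
A minimal sketch of the scale-to-zero workaround (hypothetical buffer names): the buffer size never changes, rejected particles are just made invisible:

```hlsl
// Sketch of the scale-to-zero cull - hypothetical names throughout.
// The particle count stays fixed; particles failing the test are
// scaled to 0 so they no longer show up visually.
RWStructuredBuffer<float3> Pos;   // particle positions
RWStructuredBuffer<float>  Scale; // per-particle render scale

cbuffer Params
{
    float3 BoxMin; // keep-region, world space
    float3 BoxMax;
};

[numthreads(64, 1, 1)]
void CS(uint3 dtid : SV_DispatchThreadID)
{
    float3 p = Pos[dtid.x];
    bool keep = all(p >= BoxMin) && all(p <= BoxMax);
    Scale[dtid.x] = keep ? 1.0 : 0.0; // zero scale = effectively culled
}
```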

found kinect2gs, so there is no more need for the particles part,

to state more clearly, the goal is a simplified and stable mesh of the kinect user

found out how to reduce the size of the pointbuffer with getslice, but as far as stability goes,
why is there no consistency in the order of values between frames of the kinect point position buffer?
if this is a hardware condition, can we have the rows sorted every frame for stability?
like in this example

or maybe something from fieldtrip can be applied to the kinect depth?

@ggml What is fieldtrip? The matlab toolbox?

@h99 it’s a pack pre-released by @everyoneishappy at his node workshop. it deals in modular functions, fractal sums, distance fields, field lines, raymarching, volumes

@ggml Thank you!

Not sure how fieldtrip would help in this case? Or maybe I don’t get what you want?

In that case kinect2gs could probably be marked as your answer? Append buffers do indeed scramble the order at hardware level. It’s actually kind of a pain to deal with, and not as simple as just applying a sort.

goal is a stylized / low poly version of the kinect mesh

@sorting option
i can only resample random points on the mesh if they stay consistent from frame to frame
the count and position of the hardware laser-dots seem constant
so i’m thinking they get scrambled because they are not being fed in the same order every frame
so a buffer containing all these points should render in a similar order every frame, if we could apply sorting / former-index selection on the gpu (see the sketch below)
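
one way the former-index idea could look (a sketch assuming each point carries the depth-pixel index it came from - hypothetical layout, not kinect2gs internals): scatter the scrambled points back into a fixed-size buffer keyed by that index, so slot order is stable every frame without a full sort:

```hlsl
// Sketch of scatter-by-source-index - hypothetical layout throughout.
struct Pt
{
    float3 pos;      // world-space position
    uint   srcIndex; // depth-pixel index this point came from
};

StructuredBuffer<Pt>       Scrambled; // append-buffer output, random order
RWStructuredBuffer<float3> Stable;    // one slot per depth pixel

cbuffer Params { uint PointCount; };

[numthreads(64, 1, 1)]
void CS(uint3 dtid : SV_DispatchThreadID)
{
    if (dtid.x >= PointCount) return;
    Pt p = Scrambled[dtid.x];
    Stable[p.srcIndex] = p.pos; // stable slot = original pixel index
}
```

(slots whose pixel produced no point this frame would keep stale data, so they would also need clearing or a valid-flag)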

@fields option
if raytrace can act like a pipet for the kinect depth texture, maybe there are field tricks to alter the mesh afterwards (i.e. without re-indexing)

kinect mesh or kinect user? if it’s the user I would take the user texture and run some analysis on that to guide your sampling perhaps.

As for the dots you are talking about, that’s from KV1; KV2 used a completely different system afaik. In any case it’s a calibration pattern, not the actual sample points themselves.

The points are constant as far as where they are pixel-wise in the frustum, no?

I must admit I didn’t really understand your raytracing idea :P

I have done something like this by effectively deforming a grid by the depth map and then emitting from the vertices of that (and culling by distance). It’s not as effective, as there is no movement in the emission positions…
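
The grid-deform part could look something like this (a sketch with hypothetical names, not my actual patch): a static grid whose vertices are pushed along Z by the depth texture, so the topology stays fixed while the surface follows the user:

```hlsl
// Sketch of depth-map grid deformation - hypothetical names throughout.
Texture2D    DepthTex;      // kinect depth texture
SamplerState LinearSampler;

cbuffer Params
{
    float4x4 WorldViewProj;
    float    DepthScale; // depth units -> world units
};

struct VSIn  { float3 pos : POSITION; float2 uv : TEXCOORD0; };
struct VSOut { float4 pos : SV_Position; float2 uv : TEXCOORD0; };

VSOut VS(VSIn input)
{
    VSOut o;
    // displace each grid vertex along Z by the sampled depth
    float depth = DepthTex.SampleLevel(LinearSampler, input.uv, 0).r;
    float3 displaced = input.pos + float3(0, 0, depth * DepthScale);
    o.pos = mul(float4(displaced, 1), WorldViewProj);
    o.uv  = input.uv;
    return o;
}
```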

In the particles pack there is a node called Mesh (DX11.Particles.Kinect).
Is that what you want? You can lower the number of polygons by decreasing the resolution.

thanks for the replies,

a simpler description of resampling the kinect mesh would be this image

yes, that is exactly what you can do with the Mesh (DX11.Particles.Kinect) node.
there is also a Geometry (DX11.Particles.Kinect) node that outputs the geometry for use with different shaders.

be sure to download the latest dx11.particles pack from the github page.



are there ways to redo the topology, in order to work with more random(?) triangles (as above), instead of the extruded 2d squares?
