Some time ago I stumbled upon this post from SideFX’s Instagram:
Just out of curiosity, how would that be doable in vvvv? I’m not talking about the animation, just the projected splines on the mesh, as seen in the screenshot above.
You could fake it using a shader to cut out thin lines; the topographic shader in contributions would be a good starting point for that. Making them actual 3D splines would be trickier, although @everyoneishappy made something similar with particles and tangents, which might work if you could mesh them.
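If it helps, the cutout idea boils down to something like this in a pixel shader (a minimal sketch of the contour-cutout trick, not the actual topographic shader from contributions; the parameter names are made up):

```hlsl
// Minimal sketch: slice world space into thin repeating bands and
// discard everything that isn't near a band boundary.
float Spacing = 0.1;   // distance between lines (assumed parameter)
float Width   = 0.05;  // line thickness, as a fraction of Spacing

float4 PS(float4 p : SV_Position, float3 posW : TEXCOORD0) : SV_Target
{
    // repeating 0..1 ramp along one axis (or along any scalar field)
    float band = frac(posW.y / Spacing);
    // distance to the nearest band boundary
    float d = min(band, 1.0 - band);
    // keep only fragments close to a boundary; clip discards when < 0
    clip(Width - d);
    return float4(1, 1, 1, 1);
}
```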
I can think of a couple of ways of doing this. For a general-purpose approach I’d create a distance field from your mesh surface, then cross the gradient of that with a noise gradient. At lower noise frequencies it will look pretty close to your reference.
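In shader terms the cross-gradient field would look roughly like this (a sketch assuming the mesh SDF and a scalar noise are each baked into 3D textures; the texture names are placeholders, and FieldTrip wires this up with nodes instead):

```hlsl
Texture3D SDFTex;    // signed distance to the mesh surface (assumed input)
Texture3D NoiseTex;  // any smooth scalar noise (assumed input)
SamplerState Lin { Filter = MIN_MAG_MIP_LINEAR; };

// central-difference gradient of a scalar volume at normalized coords uvw
float3 Gradient(Texture3D tex, float3 uvw, float eps)
{
    float3 dx = float3(eps, 0, 0);
    float3 dy = float3(0, eps, 0);
    float3 dz = float3(0, 0, eps);
    return float3(
        tex.SampleLevel(Lin, uvw + dx, 0).x - tex.SampleLevel(Lin, uvw - dx, 0).x,
        tex.SampleLevel(Lin, uvw + dy, 0).x - tex.SampleLevel(Lin, uvw - dy, 0).x,
        tex.SampleLevel(Lin, uvw + dz, 0).x - tex.SampleLevel(Lin, uvw - dz, 0).x)
        / (2.0 * eps);
}

// the resulting field is tangent to the SDF iso-surfaces and divergence-free,
// so particles advected through it trace lines that hug the mesh
float3 CrossGradientField(float3 uvw)
{
    return cross(Gradient(SDFTex, uvw, 0.01), Gradient(NoiseTex, uvw, 0.01));
}
```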
Here’s what a FieldTrip graph for that would look like:
Otherwise, if you need it to be a bit more specific to your mesh, or don’t want to have to convert your mesh to a volume first, you could probably create a couple of spline paths inside your mesh and project those radially onto the surface (use front culling rather than back culling). Like a bunch of 1px viewports.
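The culling bit is just a one-line render state, e.g. in a DX11 effect:

```hlsl
// flip culling so a camera placed inside the mesh sees the far wall,
// which is what lets you project the splines outward onto the surface
RasterizerState FrontCull { CullMode = Front; };
```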
Hope that gives you some ideas. I can pack up that patch if you want it.
@sebescudie Most welcome. I prefer a little noise, but you can also use a UniformVector(VF3D.Sources) rather than the noise if you want the lines to run around an axis. If you’re interested in that cross-gradient trick you can check out this paper: http://martian-labs.com/martiantoolz/files/DFnoiseR.pdf
It’s also the basis for the DivergenceFreeNoise (VF3D.source) node.
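In a nutshell, the identity the trick rests on (sketched here with $d$ the distance field and $n$ the noise):

```latex
% v is a curl, hence divergence-free, and it is perpendicular to
% \nabla d, so streamlines stay on iso-surfaces of the distance field
\mathbf{v} = \nabla d \times \nabla n
           = \nabla \times \left( d \, \nabla n \right)
\quad\Rightarrow\quad
\nabla \cdot \mathbf{v} = 0,
\qquad
\mathbf{v} \cdot \nabla d = 0
```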
@everyoneishappy Quick jump-in regarding “virtual scanning”: would it be feasible to use FieldTrip for resampling the Kinect mesh into a low-poly, more stable mesh? (As it stands, the Kinect mesh has no corresponding information between frames, which makes it impossible to reduce the poly count.)
@ggml You can look up “sdf gen” for an external tool. I do actually have a conversion patch as well (also not real-time); I may clean it up and add it to the next version of FieldTrip, since a lot of people have been asking about it. Honestly, I don’t think it’s the way to go for what you’re talking about, though. Having said that, you can make an SDF from a depth sensor (IIRC that’s actually how Kinect Fusion works), but it’s more of a progressive-scan sort of thing, and quite finicky.
@ggml Yes, probably better off as a separate thread. It’s not a real-time task. It can sort of be, but that’s more in the context of moving the sensor around a scene to build up a surface reconstruction from successive frames. Just look up Kinect Fusion if you want to know more about that.