How to convert a depthmap into a spread of vectors/vertices?

With the help of @dl-110 and @Szaben, I can now display the point cloud, using the depth image from a Microsoft Kinect and an effect file (by @herbst) that displays the point cloud as shown below.
To control the point cloud freely, I'm looking for a way to convert this depthmap into a spread of vectors or a spread of vertices, but so far I have no idea how.

  • Are there any nodes that will achieve this?
  • Is it possible to output them from an effect file?
  • Or how could it be achieved with a plugin/effect?

//@author: herbst
//@help: draws a mesh with a constant color
//@tags: template, basic

// --------------------------------------------------------------------------------------------------
// --------------------------------------------------------------------------------------------------

float4x4 tW: WORLD;        //the models world matrix as via the shader
float4x4 tV: VIEW;         //view matrix as set via Renderer (EX9)
float4x4 tP: PROJECTION;   //projection matrix as set via Renderer (EX9)
float4x4 tWVP: WORLDVIEWPROJECTION; //all 3 premultiplied 
texture Tex <string uiname="Texture";>;
sampler Samp = sampler_state    //sampler for doing the texture-lookup
{
    Texture   = (Tex);          //apply a texture to the sampler
    MipFilter = LINEAR;         //set the sampler states
    MinFilter = LINEAR;
    MagFilter = LINEAR;
};

//texture transformation marked with semantic TEXTUREMATRIX 
//to achieve symmetric transformations
float Amount = 0.1;
#define ArrSize 20
float2 Noise[ArrSize];
float4x4 tTex: TEXTUREMATRIX <string uiname="Texture Transform";>;
//the data structure: "vertexshader to pixelshader"
//used as output data of the VS function
//and as input data of the PS function
struct vs2ps
{
    float4 Pos  : POSITION;
    float2 TexCd : TEXCOORD0;
};
// --------------------------------------------------------------------------------------------------
// --------------------------------------------------------------------------------------------------

vs2ps VS(
    float4 PosO  : POSITION,
    float4 TexCd : TEXCOORD0)
{
    //declare output struct
    vs2ps Out;
    //get the color in the texture
    float4 texColor = tex2Dlod(Samp, TexCd);
    //offset the z coordinate
    PosO.z += texColor.r * Amount;
    PosO.xy += Noise[TexCd.x * (ArrSize-1)];
    //transform position
    Out.Pos = mul(PosO, tWVP);
    //transform texturecoordinates
    Out.TexCd = mul(TexCd, tTex);
    return Out;
}

// --------------------------------------------------------------------------------------------------
// --------------------------------------------------------------------------------------------------

// --------------------------------------------------------------------------------------------------
// --------------------------------------------------------------------------------------------------

technique TSimpleShader
{
    pass P0
    {
        VertexShader = compile vs_3_0 VS();
        PixelShader  = null;
    }
}

Yo bro,

Interesting approach! I have done this before using NODES only and no EFFECTS. I put the Kinect DEPTH texture into a PIPET node and then got the depth value out with an HSL (Split) node. HOWEVER, my method then involved passing all these depth positions (z positions) to a whole bunch of quads through a Transform node. That is not very efficient (all those quads), so the effect-node idea is good. But I think the depth > Pipet > HSL (Split) chain SHOULD answer your question, as it gives you a spread of X, Y, Z positions once you combine your depth z positions with a simple grid of x, y positions (using the CROSS node)…
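The node chain described above boils down to a simple mapping from depth pixels to XYZ positions. A minimal Python/NumPy sketch of that mapping (the function name, the -1..1 grid range, and the depth scaling are my assumptions, not vvvv nodes):

```python
import numpy as np

def depth_to_xyz(depth, amount=0.1):
    """Turn an HxW depth image (values 0..1) into a flat Nx3 spread of
    XYZ positions, like Depth > Pipet > HSL combined with a Cross'd grid:
    x/y come from a regular grid over the texture, z from the depth value."""
    h, w = depth.shape
    # grid of x/y positions in -1..1, like crossing two LinearSpreads
    xs = np.linspace(-1.0, 1.0, w)
    ys = np.linspace(-1.0, 1.0, h)
    gx, gy = np.meshgrid(xs, ys)
    # z offset scaled, analogous to the shader's PosO.z += texColor.r * Amount
    gz = depth * amount
    return np.stack([gx.ravel(), gy.ravel(), gz.ravel()], axis=1)

# one XYZ triple per depth pixel: a 4x5 depth image yields a spread of 20 vectors
points = depth_to_xyz(np.zeros((4, 5)))
```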

I hope this helps, and I will add this: CONSIDER USING A PARTICLE SYSTEM to represent/visualize your point cloud; it would be super efficient and can look really good. I think leCloneur did something like this (the project was called FROM HELL, check on vimeo). Like him, I would use the CIANT PARTICLES system. Get it from the contributions section of this website (just google "vvvv ciant" and you'll find it). Effectively you would have to convert your point cloud (your XYZ 3d positions) to a flat texture through the DYNAMIC TEXTURE node, so it goes like this…
Your XYZ spread of positions > X into the red input pin of the dynamic texture, Y into the green input, Z into the blue input… Then you can connect this dynamic texture to one of the CIANT example patches in the help files (the one with the dynamic texture, effectively; sorry man, can't remember exactly). BUT if you are struggling with CIANT let me know, as I use it all the time and am quite fluent with it.
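The X-into-red, Y-into-green, Z-into-blue packing described above can be sketched like this in Python/NumPy (assuming the positions are already normalized to 0..1 and the texture wants 8-bit pixel data; the function name is hypothetical):

```python
import numpy as np

def pack_xyz_to_rgb(points):
    """Pack an Nx3 spread of XYZ positions (assumed in 0..1) into flat
    8-bit RGB pixel data: X -> red, Y -> green, Z -> blue.

    This mirrors feeding the X/Y/Z spreads into the colour input pins
    of a DynamicTexture so a GPU particle system can read them back."""
    rgb = np.clip(points, 0.0, 1.0) * 255.0
    return rgb.astype(np.uint8).ravel()

# one position becomes one RGB pixel
pixels = pack_xyz_to_rgb(np.array([[0.0, 0.5, 1.0]]))
```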

FINALLY: There is actually a special module/bunch of nodes in the CIANT contribution ESPECIALLY MADE FOR KINECT POINT CLOUD VISUALISATION!!! However, I personally could not make it work and had to use the more straightforward dynamic texture method… Can't remember why, but anyway, if you can make it work that would be the ideal solution, as this CIANT module is especially developed for Kinect point cloud representation. I hope it helps bro; if any of this is confusing, please let me know and I will try to clarify things… Safe

I think the Pipet (EX9.Texture) node approach proposed by @evvvvil is definitely worth a try - the only downside is that Pipet requires a lot of processing power. And I also agree with him about using a particle system. That's what they are there for =)

I just wanted to add something that came to my mind while reading this thread: you could probably change the shader written by @herbst… I'm not 100% sure if it works, but at some point the shader has to calculate the offset of the vertices, right?

Probably here

//offset the z coordinate
PosO.z += texColor.r * Amount;

So all you would have to do is output the spread through a new pin. That approach of course requires you to know a bit about shader and plugin coding in vvvv. And I am not sure whether you can output a spread from a shader, or whether that's only possible in a plugin.

@dl-110 is correct, the Pipet is a little processor-heavy, but it can still work really well. Using a shader will be GPU-accelerated, obviously, so that's good. I think it doesn't matter how you gather your point cloud XYZ data - you can use the Pipet method as I explained above, or you can use a shader - it's what you do next that's important (with the particle system)… So you could JUST use your shader to get the positions of the vertices and then pass them to a DynamicTexture and on to the CIANT particle system, or, like I said, the Pipet method and then again DynamicTexture > CIANT.
Sorry, I am new to shader programming and only learning it all now, but I "think" you can kick out spreads, which could be the XYZ vertex positions, into DynamicTexture and CIANT. Anyway, let us know how you get on. Peace