Question about pointcloud

Hiya, I am a vvvv beginner.
I am trying to use Depth (Kinect Microsoft) and a "VectexData.fx" shader (which refers to @herbst) to display the pointcloud.
But the results I get right now are shown below.
Obviously I have failed somewhere.
I hope someone can help me achieve the pointcloud with vvvv.
Thanks

//@author: vvvv
//@help: draws a mesh with a constant color
//@tags: template, basic
//@credits:

// --------------------------------------------------------------------------------------------------
// PARAMETERS:
// --------------------------------------------------------------------------------------------------

//transforms
float4x4 tW: WORLD;        //the model's world matrix
float4x4 tV: VIEW;         //view matrix as set via Renderer (EX9)
float4x4 tP: PROJECTION;   //projection matrix as set via Renderer (EX9)
float4x4 tWVP: WORLDVIEWPROJECTION; //all 3 premultiplied 
//texture
texture Tex <string uiname="Texture";>;
sampler Samp = sampler_state    //sampler for doing the texture-lookup
{
    Texture   = (Tex);          //apply a texture to the sampler
    MipFilter = LINEAR;         //set the sampler states
    MinFilter = LINEAR;
    MagFilter = LINEAR;
}; 

//texture transformation marked with semantic TEXTUREMATRIX 
//to achieve symmetric transformations
float Amount = 0.1;
#define ArrSize 20
float2 Noise[ArrSize];
float4x4 tTex: TEXTUREMATRIX <string uiname="Texture Transform";>;
//the data structure: "vertexshader to pixelshader"
//used as output data of the VS function
//and as input data of the PS function
struct vs2ps
{
    float4 Pos  : POSITION;
    float2 TexCd : TEXCOORD0;
};

// --------------------------------------------------------------------------------------------------
// VERTEXSHADERS
// --------------------------------------------------------------------------------------------------

vs2ps VS(
    float4 PosO  : POSITION,
    float4 TexCd : TEXCOORD0)
{
    //declare output struct
    vs2ps Out;
    //get the color in the texture
    float4 texColor = tex2Dlod(Samp, float4(TexCd.xy, 0, 0)); //sample explicitly from mip level 0
    //offset the z coordinate
    PosO.z += texColor.r * Amount;
    PosO.xy += Noise[TexCd.x * (ArrSize-1)];
    //transform position
    Out.Pos = mul(PosO, tWVP);
    //transform texturecoordinates
    Out.TexCd = mul(TexCd, tTex);
 
    return Out; 
}

// --------------------------------------------------------------------------------------------------
// PIXELSHADERS:
// --------------------------------------------------------------------------------------------------

// --------------------------------------------------------------------------------------------------
// TECHNIQUES:
// --------------------------------------------------------------------------------------------------

technique TSimpleShader
{
    pass P0
    {
        VertexShader = compile vs_3_0 VS();
        PixelShader  = null;
    }
}

pointcloud of kinect.zip (6.4 kB)

Hi carllx!

Unfortunately I don’t have a Kinect right now so I can’t recreate your setup. But I feel like your Kinect image looks pretty dark…
So I tested your patch with a Kinect depth image I found online and everything seems to work =)

As you can see, I simply used a FileTexture (EX9.Texture) node, connected its texture output to the for_cloudpoint node, and voilà - pointcloud =)

Maybe you wanna try the same. That way we can pretty much limit the problem to the Kinect or the Kinect node.

Cheers!

Pointcloud with static depth image (228.6 kB)

Hi,

I've made myself a pointcloud setup using 2 user contributions, one of which I cannot find anymore these days, unfortunately.

Here's the setup, neatly arranged and commented… but not properly tested yet, since I don't have my Kinect at home right now. It should work though - it always did for me.

Oh, if you want to try fixing your setup, you might want to try a ChangeFormat (EX9.Texture) node on the Kinect depth image. It's usually L16 grayscale, and I've had that be problematic once.
(I think it shouldn't be in your case… but it doesn't hurt to try.)
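In case the L16 format is what's breaking the displacement inside your shader, you could also try rescaling the sampled value before offsetting the vertex. Just a rough sketch, assuming the L16 texture stores raw Kinect depth in millimetres (so a sample of 1.0 would mean 65535 mm) - I haven't checked whether that's really what the Depth node writes:

    //inside VS(), instead of "PosO.z += texColor.r * Amount;":
    //assumption: the L16 sample is raw depth in mm, normalized by 65535
    float depthMM = tex2Dlod(Samp, float4(TexCd.xy, 0, 0)).r * 65535.0;
    PosO.z += depthMM * 0.001 * Amount;   //convert to metres, then scale by Amount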

Oh, and: make sure you have the addonpack installed and that you use vvvv 32bit (a lot of nodes from the addons still need to be ported). Stuff that you find and download here depends on them all the time.

OK, good luck. Let me know if any of this worked for you.

Kinect Depth 2 PointCloud.zip (84.7 kB)

Hiya @dl-110 and @Szaben, thanks for trying to give me a hand even though you don't have a Kinect with you.
At present I get the depthmap in "L16 grayscale" format from Depth (Kinect Microsoft).
As @Szaben said, the "L16 grayscale" format is not the right way to get the pointcloud. I've tried all the different formats with ChangeFormat (EX9.Texture), but I still can't get the correct depthmap I wanted (like the one @dl-110 showed above).

Coincidentally, I also got the patch ( sites/default/files/user-files/KinectD2XYZ.zip ), which looks like the one @Szaben gave me above. After testing it, I can get a nice depthmap by adjusting the "Min Depth" and "Max Depth" parameters of the "KinectDepthGray" node, as shown below.
1. But I have no idea what the values of "Min Depth" and "Max Depth" should be based on (see my guess right after these questions).
2. Can I get a spread of vectors out of a Template (EX9.Effect)?
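(My guess for question 1 - just a guess, I haven't looked at the node's code: "Min Depth" and "Max Depth" are probably the nearest and farthest raw depth values that get mapped into the visible gray range, roughly like this hypothetical line:)

    //hypothetical sketch of what KinectDepthGray might do with the two pins
    float gray = saturate((rawDepth - MinDepth) / (MaxDepth - MinDepth));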

Thanks again @dl-110 and @Szaben, I can display the pointcloud right now.
To control the pointcloud freely, the next step will be finding a way to convert the depthmap into a spread of vectors.

Hi carllx,

glad you're making progress.

Though, in your setup, are you sure you want to set renderMode to "Point"? What "Point" does is display only the corners of a mesh - and depending on how your pointcloud shader works, it might produce 3 ghost points for every point you actually want rendered.
Try setting it back to "solid" and controlling the point size in another way.
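One way to do that (an untested sketch - the PSIZE output only has an effect when the geometry is actually rendered as points/point sprites, and the "PointSize" pin name is just my own):

    float PointSize = 3;   //size of each rendered point, exposed as a pin

    struct vs2ps
    {
        float4 Pos   : POSITION;
        float2 TexCd : TEXCOORD0;
        float  Size  : PSIZE;   //per-vertex point size, picked up by point rendering
    };

    //...and at the end of VS(), before "return Out;":
    //Out.Size = PointSize;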

So then… getting a spread of vectors: you can use the "KinectD2RGB" shader and then just use "Pipet (EX9)" on the resulting texture. Each pixel represents a 3D vector within the pointcloud (r=x, g=y, b=z).
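Roughly, such a shader writes one world-space position per pixel. This is not the actual KinectD2RGB code, just a sketch of the idea - the field-of-view constants are only rough Kinect values and the names are made up:

    //sketch: encode world xyz into rgb so Pipet can read it back as a spread
    texture DepthTex <string uiname="Depth Texture";>;
    sampler DepthSamp = sampler_state { Texture = (DepthTex); MinFilter = POINT; MagFilter = POINT; };
    static const float2 FoV = float2(1.0144686, 0.7898090); //approx. Kinect depth FoV in radians

    float4 PS_Depth2XYZ(float2 uv : TEXCOORD0) : COLOR
    {
        float z = tex2D(DepthSamp, uv).r;                //normalized depth
        float2 c = uv - 0.5;                             //center the image plane
        float x =  c.x * z * tan(FoV.x * 0.5) * 2.0;     //reproject to world x
        float y = -c.y * z * tan(FoV.y * 0.5) * 2.0;     //reproject to world y (flip v)
        return float4(x, y, z, 1);                       //r=x, g=y, b=z for Pipet
    }

    technique TDepth2XYZ
    {
        pass P0
        {
            PixelShader = compile ps_3_0 PS_Depth2XYZ();
        }
    }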

But beware! Handling your point data in a shader versus in a spread of vectors results in vastly different performance. While your graphics card (shader approach) can handle up to millions of points per frame, your CPU (spread approach) surely can't. I think a good machine has enough power to render a Kinect pointcloud on the CPU, but you might need to downscale the resolution - a lot - to get decent performance.

for more information on how to control a fully GPU based particle system, check out
particlesgpu-shader-library

…this is sort of advanced stuff, but if you wrote the shaders you are using yourself, it might be just the right place for you.

__
Do you just want to resize/move the entire pointcloud, or do you want individual points to react to something? Maybe define your goals more clearly and I'll be happy to help you out.
I myself had (still have) a project where I want all of the points from a pointcloud to interact with a second set of points in terms of attraction, but I got lost in the immense load of calculations necessary.
There are ways…! As I mentioned, you either go for a fully GPU-based particle system, or you reduce your point count drastically.

cheers
Szaben