HLSL, pixel-shader programming: newbie needs help with analysing a video texture

Hey everyone,
I am relatively new to vvvv and trying to realise a project with it. And I am stuck right now…

I need to analyse a video stream/video texture. The texture shows a laser line, which is projected horizontally onto a dark object. Because of the shape of the object, the laser line is curvy.

I want to analyse the vertical position of the laser line in each “column” of the video frame (or at least in 64 columns, i.e. every 5th pixel column in a 320x240 frame).

So far, I am able to isolate the laser line from the background by luminance and color. (The realisation of that is my first and only experience in shader programming…) But I have no idea how to get that data out. I think I need to program a pixel shader for that, but I don't know how.

In a crude way, it should be something like:

if (col.r has the highest value here)
       output the y-coordinate;
    else don't;


Can anyone help me with this? Is this idea possible to realize at all?

What should the shader really look like?


It's not possible to get the position data out of the shader.
Have a look at Pipet (EX9.Texture) instead.

Thanks a lot for the info.
I will try out the Pipet.

Is that a hardware limitation, btw? Can shaders/gfx cards only output textures and not “ordinary” data?

That is what I understood from bjoern's post.
But maybe someone could clarify this?

Is it a fact that shaders can only operate on pixels, but cannot extract any information about those pixels and make it available?

Because shaders are not designed for that… (sob…)

Hinsen, what you're trying to do might be possible with a pixel shader. If you want to detect 64 points you can output a texture 64 pixels wide by 1 pixel high. Normally a texture is packed with color data, but you can place location data in the texture instead.
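To illustrate the packing idea: colors in a shader are floats in the 0..1 range, so a pixel position can be stored in a channel by normalizing it against the texture size. A minimal sketch (the height of 240 is just the example frame size from above):

```hlsl
// Sketch: encode a detected row (0..239) into a color channel.
// Color channels hold 0..1 floats, so divide by the frame height.
static const float HEIGHT = 240.0;

float  detectedY = 120.0;                       // example: laser found at row 120
float4 col = float4(0, detectedY / HEIGHT, 0, 1); // Green channel = 0.5 -> row 120
```

On the patch side, reading that green value back and multiplying by the frame height recovers the original row.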

As a side note: you might run into an issue of doing too many texture lookups in HLSL. Definitely use at least pixel shader model 3. If vvvv supports it, PS4 doesn't have any limits on the number of lookups.

Mmmh ok, I'm sobbing too now… but thanks for the clarification.

Your idea sounds great! But how could I place location data into a texture?

FileTexture (with laser image) -> your HLSL plugin -> Renderer (width=64, height=1) -> DX9Texture -> Pipet -> RGB (Split)

In your HLSL plugin you loop over the columns of the input image and detect the location of the laser line. The plugin outputs colors where Red == X-coordinate and Green == Y-coordinate (although you really only need a Y-coordinate in this case).

Then the Pipet node reads the color values, and the RGB node splits them into numbers which equal your coordinates.
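The loop above could look roughly like this. This is only a sketch, not tested in vvvv: the sampler name, the fixed 320x240 input size, and the technique name are all assumptions, and it needs ps_3_0 because of the per-pixel texture-lookup loop mentioned earlier:

```hlsl
// Sketch: pixel shader for a 64x1 render target. Each output pixel
// scans one column of the input laser image and reports the row with
// the strongest red value. Names/sizes are assumptions, not vvvv code.
texture InputTex;
sampler Samp = sampler_state { Texture = (InputTex); };

static const float HEIGHT = 240.0;  // assumed input frame height

float4 FindLine(float2 uv : TEXCOORD0) : COLOR
{
    // uv.x of the 64x1 target picks one input column (every 5th pixel).
    float bestY   = 0;
    float bestRed = 0;

    // Scan the column top to bottom; ps_3_0 is needed for this many
    // dependent texture lookups in a loop.
    for (float y = 0; y < HEIGHT; y++)
    {
        float2 pos = float2(uv.x, (y + 0.5) / HEIGHT);
        float  red = tex2D(Samp, pos).r;
        if (red > bestRed)
        {
            bestRed = red;
            bestY   = y;
        }
    }

    // Pack the result: Red = X (column), Green = Y (row), both
    // normalized to 0..1 so Pipet + RGB (Split) can read them back.
    return float4(uv.x, bestY / HEIGHT, 0, 1);
}

technique TFindLine
{
    pass P0 { PixelShader = compile ps_3_0 FindLine(); }
}
```

Downstream, multiplying the green value from Pipet/RGB by the frame height (240 here) gives the pixel row of the laser line in that column.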

Ah ok, I understand that now and I really like the idea. I think I can realize every part of that, except the HLSL plugin… My programming skills are very limited…

First of all, I don't know how to loop over a column…
“Inside of a column”: for the location detection the red value could be used (the laser is red): where the amount of red is highest, there is the laser line.
But how do I figure out the position as a number/value?
- Maybe HLSL offers a fitting method for something like that?
- Maybe it could be done with something like a sorting algorithm? E.g. counting the pixels from the bottom up until the one with the highest red value is reached; the green value of the corresponding pixel in the “output texture” would be increased for every counted pixel in the “input texture”…
- Or maybe there is some other way that I don't see?

Translating those ideas into HLSL code goes far beyond my knowledge. Could you please help me out here?

That's definitely not trivial stuff.
But thankfully our incredible @dottore shared his knowledge with us:
see the particlesgpu-library-guide

Thanks kalle! I'll have a look into that.