Parse Texture2D (DX11) pixel data

Any good examples of parsing or working with texture pixel data? I’ve already looked at

but I can’t find anything that gives me a good example of actually doing calculations on texture pixel data and creating an output stream or texture. I’m trying to run a precomputation on a texture which will output two other textures.

So far, I’m trying to use the PixelData node to generate a stream of bytes, feed that into a C# node which reads the stream into a buffer, and then processes it.
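In case it helps, here’s roughly what I mean — a minimal sketch, assuming the incoming stream is tightly packed 8-bit RGBA (the class and method names are my own):

```csharp
using System;
using System.IO;

static class PixelParser
{
    // Read a raw byte stream into a per-pixel RGBA buffer.
    // Assumes R8G8B8A8 (4 bytes per pixel) with no row padding;
    // the actual layout depends on the source texture's format.
    public static byte[,][] ToPixels(Stream input, int width, int height)
    {
        var buffer = new byte[width * height * 4];
        int read = 0;
        while (read < buffer.Length)
        {
            int n = input.Read(buffer, read, buffer.Length - read);
            if (n == 0) throw new EndOfStreamException("stream shorter than expected");
            read += n;
        }

        // Index as pixels[y, x] = { r, g, b, a }
        var pixels = new byte[height, width][];
        for (int y = 0; y < height; y++)
            for (int x = 0; x < width; x++)
            {
                int o = (y * width + x) * 4;
                pixels[y, x] = new[] { buffer[o], buffer[o + 1], buffer[o + 2], buffer[o + 3] };
            }
        return pixels;
    }
}
```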

I’ve also considered using compute shaders to do this instead, but getting the logic to fit into their parallel execution behavior is something of a headache.

Also, why is the stride for something like a 1x1 Texture2D listed as 128 bytes by PixelData? That seems excessive even with padding…
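For what it’s worth, when a reported stride (row pitch) is larger than width × bytesPerPixel, the extra bytes are usually per-row alignment padding and can simply be skipped when copying. A sketch of that (names are my own):

```csharp
using System;

static class PitchUtils
{
    // Copy pitched rows into a tightly packed buffer, dropping the
    // per-row padding. Assumes rowPitch >= width * bytesPerPixel;
    // the padding bytes carry no pixel data.
    public static byte[] RemoveRowPadding(byte[] mapped, int width, int height,
                                          int bytesPerPixel, int rowPitch)
    {
        var tight = new byte[width * height * bytesPerPixel];
        for (int y = 0; y < height; y++)
            Array.Copy(mapped, y * rowPitch,
                       tight, y * width * bytesPerPixel,
                       width * bytesPerPixel);
        return tight;
    }
}
```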

Hey, well this type of stuff is a little bit hard…
Compute shaders are 99% of the time the way to go for processing textures; otherwise you have to use readback, and that’s a huge overhead. There are also a few tricks made specially for image processing on the GPU, so maybe you can start by being more specific about what you are actually trying to do…

this one has quite a nice example: DX11 get darkest pixel from a given texture

Thanks for the reply!

otherwise you have to use readback and that’s a huge overhead

What I’m proposing is to do the processing on the CPU (a C# node), so no readback is involved. In fact, it’s the opposite - I would likely be using a readback if I were to use a compute-based approach. Also, this is merely an offline precomputation, so performance isn’t a huge concern for me. One and done, not per-frame.

being more specific on what you are actually trying to do

I’m processing an environment map to use for importance sampling in a Monte Carlo-based path tracer. It involves extracting and summing up luminance across the image, among other things, which means big arrays (4k+ texture resolution), and each compute thread would need access to the shared sum, so I can’t even really use a groupshared variable approach, since one workgroup won’t be big enough. I’d have to split the computation up into multiple passes and do it that way.
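For context, the CPU-side reduction I have in mind is basically this — a sketch assuming a tightly packed RGB float buffer and Rec. 709 luma weights (the per-row sums are the marginal you’d later normalize into a CDF for importance sampling):

```csharp
static class LuminanceReduce
{
    // Sum luminance over an interleaved RGB float image using
    // Rec. 709 luma weights. Returns the total plus per-row sums,
    // which is exactly what a multi-pass GPU reduction would compute.
    public static (double total, double[] rowSums) SumLuminance(
        float[] rgb, int width, int height)
    {
        var rowSums = new double[height];
        double total = 0;
        for (int y = 0; y < height; y++)
        {
            double row = 0;
            for (int x = 0; x < width; x++)
            {
                int o = (y * width + x) * 3;
                row += 0.2126 * rgb[o] + 0.7152 * rgb[o + 1] + 0.0722 * rgb[o + 2];
            }
            rowSums[y] = row;
            total += row;
        }
        return (total, rowSums);
    }
}
```

On the CPU this is trivially sequential, which is exactly why the single-threaded C# route is tempting for a one-off precompute.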

this one has quite nice example DX11 get darkest pixel from a given texture

Thanks! I did see that earlier while searching. It’s a bit trivial for my use case, though - my compute pipeline would involve multiple passes, shared memory, and maybe even some fencing, so it’s quite a bit trickier.

Readback is the operation to read GPU memory on the CPU, and a Texture is a GPU resource… so…


well, the earlier example does exactly that, except it’s looking for the least bright pixel; for summing you can use InterlockedAdd (InterlockedAdd function (HLSL reference) - Win32 apps | Microsoft Learn)

here you can look at these two examples; at least they will give you an idea of how you can compute luminance

this isn’t helpful unless you can write down the exact steps you want for the operation, otherwise it won’t really work… (10.9 KB) that one calculates the total brightness of the pixels

VertexCount (DX11.Geometry).zip (4.9 KB) shows how to use multiple slices

Thanks again for the reply!

Readback is the operation to read GPU memory on the CPU, and a Texture is a GPU resource… so

Here’s a thought experiment if you don’t believe me. How do you think textures are created in the first place? Do you think a GPU or a readback is involved? It’s just data. If you upload that data to a shader, then it becomes GPU memory.

For example, the AsRaw texture node converts a texture to a raw bytestream. There is no GPU involvement, and thus no readback. I’m trying to do something similar, just using the CPU.

Thanks for the examples! I am pretty aware of the InterlockedAdd functionality and how slices work in VVVV. Getting my computation to work on compute will be difficult, but I don’t need somebody to implement the research paper for me, I’m just looking for help on how to modify textures in-place in CPU memory using C#.

hi @polyrhythm,

if i get you right, you are searching for a neat datatype that lets you get and set arbitrary pixels in an image so that you can implement your preprocessing algorithm…

i’m not sure why one of the basic image datatypes like Bitmap doesn’t work for you, but you might want to try the imagepack, which is based on Emgu CV and therefore offers quite quick pixelwise operations, as seen here

a drawback here is maybe that it’s a bit tedious to gather all the references to make the above thing compile.

so, you can do the same with vl, which also has a growing opencv-wrapper. a quickstart might be this:

if you want to write in c# instead of patching vl, you could do that similar to the first example, but with the methods OpenCvSharp exposes, and reference OpenCvSharp’s assembly. then compile to a dll, which can then be used as a node in vl. i guess @ravazquez can easily give useful instructions on how to do that, since he has also written some nodes for VL.OpenCV in c#

Thanks @sebl I’ll give that a try. Also a great excuse to try and get some use out of VL, which I haven’t messed with much yet.

Hey @polyrhythm, if you need to write C# nodes for VL to take advantage of OpenCV’s operations, have a look at OpenCVSharp and its example library. This is the C# wrapper we are using to tap into OpenCV’s power.

Once you know what you need you can basically make your own dll with the functionality you need hosted in static classes/methods (easiest and fastest way to get them into VL) and then reference the dll from your VL document.
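A minimal sketch of that static-class pattern — the namespace, class, and method names here are hypothetical; the point is just that a public static method like this becomes a node once the compiled dll is referenced from your VL document:

```csharp
namespace MyTextureUtils
{
    public static class PixelMath
    {
        // Per-pixel luminance of an interleaved RGBA byte buffer,
        // using Rec. 709 luma weights, normalized to 0..1.
        public static float[] Luminance(byte[] rgba, int width, int height)
        {
            var result = new float[width * height];
            for (int i = 0; i < result.Length; i++)
            {
                int o = i * 4;
                result[i] = (0.2126f * rgba[o]
                           + 0.7152f * rgba[o + 1]
                           + 0.0722f * rgba[o + 2]) / 255f;
            }
            return result;
        }
    }
}
```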

For reference you can have a look at how we did it for VL.OpenCV here also.

And of course hit me up here or on riot if you need any assistance.



Also for a more general introduction to working with C# and VL have a look here:

Thanks for the links @ravazquez!
