Shader Description for Newbies

Please review! This is for the vvvv beginner course and it’s how I’d describe the whole matter. I thought it would be a good time to hear from you if there are concepts that I got completely wrong before forwarding the alternative facts to others.

Shader Programming with vvvv

Shaders are programs that run on your graphics card (GPU) instead of your central processing unit (CPU). They were originally used when real-time graphics contained many objects that had to be calculated quickly. By now they are used to bring basically anything on display in real-time graphics.

GPU and CPU Programming

On the CPU you cannot run many tasks truly in parallel. Even if your computer is a quad-core (= 4 CPU cores), the tasks still need to be scheduled and distributed across those few cores.

The GPU is built for parallel processing. You can atomize your program into the smallest possible tasks and let thousands of them each execute one small thing at the same time.
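As a loose analogy (a Python sketch with illustrative names, not GPU code), this is what such a per-element workload looks like: a pure function that only sees its own input, so every call can run on a different core.

```python
from concurrent.futures import ThreadPoolExecutor

def brighten(pixel):
    # A pure per-element task: it only sees its own value,
    # never its neighbours, so every call is independent.
    return min(pixel + 40, 255)

pixels = [0, 100, 200, 250]
# No task depends on another, so they can all be dispatched
# at once -- this is what a GPU does at a much larger scale.
with ThreadPoolExecutor() as pool:
    result = list(pool.map(brighten, pixels))
print(result)  # [40, 140, 240, 255]
```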

Example: Fragment shader
It's a program that calculates the colour of each pixel on your screen for one frame. At Full HD, i.e. 1920x1080 pixel resolution, that is about 2 million workloads, which the GPU can process massively in parallel (the hardware doesn't literally run all of them at once, but the fragment shader programming model gives you that abstraction).




(Images by https://thebookofshaders.com/, a useful resource)
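As a mental model (a Python sketch, not real shader code), a fragment shader is one function that gets invoked once per pixel position; at Full HD that means roughly two million independent invocations per frame:

```python
WIDTH, HEIGHT = 1920, 1080  # Full HD

def fragment_shader(x, y):
    # One invocation per pixel; it only knows its own
    # coordinate and returns a colour (r, g, b) in 0..1.
    return (x / (WIDTH - 1), y / (HEIGHT - 1), 0.0)

# On the CPU this would be a 2-million-iteration loop;
# the GPU launches them as parallel workloads instead.
print(WIDTH * HEIGHT)  # 2073600
```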

The logic of shader programs is different from the CPU programs we have written so far.
The single entities = buffer resource elements (pixels, particles, …) don't know anything about each other. The only thing they know about themselves are the attributes you attach to them (Position, Size, Velocity, Colour, …)

In a Fragment or Pixel Shader, each fragment only knows its own position. The art of writing a good shader is using maths so that this one piece of information is enough to write a function that evaluates a different colour for each pixel and thereby creates an image.
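For example, this hypothetical Python sketch derives a colour from nothing but the pixel's own position, producing a smooth gradient:

```python
WIDTH, HEIGHT = 640, 480

def gradient_shader(x, y):
    # Only the fragment's own position is available;
    # maths turns it into a colour.
    r = x / (WIDTH - 1)   # black -> red from left to right
    g = y / (HEIGHT - 1)  # black -> green from top to bottom
    return (r, g, 0.0)

image = [[gradient_shader(x, y) for x in range(WIDTH)] for y in range(HEIGHT)]
print(image[0][0])    # (0.0, 0.0, 0.0) top-left corner
print(image[-1][-1])  # (1.0, 1.0, 0.0) bottom-right corner
```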

Check this out to see how creative you can get: https://www.shadertoy.com/

This shader artist has taken the craft of writing maths functions that render aesthetic 3D scenes to a whole new level: https://www.youtube.com/@InigoQuilez

Fragment, Vertex and Compute Shaders

Fragment Shader: Calculates the colour of each pixel on the screen.
Vertex Shader: When calculating a 3D scene, this shader runs at an early stage of your computer's graphics pipeline. It calculates values per vertex, for example colour.
Example: the brightness of a vertex according to the angle of the light and the amount of light that reaches it.
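That brightness rule is usually Lambert's cosine law; here is a hedged Python sketch of the per-vertex calculation (not vvvv's actual shader code):

```python
import math

def vertex_brightness(normal, to_light):
    # Lambert's cosine law: brightness = max(0, cos(angle)),
    # where the angle lies between the surface normal and the
    # direction towards the light (both assumed unit-length).
    dot = sum(n * l for n, l in zip(normal, to_light))
    return max(0.0, dot)

# Light shining straight onto an upward-facing vertex:
print(vertex_brightness((0, 1, 0), (0, 1, 0)))  # 1.0
# Light arriving at 60 degrees delivers half the brightness:
angle = math.radians(60)
print(round(vertex_brightness((0, 1, 0), (math.sin(angle), math.cos(angle), 0)), 6))  # 0.5
```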


Compute Shader: This one is detached from directly calculating graphics. It uses the logic of shader code and the GPU's infrastructure to calculate millions of tasks at the same time. These shaders are used to accelerate AI tasks, or in our case, for particle systems.
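For a particle system, the compute-shader idea boils down to one update function applied to each particle independently; a minimal Python sketch with illustrative attribute names:

```python
DT = 1.0 / 60.0          # one frame at 60 fps
GRAVITY = (0.0, -9.81)   # simple downward acceleration

def update_particle(p):
    # Each particle only knows its own attributes; on the GPU,
    # one compute-shader thread would run this per particle.
    vx = p["vel"][0] + GRAVITY[0] * DT
    vy = p["vel"][1] + GRAVITY[1] * DT
    return {"pos": (p["pos"][0] + vx * DT, p["pos"][1] + vy * DT),
            "vel": (vx, vy)}

particles = [{"pos": (0.0, 1.0), "vel": (1.0, 0.0)} for _ in range(3)]
# The GPU would dispatch all of these as parallel threads:
particles = [update_particle(p) for p in particles]
print(particles[0]["vel"][0])  # 1.0 (no horizontal force)
```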

Signed Distance Field

A powerful way to make use of the fact that each element only knows its own attributes is the signed distance field, or SDF:

Think about how you would draw a white circle on a black background with a fragment shader. You decide on a centre point and a radius. Your shader code then only contains a function that calculates the distance from each fragment to the centre point. If it is below the given radius, the fragment is drawn white, otherwise black.
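That circle recipe as a hypothetical Python sketch (centre, radius, and resolution are arbitrary example values):

```python
import math

WIDTH, HEIGHT = 64, 64
CENTRE = (32.0, 32.0)
RADIUS = 16.0

def circle_shader(x, y):
    # Each fragment only uses its own position (x, y):
    dist = math.hypot(x - CENTRE[0], y - CENTRE[1])
    return 1.0 if dist < RADIUS else 0.0  # white inside, black outside

image = [[circle_shader(x, y) for x in range(WIDTH)] for y in range(HEIGHT)]
print(image[32][32], image[0][0])  # 1.0 0.0 (centre white, corner black)
```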

You can think of SDFs as functions that live in 2D or 3D space. An element "subscribes" to the function when its attributes allow it to do so. That way you don't have to calculate 3D scenes as meshes or groups of objects; they simply appear as whatever you want them to be according to your shader code.
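The "signed" part means the function tells you not just how far an element is from a shape's surface, but also on which side: negative inside, zero exactly on the boundary, positive outside. Again a Python sketch, for a circle:

```python
import math

def sd_circle(x, y, cx, cy, radius):
    # Signed distance to a circle's boundary:
    # negative inside, zero on it, positive outside.
    return math.hypot(x - cx, y - cy) - radius

print(sd_circle(0, 0, 0, 0, 5))  # -5.0 (centre, deep inside)
print(sd_circle(5, 0, 0, 0, 5))  # 0.0  (exactly on the boundary)
print(sd_circle(8, 0, 0, 0, 5))  # 3.0  (outside)
```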


2 Likes

Looks pretty good and explains the basics. At first glance, my brain asked why SDFs are introduced directly as part of a newbie intro?

The graphics and compute pipeline practically applied to the vvvv shader pipeline can be learned in my hands-on workshop here:

1 Like

nice read. just some nitpicking ahead

the “many objects” is correct, however, no instancing or similar thing is mentioned below.
Second sentence maybe something more general like “They are used to bring anything on display in realtime graphics.”
Esp. in ‘modern’ graphics apis everything is done with shaders (no more fixed functions like dx9), it’s just often hidden away for usability.


GPUs by far don’t run these numbers simultaneously (yet). The (fragment) shader programming model just provides this abstraction.
"When you use Full HD, so 1080x1920 pixel resolution, you have about 2 million workloads, which the GPU can process massively in parallel."

1 Like

Thanks so much, I changed the second one.
The first change then becomes too broad for me.

I think I’ll do a combo:
They were originally used when real-time graphics contained many objects and should be calculated quickly. By now they are used for bringing basically anything on display in realtime graphics.