UV coordinates in prerendered footage - sRGB issues

hi all,

I need to put faces into prerendered footage, fast and idiot-proof. My plan goes as follows: I create a texture atlas containing the faces; positions are predetermined. The footage is rendered with UV coordinates instead of the faces.

I prototyped this, but it seems I have problems with sRGB. I have a shader outputting clean UV coords, and I transform them from sRGB to linear before writing a BMP.

In my AE composition I animate my UV coordinates, and I exported it as uncompressed AVI.
Playback gives me sRGB again, so I convert it back to linear before feeding it into my shader.

I guess that there are hidden colour conversions I don't know about.
When I skip all the sRGB-to-linear steps it works, and the result looks like the expected sRGB gamma issue:
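For reference, here is a minimal Python sketch of the standard sRGB transfer functions (per channel, values in [0, 1]); it shows how a single unwanted conversion shifts a UV value far enough to sample a completely different spot in the atlas:

```python
def srgb_to_linear(c):
    # piecewise sRGB decoding (EOTF), per IEC 61966-2-1
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    # the inverse encoding
    return c * 12.92 if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

# one unwanted sRGB->linear pass moves a UV of 0.5 to ~0.214
print(srgb_to_linear(0.5))                   # ~0.2140
# a matched pair of conversions round-trips cleanly
print(linear_to_srgb(srgb_to_linear(0.5)))   # ~0.5
```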


I did not vote for this guy.

not sure if that is the primary issue, but R8G8B8A8_UNorm is not really a good format to work with UVs.
It gives you a resolution of only 256 steps per channel, and you waste 16 bits per pixel on unused channels.
R16G16_Float would be ideal; UV gradients should be smoother. basti would probably not look pixelated and scrambled.
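To put rough numbers on the precision point (the atlas size and sample value below are made up for illustration), here is a quick Python comparison of 8-bit UNorm quantization versus a float16 round-trip:

```python
import struct

def quantize_unorm8(u):
    # what storing a UV in an 8-bit UNorm channel does: 256 levels
    return round(u * 255) / 255

def quantize_float16(u):
    # round-trip through a half float ('e' = IEEE binary16)
    return struct.unpack('e', struct.pack('e', u))[0]

atlas_size = 2048          # hypothetical atlas resolution in texels
u = 0.123456               # arbitrary UV coordinate

err8 = abs(quantize_unorm8(u) - u) * atlas_size
err16 = abs(quantize_float16(u) - u) * atlas_size
print(f"8-bit UNorm error: {err8:.3f} texels")   # several texels off
print(f"float16 error:     {err16:.3f} texels")  # well below one texel
```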

Thank you for pointing that out! Yes, of course. Btw, how can I set a Stride pixelshader to 16 bit? Does it work like this upstream, or do I have to declare it in the shader, like in beta?


follow me for more recipes

Right-Click your shader and enable “SetOutputFormat”.


you would also need a video codec that has a high bit depth per colour channel and also stores the colours in linear space. I don't know if saving as bitmap will work in 16 bit either; I think bitmaps only know 8 or 32 bit, but I'm not sure on that one. You'd have to inspect the BMP with GIMP or Photoshop…

OT: @tonfilm what is the RenderFormat doing?

It affects how colors get written into the texture by the shader.

The texture has two use cases: a RenderTargetView (RTV) that the shader renders into, and a ShaderResourceView (SRV) that other shaders read from. OutputFormat sets the SRV and RenderFormat the RTV. They must match in bit size, because the underlying resource is the same, but can otherwise have different formats, including different sRGB flags, which is what happens when you set DontConvertToSRgbOnWrite, for example.
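That mismatch can be modelled in a few lines of Python (the boolean flags below are just stand-ins for the sRGB view flags, not real API); it shows where the hidden conversion sneaks in when write and read flags disagree:

```python
def linear_to_srgb(c):
    return c * 12.92 if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

def srgb_to_linear(c):
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def write_texel(value, rtv_is_srgb):
    # an sRGB render target view encodes linear->sRGB on write
    return linear_to_srgb(value) if rtv_is_srgb else value

def read_texel(stored, srv_is_srgb):
    # an sRGB shader resource view decodes sRGB->linear on read
    return srgb_to_linear(stored) if srv_is_srgb else stored

uv = 0.5
print(read_texel(write_texel(uv, True), True))    # flags match: ~0.5
print(read_texel(write_texel(uv, True), False))   # mismatch: ~0.735
```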


so you actually have to use both of them. set bit depth for the shader and then change the texture format.


no, as soon as the data is squashed into 8 bits you can't get it back to 16 bit. Setting the output format is enough; the render format is derived from the output format if it is set to None and DontConvertToSRgbOnWrite is not used in the shader.

no, I mean when I set the shader to 16-bit float it still writes an 8-bit DDS


as I said, you need to set the output format first, then the render format only if you really need something different from the output format. Don't get confused by björn's question, you almost never have to touch the render format setting.

^^ ok, I just overlooked the output format pin, got it!
Is float16 the correct format? My DDS won't open.

maybe it needs 32 bits then? or another software? but the dds pixel formats are usually the dxt or bc ones…

32 bit does not work either, same error. I tried Photoshop, GIMP and XnConvert. 8-bit unorm_srgb DDS files do open. Maybe they don't support 16-bit float DDS? Any ideas how I can get back to TIFF or TGA for After Effects?

Or can I do this directly in after effects somehow?

Try https://getpaint.net

Or of course TexConv. The executable comes with TexConvGui. It can also output TIF and TGA for example.
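In case it helps, a conversion along these lines should work with texconv (untested sketch; the filename is a placeholder, and check `texconv -help` for the exact option spelling in your build):

```
texconv -ft tif -f R16G16B16A16_UNORM -y myuvs.dds
```

Here `-ft` picks the output file type, `-f` the pixel format, and `-y` overwrites existing files.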

paint.net can open the 16-bit DDS, but only saves as 32 bit.
I tried exporting a 32-bit DDS, but when saving it as 32-bit TIFF, gamma reads it as 8 bit…

thank you for the tip, I think I will have to check out texconv.
and then next week find the After Effects filter :D

hey guys, thank you for the support!

I managed to create the UVs using Blender;

it looks weird, but when exporting without gamma corrections it's correct.


some nerd facts about DDS and BC compression ahead:
I started digging, because I remember having used 16F and 32F formats with DDS successfully.

DXT/BC compression does indeed not support RGBA 16- or 32-bit float. The best you can get is BC6H, an HDR format with RGB 16-bit float.

the DDS file format, however, does list R, RG & RGBA 16/32_Float, but with an asterisk saying:

> A robust DDS reader must be able to handle these legacy format codes. However, such a DDS reader should prefer to use the "DX10" header extension when it writes these format codes to avoid ambiguity.

according to this Wikipedia table,
16 bit should be supported by BMP, PNG, TGA & TIFF,
32 bit by BMP, TGA & TIFF
(I just chose the ones commonly used around here).