No, it's geometry.
Mixing textures is like working with Photoshop layers.
DX (and OpenGL works the same way) does "more" than Photoshop when it comes to blending, in the sense that you can blend not only the pixels of textures, but also the pixels that represent the geometry (on which the textures are applied).
That's why you have both Blend (Renderstate) and Blend (DX11.TextureFX): the first is for geometry, the other is for textures.
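If it helps to see the math, the geometry blending that the Blend renderstate configures is just the GPU's output-merger equation applied as each primitive is drawn. A toy sketch in Python of classic alpha blending (SrcAlpha / InvSrcAlpha is my assumed blend mode, not necessarily what your patch uses):

```python
def alpha_blend(src_rgba, dst_rgb):
    """Standard output-merger alpha blending:
    out = src.rgb * src.a + dst.rgb * (1 - src.a)."""
    r, g, b, a = src_rgba
    return tuple(s * a + d * (1.0 - a) for s, d in zip((r, g, b), dst_rgb))

# A half-transparent red quad drawn over an already-rendered cyan pixel:
print(alpha_blend((1.0, 0.0, 0.0, 0.5), (0.0, 0.5, 1.0)))  # → (0.5, 0.25, 0.5)
```

The key point: this happens per drawn primitive, against whatever is already in the renderer's buffer, which is why it belongs to the geometry side.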
Why don't you create three renderers? In the first, draw a red [1,0,0] quad and a cyan [0,0.5,1] quad, animate them (rotating on one axis), and use the Blend renderstate there. In the second, draw just one quad with changing colors. Then take the textures output by these first two renderers and blend them with the TextureFX, and feed the result to a fullscreen quad in the third renderer to see it.
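The TextureFX step in the middle of that patch is essentially a per-pixel mix of two finished images, with no geometry involved anymore. A toy sketch (a plain linear mix is my assumption; the Blend TextureFX offers many other modes):

```python
def blend_textures(tex_a, tex_b, factor=0.5):
    """Linear per-pixel blend of two same-sized 'textures'
    (here: flat lists of RGB tuples), like a simple mix-style
    texture blend between two renderer outputs."""
    return [
        tuple(a * (1.0 - factor) + b * factor for a, b in zip(pa, pb))
        for pa, pb in zip(tex_a, tex_b)
    ]

red  = [(1.0, 0.0, 0.0)] * 4   # stand-in for renderer 1's output (2x2 pixels)
cyan = [(0.0, 0.5, 1.0)] * 4   # stand-in for renderer 2's output
print(blend_textures(red, cyan)[0])  # → (0.5, 0.25, 0.5)
```

Same arithmetic as the renderstate case, but applied once to whole textures after rendering, instead of per primitive during rendering.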
Have a few nice hours.