I found out that it should be possible to do single-pass multitexturing, which gave me the idea to write a (test) plugin that blends two textures together, depending on a given blend factor.
What I do is set the texture that the node outputs as render target, and then draw a quad (triangle strip) to it with the two textures mapped onto it.
Now I have been trying to get it working, but for some reason I can’t get the texture coordinates right.
In the attached test file, you will see what the texture should look like on the RIGHT, and what I am currently getting on the LEFT.
I added some inputs to the node that influence the texture coordinates when creating the quad. “Switch textures” shows you the second texture (the one I eventually want to blend with the first).
(“Enable blend” and “blend level” do work to a certain extent, but the second texture doesn’t show either; I suspect this is related to the texture-coordinate problem too.)
So the question to anyone with a good understanding of Direct3D is: what am I doing wrong that the texture mapping doesn’t take my texture coordinates into account (see the CreateQuadTexturedVertexBuffer and UpdateTexture functions)? I’m stuck…
The weird thing is, when I change the Y components of the texture coordinates, I can see the texture (which is skewed and rotated 90 degrees) move up and down, but changing ANY of the X components doesn’t seem to have any effect at all.
That’s why I really don’t understand what I am doing wrong; it’s as if the wrong parameters are being read somewhere…
You can look at the code, it’s included in the file I added to my first post. It’s basically no more than “setup the world, set the texture, draw the quad”.
I am drawing a triangle strip (like most examples I found by googling, so no indices needed, I guess?), culling off, projection matrix = Ortho, view matrix = LookAtLH from (0,0,-2) to (0,0,0) with up vector (0,1,0).
I explicitly set the texture transform to Matrix.Identity.
I think I do everything ‘by the book’, but since I am not a D3D expert, I might have forgotten something. That’s why I posted here: I am out of ideas about what I did wrong, and I really don’t see the flaw in my code…
I have been messing around with
device.SetTextureStageState(0, TextureStage.TexCoordIndex, 0); //when I leave it out, behaviour is the same
but if I am not mistaken, this means “use the first set of texture coordinates that belongs to the vertex” (there can be many, but in my case there is only one). So I would think the texture should use the texture coordinates I set…
took me a while to sift through your code. seems you are only missing one line:
d.VertexFormat = TexturedVertex.Format;
is that it?
if so, i still don’t get what you’re actually doing here.
as i understand it, what your code does can quite easily be done in a pixelshader. since you’re rendering to a rendertarget internally, this is the same pass an effect would need to blend the textures together and get the result via DX9Texture (EX9.Texture), right?
Joreg, thanks soooo much for looking at the code. I owe you.
I went through all the d.SetXXX calls, so I overlooked this one because it’s not a setter method. I didn’t come across it by googling either. I like getters and setters; they make it clear which options can be set.
What you say is true, of course, but I want to use this internally in a node. This was just a test node to see if I could get it working in the first place. This way I can do the crossfading video loop internally in a node, but still hardware accelerated. And because it renders the two textures in one pass, it should be slightly faster too, since the quad only has to go to the graphics card once.
Thanks again, I’m one step closer to what I want to be able to do :-)