Hey everyone
I’ve got some general discussion points about DX performance in VVVV:
1. Which setup should perform better:
- Geforce 8800 - Monitor 1680x1050, DualHeadToGo 768x2048, or
- Geforce 8800 - 768x1024,768x1024, Geforce 8400 - Monitor 1680x1050
So in this situation we’ve got 3d objects with shaders running on the 2×768x1024 heads (being fed to projectors).
And similar things being rendered to the monitor, but with different shaders.
Essentially, what’s the performance hit associated with
- Splitting worlds across graphics cards?
- Using abnormally large render windows (as in Triple/DualHeadToGo usage)?
2. Is the preparegraph handled by the CPU or the GPU?
The preparegraph seems to refer to creating the transformation matrices that get fed into the shader. When this becomes a significant performance hog, it should in principle be possible to move some of the transformations into the shader itself, so they are performed on the graphics card instead.
Can anybody confirm that the preparegraph refers to building the transformation matrices? I presume it includes other activities as well.
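To illustrate what I mean by moving transformations onto the graphics card, here’s a minimal DX9-style HLSL sketch (names and semantics are just the usual vvvv shader-template conventions, not anything specific to my patch): the per-vertex matrix multiply runs on the GPU, so the CPU only has to supply the combined matrix rather than transform geometry itself.

```hlsl
// Hypothetical minimal vertex shader: the host (e.g. vvvv) supplies the
// combined WorldViewProjection matrix; the actual per-vertex transform
// is then done on the graphics card, not on the CPU.
float4x4 tWVP : WORLDVIEWPROJECTION;

struct vs2ps
{
    float4 Pos : POSITION;
};

vs2ps VS(float4 PosO : POSITION)
{
    vs2ps Out = (vs2ps)0;
    // one matrix multiply per vertex, executed on the GPU
    Out.Pos = mul(PosO, tWVP);
    return Out;
}

technique TSimple
{
    pass P0
    {
        VertexShader = compile vs_1_1 VS();
    }
}
```

Any further composition (e.g. chains of per-object transforms) could likewise be passed in as separate matrices and multiplied inside the shader, at the cost of a few extra instructions per vertex.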
3. What is “present”?
The “Timing (Debug)” node has an output named “Present Time”. What does this refer to? I seem to be losing 1/3 of my frame time to the present.