I'm trying to render high-resolution images from my vvvv patches with the writer (texture.grid) module. This works fine as long as I use DX9/EX9 shaders with a directional light.
As soon as I use point-light shaders, especially the phong-point one, it is no longer possible to render small parts of the camera view in high resolution and stitch them together seamlessly.
I thought the problem was that the camera actually moves when you apply a transform to the projection transform of the renderer, which would of course lead to a different light reflection for each part of the grid.
I tried to rewrite the writer (texture.grid) subpatch with transform nodes that, I think, do not change the camera's position but only select a different part of the view. I used the Perspective (Off Center) and LookAt transform nodes, but that didn't solve the problem. Apparently the reflectivity of the material surface also depends on the zoom of the camera … does this mean it is impossible with certain shaders to render the view in several rows and columns?
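For what it's worth, the sub-frustum values that Perspective (Off Center) needs for each grid cell can be sketched like this (a minimal Python sketch of the geometry only, not vvvv code; `tile_frustum` is a made-up helper name, and a symmetric full frustum is assumed):

```python
import math

def tile_frustum(fov_y, aspect, near, rows, cols, row, col):
    """Split a symmetric frustum into rows x cols off-center sub-frusta
    that tile it exactly. Returns (left, right, bottom, top) on the near
    plane for the given tile, counting rows from the top and columns
    from the left. The view transform stays the same for every tile."""
    top = near * math.tan(fov_y / 2)   # half-height of the near plane
    right = top * aspect               # half-width of the near plane
    l = -right + 2 * right * col / cols
    r = -right + 2 * right * (col + 1) / cols
    t = top - 2 * top * row / rows
    b = top - 2 * top * (row + 1) / rows
    return (l, r, b, t)
```

With the view transform kept identical for every tile, these off-center frusta tile the full image exactly; for a 1x1 grid the function simply returns the full symmetric frustum.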
Didn't ampop focus on this for his diploma?
Here is my last writer subpatch and a sample picture of the problem …
You could try to render the image in a grid where the cells are only one pixel high. The reflectivity wouldn't be the same as in a smaller preview, but at least you wouldn't see the edges of the speculars like this.
But I don't really understand why the reflectivity should depend on the FOV of the camera, unless you move the camera, the light source or the object. However, 'I am not a lawyer'.
Yeah, I also think it's a shader problem … after all that I've tried. Unfortunately I cannot debug it myself … I'll try the earlier phong, but I don't remember a phong with a positional light. Directional phong is okay anyway …
And max, yes, I already thought of that possibility, but I don't know any tool to stitch the pixels back together. Maybe with a texture queue and a second render pass …
You can see the effect when you switch between two different FOVs on the Perspective node. Filter the switching and you will see how the reflection changes when zooming in and out.
Have a look at the multiscreen module. On my laptop only the fallback technique works; however, the shader should behave the same way, because it works in view space.
The trick should be to have one view transform and many projection transforms. When a shader calculates the lighting in view space, the projection transforms shouldn't have any influence on the output (concerning the lighting).
I reworked the standard writer module some weeks ago and it really should work too. The only thing you should pay attention to is that you separate between view and projection when connecting to the module.
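To illustrate the point about view-space lighting numerically, here is a hedged Python/NumPy sketch (the function names are made up, not vvvv or shader code): a Blinn-Phong specular computed purely from view-space quantities takes no projection matrix at all and so cannot change with the FOV, while a version that wrongly lets the projected position leak into the lighting does change when the projection changes.

```python
import numpy as np

def perspective(fov_y, aspect, near, far):
    """Standard right-handed perspective projection matrix
    (an illustrative helper, not a vvvv node)."""
    f = 1.0 / np.tan(fov_y / 2)
    m = np.zeros((4, 4))
    m[0, 0] = f / aspect
    m[1, 1] = f
    m[2, 2] = (far + near) / (near - far)
    m[2, 3] = 2 * far * near / (near - far)
    m[3, 2] = -1.0
    return m

def specular_view_space(pos_v, normal_v, light_v, power=25.0):
    """Blinn-Phong specular from view-space quantities only: the eye
    sits at the origin, so no projection matrix is ever involved."""
    n = normal_v / np.linalg.norm(normal_v)
    l = light_v - pos_v
    l /= np.linalg.norm(l)
    v = -pos_v / np.linalg.norm(pos_v)   # towards the eye at the origin
    h = l + v
    h /= np.linalg.norm(h)
    return max(0.0, float(n @ h)) ** power

def specular_clip_space(pos_v, normal_v, light_v, proj, power=25.0):
    """The same formula, but with the surface position wrongly taken
    after projection -- the kind of bug that makes speculars depend
    on the FOV/zoom of the camera."""
    p = proj @ np.append(pos_v, 1.0)
    pos = p[:3] / p[3]
    n = normal_v / np.linalg.norm(normal_v)
    l = light_v - pos
    l /= np.linalg.norm(l)
    v = -pos / np.linalg.norm(pos)
    h = l + v
    h /= np.linalg.norm(h)
    return max(0.0, float(n @ h)) ** power
```

Feeding the same view-space point, normal and light into `specular_clip_space` with two different FOVs gives two different specular values, while `specular_view_space` by construction cannot be influenced by the projection.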
Yes, I completely understand that you have to separate between view and projection transforms. At first I thought that was the problem, but after I repatched the writer module, I found out that it's not. On my machine the phong point technique works too, and I get exactly the same result as the one I posted originally. I would use the directional shader if I didn't want the specular light effects of the positional one …
Here's the result of your patch …
It must be the shader: it apparently takes some information from the projection matrix instead of only the view matrix.
And another question:
Do I get an antialiased texture when I leave the backbuffer dimensions at zero and set an anti-aliasing level for windowed rendering?
Does the DXTexture node then copy the texture that is displayed in the window?
The only possibility I can think of is to render a non-antialiased texture (each puzzle part as big as possible) and then sample it down in a second render pass (with a shader which does the minification).
However, this could lead to not-so-nice artefacts at the borders between the puzzle parts. Maybe there is a way to render a little overlap (see the multiscreen module) and afterwards cut those borders away?
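The overlap-and-crop idea can be sketched on the CPU for clarity (a hedged Python/NumPy sketch only; in vvvv this would be a texture queue plus a second render pass, and `stitch_with_overlap` is a made-up name). Each tile is rendered with a few extra pixels beyond every inner border, and exactly that overlap is cut away before joining:

```python
import numpy as np

def stitch_with_overlap(tiles, rows, cols, overlap):
    """Stitch a rows x cols grid of equally-sized tiles (H x W x C
    arrays, row-major in `tiles`) that were rendered with `overlap`
    extra pixels beyond every inner border, cropping that overlap
    away so the seams disappear."""
    stitched_rows = []
    for r in range(rows):
        row_imgs = []
        for c in range(cols):
            t = tiles[r * cols + c]
            # crop only on sides that face another tile
            top = overlap if r > 0 else 0
            bottom = t.shape[0] - (overlap if r < rows - 1 else 0)
            left = overlap if c > 0 else 0
            right = t.shape[1] - (overlap if c < cols - 1 else 0)
            row_imgs.append(t[top:bottom, left:right])
        stitched_rows.append(np.concatenate(row_imgs, axis=1))
    return np.concatenate(stitched_rows, axis=0)
```

If the tiles were cut from the same image with a one-pixel overlap, stitching them back together reproduces the original exactly, which is the property you would want at the puzzle-part borders.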