I am training my 3D mapping skills with a couple of pyramids and cubes I've built. One projector is a no-brainer, but now I'm moving forward: since I haven't got any Matrox multiplier, I am boygrouping another computer so I can use 2 projectors and project over all the different faces of the objects.
Now I am boygrouping every part of the patch except the final render, the Projector node and its transform, and the Camera (Softimage) for the monitor/3D scene.
My question is: is there a way to create a softedge between the 2 projections, as in the Multiscreen node but on the 3D projection?
i have the same question.
there is this patch by west over here; it does the 3d masking, but no softedge is applied.
maybe a renderpass hack with a homography gradient mask would work for 3d content, but i'm not sure.
I now have the 2 laptops working nicely together and ready to test the patch in real life. Boygrouping was relatively easy to set up, although I still have to understand where to place external files.
About the softedge: that, by the way, is not a softedge at all, at least not technically.
So we have Render 1 with Projection Matrix 1 and Render 2 with Projection Matrix 2.
On the first projector we can freely send Render 1.
Then we should take Render 1 as seen through Projection Matrix 2 and subtract it from Render 2; the result should be sent to the second projector.
EDIT: I have this weird feeling I've just been talking nonsense…
I’m actually making a shader for that this week. Can you sit tight for a little bit?
The concept is that you define a region for each projector using a transform matrix (one per projector).
The region is 3D (as opposed to 2D like with a softedge, which is a mask on the output image).
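As a rough illustration of that region idea (a CPU-side sketch under my own assumptions, not the actual shader): map a world point into each projector's region box with the inverse of its transform matrix, and fade the weight linearly near the box faces. The numpy form and the `band` falloff width are inventions for the example.

```python
import numpy as np

def region_weight(inv_region, p, band=0.2):
    """Blend weight of world point p for one projector's 3D region.
    inv_region maps world space into the region's unit box [-1, 1]^3;
    the weight is 1 deep inside the box and fades linearly to 0 over
    a `band`-wide margin at the faces (and is 0 outside the box)."""
    local = (inv_region @ np.append(p, 1.0))[:3]
    d = 1.0 - np.abs(local)            # per-axis distance to the nearest face
    return float(np.clip(d.min() / band, 0.0, 1.0))
```

With one such weight per projector, dividing each by their sum would cross-fade the overlap volume in 3D rather than masking the 2D output image.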
The other method for mapping across projectors is just to select one projector for each of your surfaces. So if a projector is selected to project onto a surface, that surface is set to its content colour; otherwise it is set to black, i.e. rgba(0,0,0,1), NOT rgba(0,0,0,0).
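A minimal sketch of that assignment rule. The surface names and the `assignment` table are hypothetical; only the colour rule comes from the post:

```python
# Hypothetical surface -> projector table; names and numbers are made up.
assignment = {"front": 1, "left": 2, "top": 1}

def surface_colour(surface, projector, content_rgba):
    """Content on surfaces this projector owns; opaque black elsewhere,
    so the black surface still occludes geometry behind it."""
    if assignment[surface] == projector:
        return content_rgba
    return (0.0, 0.0, 0.0, 1.0)  # NOT (0,0,0,0): transparent black would
                                 # let hidden surfaces show through
```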
Me, sitting tight for a little bit? Not a chance!
Re-elaborating my blurb from the last post: assuming we send the first render to the first projector as it is, what we need on the second projector is the scene as projected by the first projector but seen from the second projector's position… I think we may be talking about ray tracing then, as mentioned by West.
Once this is sorted out, the result should be subtracted from the second render. And still no softedge.
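The overlap test behind that subtraction idea can be sketched with two projection matrices: a point is sent to the second projector only if it lands inside frustum 2 but not frustum 1. The orthographic stand-in matrices below are purely for illustration (a real Projector node would supply perspective view-projection matrices):

```python
import numpy as np

def project(P, p):
    """Apply a 4x4 view-projection matrix to a world point, return NDC x/y/z."""
    clip = P @ np.append(p, 1.0)
    return clip[:3] / clip[3]

def covered(P, p):
    """True if the point lands inside this projector's image (NDC x/y in [-1, 1])."""
    return bool(np.all(np.abs(project(P, p)[:2]) <= 1.0))

def ortho_shift(cx):
    """Toy orthographic 'projector' whose frustum is centred on x = cx."""
    P = np.eye(4)
    P[0, 3] = -cx
    return P

P1, P2 = ortho_shift(-0.5), ortho_shift(0.5)   # overlapping frusta

p = np.array([0.0, 0.0, 0.0])                  # a point both projectors see
send_to_proj2 = covered(P2, p) and not covered(P1, p)  # projector 1 already covers it
```

This ignores occlusion, which is exactly why the depth-buffer discussion below the ray-tracing remark is needed on real geometry.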
it’s certainly a tough one. We talked about a similar situation a while back. Wasn’t talking softedge blending though.
To do a softedge blend on some geometry, I'd hack the visual calibration part out of the Multiscreen node, apply it to the geometry with the grid editor, and then additive-blend this over the top of my content. Not the quickest of methods, but with a good eye you could get it close to perfect.
i get what you mean about rendering a depth buffer from each projector's perspective, then using that as a mask against the other projectors.
so you need one shader to output the depth buffer from projector A,
then another shader that projects that depth buffer out and applies a brightness mask, determined by comparing that projected depth against the actual depth of that pixel in the projection.
then if it's outside of a threshold (i.e. you presume it's not the same surface), you set the brightness high on the output.
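Those steps read like a shadow-mapping comparison. A minimal CPU sketch of the test, under my own assumptions (depth stored as NDC z, nearest-texel lookup, and ignoring the y flip a real render target needs):

```python
import numpy as np

def lit_by_A(depth_A, P_A, p, bias=0.01):
    """True if world point p is the surface projector A actually hits,
    judged by comparing its depth in A's view against A's depth buffer."""
    clip = P_A @ np.append(p, 1.0)
    ndc = clip[:3] / clip[3]
    h, w = depth_A.shape
    u = int((ndc[0] * 0.5 + 0.5) * (w - 1))   # NDC -> texel coordinates
    v = int((ndc[1] * 0.5 + 0.5) * (h - 1))
    if not (0 <= u < w and 0 <= v < h):
        return False                # outside A's frustum: A can't light it
    return bool(abs(depth_A[v, u] - ndc[2]) < bias)

# Projector B's shader would then dim pixels where lit_by_A(...) is True,
# and keep brightness high where it is False (a different surface).
```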
most of the mapping routines i use now don't use projection matrices (or at least use matrices with no regard for the real world), which may make this more complicated. got a new method i'm working on which should render a real projection matrix (like the good ol' Projector node); might give it a try when i do…
anyway, for this project we're presuming equal illumination from the projectors on the surfaces, so our edge blend is just acting on space coordinates. so the shader i'll make this week might not be of use to you.
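If the blend only acts on space coordinates under equal illumination, the final step is just normalising the per-projector falloff weights so that overlaps sum to full brightness. A trivial sketch (the function name and list form are my own):

```python
import numpy as np

def blend_weights(falloffs):
    """Normalise per-projector falloff weights (e.g. distance-based fades
    in the overlap volume) so they sum to 1, assuming the projectors give
    equal illumination on the surface."""
    w = np.asarray(falloffs, dtype=float)
    s = w.sum()
    return w / s if s > 0 else w
```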
If I understand correctly what you want, this doesn't exactly do it.
wow, that’s a really nice patch
it’s a different challenge though
that’s for 360 curved, with the projectors on the inside.
whereas we’re talking complex geometry, generally with the projectors on the outside
ok. so i've opened a new kimchips research project, which is 'brightness assignment'. hope to have some results to show by the end of the week (otherwise i'll have some unhappy clients!)
I think gregsn once did a very, very sophisticated shader for exactly this, but I don't know if it is (or can be made) publicly available.
Last night I finally started the 2-projector test, but unfortunately I spent 80% of the time fighting with the Windows LAN setup. It all worked fine at home, but there was no way to make it connect smoothly at the place where I am setting up the thing. Also, my fault for relying on a Wi-Fi connection on one of the 2 laptops.
Hopefully I'll bring home some results tonight.
So, 2 facing projectors over multiple objects has proven to be beyond my skills; the calibration of the 2 projectors evidently needs some special way of getting it perfectly right.
One single object is a no-brainer though, since you can make little adjustments to make the scene fit. Here are some videos:
At the same time, 1 projector over multiple objects is also pretty straightforward:
I hope that with a bit of practice and patience I'll be able to properly calibrate the 2 facing projectors.
Has anybody got a special trick to help with this task? I was thinking of laying a sheet of paper with a grid on it on the floor, to get the y=0 plane first, then maybe using a big square box. Any ideas?
Ok, so I’m still working on brightness assignment.
It looks good for 2 projectors, but I'm still trying to figure out an elegant way of handling more.
But here’s something else for the time being…