The last image you posted shows the basic difference between quads facing the observer's viewpoint (the one checked green) and quads staying parallel to a viewport/screen plane (the billboarded "sprite", the one X'ed in red). The latter will hardly ever look good; the former solves the continuity problems between hard edges even with natural perspective (as you would get with any cubemap, btw, with an automatic fov of 0.25 per side).
For the quads, take a LookAt node, feed the quads’ position into Position, and for Target use the central point of the best user experience (which would be the singular position of your camera spread). The resulting Transform needs an Invert before going into the quad.
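In plain matrix terms, that LookAt + Invert chain boils down to something like the following minimal numpy sketch (not vvvv code; the function names and the point_of_immersion parameter are just illustrative):

```python
import numpy as np

def look_at(eye, target, up=(0.0, 1.0, 0.0)):
    """Build a right-handed view matrix looking from eye toward target."""
    eye, target, up = (np.asarray(v, dtype=float) for v in (eye, target, up))
    z = eye - target                 # forward axis (camera looks down -z)
    z /= np.linalg.norm(z)
    x = np.cross(up, z)
    x /= np.linalg.norm(x)
    y = np.cross(z, x)
    view = np.identity(4)
    view[0, :3], view[1, :3], view[2, :3] = x, y, z
    view[:3, 3] = -view[:3, :3] @ eye   # translate world so eye sits at origin
    return view

def quad_transform(quad_pos, point_of_immersion):
    """World transform for a quad that faces the point of immersion:
    the inverse of a LookAt placed at the quad and aimed at the sweet spot."""
    return np.linalg.inv(look_at(quad_pos, point_of_immersion))
```

The inversion is what turns the "camera at the quad looking at the sweet spot" view matrix into a model transform that places and orients the quad itself.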
Not sure if this is your problem, so take it with a grain of salt: it is generally best to almost never think in terms of "screen space" in a 360° environment. Think about the eyepoint of the user instead, the thing I like to call the "Point of Immersion".
It is not exactly clear what you mean by the cameras in your pics, so I presume they are not physical, but actually a spread of camera modules in your patch representing physical projectors.
In case I am guessing right, don't use those to capture the original 3d scene. Instead, render a cubemap or a texarray from a central location (the point of immersion, right), and then use a second renderpass to project it onto a virtual representation of your canvases. That way you can deal with the actual beamer settings independently of any sweet-spot considerations.
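As a sketch of what that second pass effectively does per point on a virtual canvas (again just numpy pseudomath, assuming the canvas geometry is measured in the same space as the point of immersion): the cubemap is simply sampled with the direction from the point of immersion to the canvas point.

```python
import numpy as np

def cubemap_lookup_direction(canvas_point, point_of_immersion):
    """Direction used to sample the central cubemap for a given point
    on the virtual canvas geometry -- what the second renderpass does
    per vertex/fragment before any beamer-specific warping."""
    d = np.asarray(canvas_point, float) - np.asarray(point_of_immersion, float)
    return d / np.linalg.norm(d)
```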
Anyway, measuring your physical setup and adjusting your virtual representation to match seems key for any installation like yours.