I’ve managed to establish how to do this (without having to talk to the NVAPI directly). The magic is that the data only needs to be generated at half the eventual framerate (60Hz), as the graphics card generates the 120Hz video signal directly.
It basically boils down to generating a backbuffer that is twice the width of the display, and the same height as the display plus one pixel. The two images are arranged side by side (like a regular stereo image), and the extra pixel row (at the bottom of the image) is populated with special tag data (see the link) that tells the graphics card directly that it is a 3D image. This tag is a string of hex values equating to a block of 20 pixels, placed in the bottom-left corner of the buffer.
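For anyone who wants to see what that tag row amounts to in code, here's a minimal sketch in the style of NVIDIA's public stereo-blit samples (the struct and constant names come from those samples; WriteStereoTag and its parameters are just mine for illustration). It assumes a lockable 32bpp system-memory surface of size (2 × eyeWidth) × (eyeHeight + 1):

```cpp
// Sketch: write the 3D Vision stereo tag into the extra bottom row of a
// side-by-side staging surface (after NVIDIA's public stereo samples).
#include <d3d9.h>

#define NVSTEREO_IMAGE_SIGNATURE 0x4433564e // "NV3D" in little-endian ASCII

typedef struct _Nv_Stereo_Image_Header
{
    unsigned int dwSignature; // must be NVSTEREO_IMAGE_SIGNATURE
    unsigned int dwWidth;     // width of the full side-by-side image
    unsigned int dwHeight;    // height of one eye image
    unsigned int dwBPP;       // bits per pixel, e.g. 32
    unsigned int dwFlags;     // e.g. SIH_SWAP_EYES
} NVSTEREOIMAGEHEADER, *LPNVSTEREOIMAGEHEADER;

#define SIH_SWAP_EYES    0x00000001
#define SIH_SCALE_TO_FIT 0x00000002

// Hypothetical helper: writes the tag into the extra row below the two
// eye images on a lockable (system-memory) 32bpp surface.
void WriteStereoTag(IDirect3DSurface9* surface,
                    unsigned int eyeWidth, unsigned int eyeHeight)
{
    D3DLOCKED_RECT lr;
    if (FAILED(surface->LockRect(&lr, NULL, 0)))
        return;

    // The tag lives at the start of the row below the two eye images.
    unsigned char* rowBelowImages =
        (unsigned char*)lr.pBits + lr.Pitch * eyeHeight;

    LPNVSTEREOIMAGEHEADER sih = (LPNVSTEREOIMAGEHEADER)rowBelowImages;
    sih->dwSignature = NVSTEREO_IMAGE_SIGNATURE;
    sih->dwWidth     = eyeWidth * 2;
    sih->dwHeight    = eyeHeight;
    sih->dwBPP       = 32;
    sih->dwFlags     = 0; // or SIH_SWAP_EYES to flip left/right

    surface->UnlockRect();
}
```

As far as I can tell, the driver only scans for this signature when the surface is presented fullscreen, which would explain why windowed mode never triggers the glasses.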
I’ve got all this going inside a nice little pixel shader, and running my Renderer in fullscreen mode correctly activates the shutter glasses (via the 3D tag). The problem is that in fullscreen mode the Renderer’s maximum resolution is the same as the display’s (for obvious reasons); for this system to work, the Renderer needs to be able to render to an offscreen surface (code is in the link above).
Can the Renderer (EX9) node do this? Or do I have to write a separate plugin to write to a buffer that size (and display it)?
I have set the backbuffer to the right size (using the appropriate pins on the Renderer (EX9) node), but in fullscreen mode the Renderer defaults to a set resolution and appears to ignore those settings…
EDIT:
Turns out that the renderer resolution in fullscreen is not the problem, just that I’m unable to render to offscreen regions. I can only assume that there is no real way of fixing this (without compiling a Renderer (EX9.3DVision) node that specifically supports offscreen surfaces…)
are you sure that you have to do the 3d-specific rendering yourself? i still think such a 3d system should be able to render that by default, or with a certain driver setting.
however, an offscreen surface would be a DX9Texture.
hi ColourOfDarkness, great work there!
i have a question for you: do those 3d glasses work only at 120hz?
it would be nice to make them work at 60hz, to make them compatible with any monitor / projector… it seems that you are alone here on this one, so good luck! :)
Tonfilm; the reason I’m trying to do it this way is that I’m not using true 3D geometry to generate the backbuffer data (my data can be treated as a stereoscopic image pair). But you are right, with an appropriate depth buffer the NVidia drivers are capable of producing 3D output with no user intervention.
vjc4; the glasses are capable of running at 100, 110 and 120Hz (from what I can gather from the driver setup; these other modes are used to cancel flicker from AC lighting circuits). I believe 30Hz per eye shouldn’t produce noticeable flicker (ie. using a 60Hz display); the problem is the NVidia drivers will only enter 3D mode when a compatible display is connected, so that framerate is kind of excluded by design I’m afraid!
I’ll keep going and keep you guys updated!
EDIT:
Further to your question vjc4, the switching process in the glasses relies solely on the signal from the IR emitter (they have no internal clock), so (in principle at least) they can run at 60Hz. The problem, as before, is that the NVidia back-end is not designed for this…
did you try via a DX9Texture as tonfilm suggested?
you can set its resolution to anything you like and then place that texture on a fullscreen-quad in a second renderer. not sure if this is really it, but definitely worth a try.
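for reference, that node is essentially wrapping plain direct3d 9 render-to-texture. a rough sketch of the underlying calls (assuming a valid device; the helper names and error handling are mine, not vvvv’s):

```cpp
// Rough Direct3D 9 render-to-texture sketch: create an offscreen render
// target at an arbitrary resolution, draw into it, then use it as a texture.
#include <d3d9.h>

IDirect3DTexture9* CreateOffscreenTarget(IDirect3DDevice9* device,
                                         UINT width, UINT height)
{
    IDirect3DTexture9* tex = NULL;
    // Render-target textures must live in the default pool.
    device->CreateTexture(width, height, 1, D3DUSAGE_RENDERTARGET,
                          D3DFMT_A8R8G8B8, D3DPOOL_DEFAULT, &tex, NULL);
    return tex;
}

void RenderToTexture(IDirect3DDevice9* device, IDirect3DTexture9* tex)
{
    IDirect3DSurface9* target = NULL;
    IDirect3DSurface9* backbuffer = NULL;

    tex->GetSurfaceLevel(0, &target);
    device->GetRenderTarget(0, &backbuffer); // remember the real backbuffer

    device->SetRenderTarget(0, target);      // draw offscreen...
    device->Clear(0, NULL, D3DCLEAR_TARGET, 0xFF000000, 1.0f, 0);
    // ... issue draw calls here ...

    device->SetRenderTarget(0, backbuffer);  // ...then restore, and use tex
    target->Release();                       // on a fullscreen quad
    backbuffer->Release();
}
```

the DX9Texture node does all of this for you; the point is just that the offscreen target’s size is independent of the display mode.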
if they don’t have an internal clock, it would be “easy” to clone the signal from the emitter… but i guess they should have an internal clock, and the emitter should be sending something like a “sync” signal. if you block the emitter for a few seconds, does the shuttering stop? that would give me a chance without modding the glasses themselves.
The DX9Texture method so far produces no different results from my previous method (using a pixel shader); however, there may still be hope.
From the link I posted initially, I’ve downloaded and modified the code to run on my machine (the last guy had written it using display formats that aren’t valid on my laptop, so I had to include a bucket-load of checking code to get it working!), and thus far it still produces the same results as 4V; I see the stereo image pair side by side on the display, with the shutter glasses activated. If I can get their code to run I think I should be able to translate it into a 4V plugin… hopefully!
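For the curious, the “checking code” is essentially asking Direct3D which backbuffer formats the adapter actually supports before creating the device. A sketch of the idea (not the actual code from the link; the helper name is mine):

```cpp
// Sketch: probe which backbuffer formats are valid on this adapter
// before creating the device, instead of hardcoding one.
#include <d3d9.h>

D3DFORMAT PickBackbufferFormat(IDirect3D9* d3d, BOOL windowed)
{
    const D3DFORMAT candidates[] = {
        D3DFMT_A8R8G8B8, D3DFMT_X8R8G8B8, D3DFMT_R5G6B5
    };

    // Current desktop mode; fullscreen backbuffers must be compatible with it.
    D3DDISPLAYMODE mode;
    d3d->GetAdapterDisplayMode(D3DADAPTER_DEFAULT, &mode);

    for (int i = 0; i < 3; ++i)
    {
        if (SUCCEEDED(d3d->CheckDeviceType(D3DADAPTER_DEFAULT,
                                           D3DDEVTYPE_HAL,
                                           mode.Format,   // adapter format
                                           candidates[i], // backbuffer format
                                           windowed)))
            return candidates[i];
    }
    return D3DFMT_UNKNOWN; // nothing usable found
}
```

The real code would also need to validate depth/stencil formats, but the principle is the same.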
On that last note, would it be possible for someone to point me in the right direction? I can’t work out which interface type is most appropriate (IPluginDXDevice?)… If anyone could produce a plugin template that just mimics the current Renderer (EX9) node then I’d be forever grateful! :D (or even one which just creates a blank render window!)
So I tried contacting NVidia directly (through their forums) to see if I could get any information on the issue; I haven’t had any response or acknowledgement in over a week.
It appears (though this is unconfirmed) that the functionality I was trying to make use of was removed from the later builds of the 3D Vision driver; that’s why it hasn’t been working! (So nothing has been 4V’s fault xD)
The only way I can see of getting this to work is to render using page flipping, and somehow control which eye texture is displayed in a hardware-locked kind of fashion… I believe this is the method employed by 3DTV’s Stereoscopic Player, although I don’t have much evidence to back that up! Once that is working, the NVAPI can be used to force the system into 3D mode (by calling NvAPI_Stereo_Enable() from a simple plugin)…
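For reference, forcing stereo on through the NVAPI is only a couple of calls. A minimal, untested sketch against the NVAPI SDK headers (assuming the SDK is installed and linked):

```cpp
// Minimal sketch: initialise NVAPI and force the driver into stereo mode.
// Requires the NVAPI SDK; error handling kept to a bare minimum.
#include "nvapi.h"

bool ForceStereoOn()
{
    if (NvAPI_Initialize() != NVAPI_OK)
        return false;

    // Flips the global stereo switch in the driver (affects D3D apps).
    return NvAPI_Stereo_Enable() == NVAPI_OK;
}
```

Note that this only flips the global stereo switch; actually driving the left/right images (the page flipping above) is a separate problem.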
Is it still possible to write 4V plugins in c++? The docs seem to have moved as far as I can tell… and the NVAPI isn’t built for C#! :D
I did look into that, unfortunately my hardware is so new that drivers that old don’t exist for it ;P Although it should work for someone with appropriate hardware!
ColourOfDarkness, i have 2 ideas…
how do you enable the 3d content with the shutter glasses? i mean, do you have to be in fullscreen to do it, or can you use the pc and the content in a windowed way? this is so you can use vvvv at the same time and have the glasses working…
if you can’t, use another pc with the emitter and look at your 120hz monitor with vvvv… check the attached file to get my idea…
after adding the lfo output - mainloop, i’m sure this won’t work, because if you drop only 1 frame it will lose sync. so this is my 2nd idea…
i think that modding the ir is an option…
if you have electronics skills, try the following… gut a cheap pc microphone, remove the microphone capsule and put an ir led in its place… connect that to the mic jack on your pc… you will need to check the ir led (receiver) polarity…
open any audio recording software and record the signal at the maximum sample rate.
then you will need to “play” it back to see if this works. just adding an ir emitter to the output won’t work; you will need to make a closed circuit through a 1:1 transformer: on one side, the signal from the computer output; on the other side, the ir led in series with a power supply… this superimposes the signal on the ir led… if this works, check the audio file to see if you can remove half of the pulses and drop the signal down to 60hz… but this is just to make the glasses work on a standard monitor at 30fps 3d (60hz); it still won’t solve the original problem, because one dropped frame will cancel the effect…
So after trying to describe to someone else how the drivers work the solution seemed rather obvious…
The way the NVidia drivers generate the two eye views, in games etc. that aren’t specifically coded for 3D, is by performing two perspective projections on the data in the depthbuffer, using the camera transforms supplied by the software (ie. Camera). This means that any geometry supplied to the display (in fullscreen mode only, on the consumer NVidia cards) will be automatically rendered in glorious 3D by the driver, using the default settings in the NVidia Control Panel.
So (for my purposes at least) it is simple; connect an appropriately configured Camera to a fullscreen Renderer (EX9) with any variety of depthbuffer and let the good times roll :D
I don’t know how many other people out there using 4V have NVidia 3D hardware, but if there is any interest I will attach a small patch containing the basic settings that work for me…
vjc4; I have studied a course in Electronics as part of my degree ;P and what you’re saying makes sense… There is a thread on MTBS where someone has basically done what you suggested, and has posted a full set of timing diagrams for anyone that wants to build their own IR emitter;
The same guy wrote another thread in a similar vein, using the NVidia system and custom electronics to sync several types of shutter glasses to a DLP projector (but I can’t find the link right now!)
nice to read that it just works with the right cam/renderer settings.
anyways, it would be nice to be able to control the left/right images from vvvv, to do some experiments with abstract images…
The code snippets are in c++, but should be easily portable to c# (and therefore a 4V plugin!); I haven’t looked into this in any depth yet, though.
A couple of things might be important for people that want to try this (please correct me if I’m wrong!);
This method may be outdated, as it’s possible NVidia has changed the way of interfacing with their emitters since that presentation was published
I believe that the drivers for the emitter need to be installed, and I’m fairly confident (if I remember right) that this requires a 3D Vision-compatible graphics card…
The 3D Vision glasses are based on LCD technology, and are therefore polarized; the special polarization is another way NVidia and display manufacturers prevent the system from being used on ‘non-3D’ LCD displays… although that isn’t a problem with CRTs! I’ve found this with my (non-3D) laptop; the glasses are at extinction with my display normally, and I need to rotate my head 90 degrees to see anything xD
i thought 3d-displays are only special in that they are capable of >100 Hz.
if you’d want to supply 120Hz with vvvv (which is quite a task in itself, and possible only for very slick scenes) you would not even need an nvidia card marked for 3d, because you don’t need the drivers that render every scene from 2 viewpoints.
i have no clue what this 90° head angle is supposed to be. never experienced anything like that oO
Does this mean you can’t apply 2d post-processing to the 3d image (as the nvidia driver is grabbing the video straight from the renderer with the camera)?