While we wait there's a lot of time for speculation. For example: how will it perform with modern games? Doom 3 BFG is, after all, not the most taxing game any more. Most modern graphics engines require a pretty beefy hardware setup, both on the CPU and the GPU side. Even then you aren't guaranteed a minimum of 60 fps all the time. When wearing an HMD like the Rift, anything below 60 fps could ruin the immersive experience as a result of stuttering.
As the screen of the Rift developer kit only has a resolution of 1280x800 pixels, the pixel fill requirement isn't bad, but rendering the scene twice, once per eye/camera, can still tax the computer. Nobody yet knows how pixel-heavy the consumer version will be, but 1920x1080 is not totally unlikely.
So how do we ensure a high and consistent frame rate? First of all, turning off vsync would be a bad idea, since tearing will destroy the immersion just as much as low frame rates. Turning down high detail, anti-aliasing, shadows and so on would for sure make games run smoother, but where's the fun in that? Also, as the Rift has a relatively low horizontal resolution per eye (1280 / 2 = 640 pixels), you'll probably need some anti-aliasing to take care of hard edges. This has already been confirmed by people who have tried out early versions of the Rift.
Obviously adding one or more graphics cards would help, but it's not that easy. SLI and CrossFire are not good solutions for generating VR imagery, since both add one or more frames of latency depending on the mode used. At 60Hz that's an extra 16ms on top of the roughly 16ms the frame itself already takes to render, which automatically puts us above 20ms of latency. They can also, in some cases, suffer from other artefacts.
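To put rough numbers on that (a back-of-the-envelope sketch, not measured figures; the little C++ program below just prints the arithmetic):

    #include <cstdio>

    int main() {
        // Rough latency budget at 60Hz. One frame of rendering already eats
        // most of a 20ms motion-to-photon budget; an AFR-style SLI/CrossFire
        // setup that queues one extra frame blows straight past it.
        const double refresh_hz = 60.0;
        const double frame_ms   = 1000.0 / refresh_hz;  // ~16.7ms per frame

        std::printf("one frame of rendering:    %.1f ms\n", frame_ms);
        std::printf("plus one queued AFR frame: %.1f ms\n", frame_ms * 2.0);  // ~33ms, well over 20ms
        return 0;
    }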
Latency, in combination with frame rate, is a big factor since you'll easily develop motion sickness if the computer-generated world you perceive through your eyes does not match up with the actual movement your head performs and your middle ear detects. Both Michael Abrash of Valve Software and John Carmack at id Software have written at length about latency and suggest ways to at least partly mitigate it. Both articles are well worth a read. The main goal would be to get latency below 20ms, as anything below that would not be noticeable for most of us.
What we need is a way to render the left and right eye on separate graphics cards. Having the game engine support this, compared to rendering both views through one card, shouldn't be too hard. Back in the "old days" we would do this when working with stereoscopic content, since we had to drive two projectors with polarizing filters to show it, so it's not that different. The difficult part is merging the two outputs back into one signal, as that is what the Rift expects.
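As a rough sketch of what the per-eye camera setup could look like (GLM is used purely for illustration here, and the interpupillary distance value is a placeholder, not anything taken from the Rift SDK): each eye gets the head's view matrix shifted half the eye separation sideways, and each of the two matrices could then be fed to its own render context or card.

    #include <glm/glm.hpp>
    #include <glm/gtc/matrix_transform.hpp>

    struct StereoViews {
        glm::mat4 left;
        glm::mat4 right;
    };

    // Derive left/right eye view matrices from a single "head" view matrix by
    // offsetting each eye half the interpupillary distance along view-space x.
    // Per-eye projection matrices are left out of this sketch.
    StereoViews makeStereoViews(const glm::mat4& headView, float ipdMeters /* e.g. 0.064f */) {
        const float half = ipdMeters * 0.5f;
        return {
            glm::translate(glm::mat4(1.0f), glm::vec3(+half, 0.0f, 0.0f)) * headView,  // left eye
            glm::translate(glm::mat4(1.0f), glm::vec3(-half, 0.0f, 0.0f)) * headView   // right eye
        };
    }

Actually pinning each context to a specific physical card is vendor-specific territory (both NVIDIA and AMD expose GPU affinity/association extensions for that sort of thing), and that's before the signal-merging problem.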
There are, however, a couple of challenges with this solution, the most prominent being sync (or genlock/framelock in TV lingo). We would need to ensure that each pixel on the screen, be it for the left or right eye, scans out at precisely the same time, or else we would get artefacts in the form of slipping scan lines and skewing of the image.
Taking it one step further: to reduce latency on the computer/GPU side of things, one might even have the computer render frames at 120Hz but only let half of them through the FPGA, giving the display the 60Hz it can handle. That would probably require some kind of buffering though, since the pixel clock of a 120Hz input would be roughly double what the 60Hz display expects.
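As a timing sketch only (plain C++ arithmetic, nothing FPGA-specific, and it ignores resolution and blanking entirely), the rate mismatch that forces the buffering looks roughly like this:

    #include <cstdio>

    int main() {
        // A frame arrives from the 120Hz source in about half the time the
        // 60Hz panel needs to scan it out, so whatever hasn't been drawn by
        // the time the input ends has to sit in a buffer in between.
        const double input_hz  = 120.0;
        const double output_hz = 60.0;

        const double input_frame_ms  = 1000.0 / input_hz;   // ~8.3ms to receive a frame
        const double output_frame_ms = 1000.0 / output_hz;  // ~16.7ms to display it

        std::printf("frame received in:           %.1f ms\n", input_frame_ms);
        std::printf("frame displayed in:          %.1f ms\n", output_frame_ms);
        std::printf("scanned out when input ends: %.0f%%\n",
                    100.0 * input_frame_ms / output_frame_ms);
        return 0;
    }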
I'm making quite a few assumptions here, but it would be very interesting to test this out properly. I wonder if there's time to learn VHDL before my Rift arrives?