A small follow-up to my previous post. Inspired by an article on Hackaday, I did some more research on using FPGAs to manipulate DVI streams.
It seems this has already been done. I came across this project at the Institute of Visual Computing in Germany. Direct link to the PDF here. They describe several different methods of parallelizing the rendering of a scene. "Sort-first" is a way of splitting the image into multiple parts (quadrants), having different computers render each one, and then combining them with a custom FPGA.
The downside is that their solution seems to require genlocked cards. That would mean professional GPUs like NVidia's Quadro line, which are not exactly the best for gaming and also carry a high cost. On the other hand, there is enough RAM on the FPGA that they can handle roughly ±2 lines of difference in sync using a small buffer/FIFO. The main difference from our case is that we have both GPUs mounted in the same PC. That way, when we tell the 3D engine to start sending the two frames rendered on each GPU, they will start at the same time - more or less. Hopefully this will let us get away without genlock, but that's still just a theory. Running high-res at 60Hz, the timing will be tight anyway.
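Just to make the idea concrete, here is a toy Python model of how a small line FIFO could absorb a sync offset of a couple of scan lines between two inputs. The line length and skew limit are my own assumptions, not numbers from the paper:

from collections import deque

LINE_PIXELS = 1280          # assumed active pixels per scan line
MAX_SKEW_LINES = 2          # how much misalignment the FIFO is sized to hide

fifo = deque(maxlen=MAX_SKEW_LINES * LINE_PIXELS)

def combiner_step(pixel_from_early_gpu, pixel_from_late_gpu):
    """Buffer the early stream; emit an aligned pair once the late stream
    delivers its pixel. Purely a software model of the idea above."""
    fifo.append(pixel_from_early_gpu)
    if pixel_from_late_gpu is None:      # the late GPU hasn't caught up yet
        return None
    return fifo.popleft(), pixel_from_late_gpu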
Update:
It seems the 618 series DVI comparator from Colorado Video can do what we require. Unfortunately they do not mention the maximum resolution, whether you need genlocked signals, or what kind of latency (if any) the unit adds. At $1390 it's not exactly cheap either. Good to know someone has done it though.
Sunday, March 17, 2013
Thursday, March 7, 2013
Maximizing Performance for VR
Like everybody else who took part in the Kickstarter, I'm eagerly waiting for my Oculus Rift. The latest news is that they will start shipping kits later this month, so hopefully the wait will be over soon!
While we wait there's a lot of time for speculation. For example: how will it perform with modern games? Doom 3 BFG is, after all, not the most taxing game any more. Most modern graphics engines require a pretty beefy hardware setup, both on the CPU and GPU side. Even then you aren't guaranteed a minimum of 60 fps at all times. When wearing an HMD like the Rift, anything below 60 fps could ruin the immersive experience as a result of stuttering.
As the screen of the Rift developer kit only has a resolution of 1280x800 pixels, the pixel fill requirements aren't too bad, but rendering two eyes/cameras can still tax the computer. Nobody yet knows how pixel-heavy the consumer version will be, but 1920x1080 is not totally unlikely.
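A quick back-of-the-envelope calculation to put numbers on that (the 1080p figure is pure speculation on my part):

def pixels_per_second(width, height, hz=60):
    return width * height * hz

dev_kit = pixels_per_second(1280, 800)     # 61,440,000 pixels/s
full_hd = pixels_per_second(1920, 1080)    # 124,416,000 pixels/s (speculative)
print(f"DK: {dev_kit/1e6:.1f} Mpix/s, 1080p: {full_hd/1e6:.1f} Mpix/s "
      f"({full_hd/dev_kit:.1f}x the fill)")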
So how do we ensure a high and consistent frame rate? First of all, turning off vsync would be a bad idea, since tearing will destroy the immersion just as much as lower frame rates. Turning off high detail, anti-aliasing, shadows etc. would for sure make games run smoother, but where's the fun in that? Also, as the Rift has relatively low horizontal resolution per eye (1280 / 2 = 640 pixels), you'll probably need some anti-aliasing to take care of hard edges. This has already been confirmed by people who have tried out early versions of the Rift.
Obviously adding one or more graphics cards would help, but it's not that easy. SLI and CrossFire are not good solutions for VR imagery, since they add at least one extra frame of latency depending on the mode used. That's about 16ms at 60Hz, which automatically puts us above 20ms of total latency, since rendering the frame itself already takes one frame time. They can also, in some cases, suffer from other artefacts.
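Rough numbers, assuming whole-frame pipeline steps at 60Hz and the 20ms target discussed below:

FRAME_MS = 1000.0 / 60          # ~16.7 ms per frame at 60 Hz
TARGET_MS = 20.0                # the latency goal discussed in this post

render = 1 * FRAME_MS           # rendering the frame itself
afr    = 1 * FRAME_MS           # extra frame added by AFR-style SLI/CrossFire

print(f"single GPU: {render:.1f} ms (budget {TARGET_MS} ms)")        # ~16.7 ms
print(f"with AFR  : {render + afr:.1f} ms (budget {TARGET_MS} ms)")  # ~33.3 ms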
Latency, in combination with frame rate, is a big factor: you'll easily develop motion sickness if the computer-generated world you perceive through your eyes does not match the actual movement your head performs and your inner ear detects. Both Michael Abrash of Valve Software and John Carmack at id Software have written at length about latency and suggest ways to at least partly mitigate it. Both articles are well worth a read. The main goal would be to get latency below 20ms, as anything below that should not be noticeable for most of us.
What we need is a way to render the left and right eye on separate graphics cards. Having the game engine support this, compared to rendering both views through one card, shouldn't be too hard. Back in the "old days" we did this when working with stereoscopic content, since we had to use two projectors with polarized glass to show it, so the concept isn't that different. The difficult part is merging the two outputs back into one signal, as that is what the Rift expects.
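Just to illustrate what I mean on the engine side, here's a rough Python sketch. The Gpu class and the eye offset are stand-ins for whatever the real engine and graphics API would use, not an actual implementation:

EYE_SEPARATION_M = 0.064   # rough interpupillary distance, assumed

class Gpu:
    """Stand-in for a per-adapter render context."""
    def __init__(self, adapter):
        self.adapter = adapter

    def render(self, scene, camera_offset_m):
        # Real code would shift the view matrix sideways by the offset and
        # draw the scene; here we just record what would happen.
        print(f"adapter {self.adapter}: render {scene!r} with eye offset "
              f"{camera_offset_m:+.3f} m")

left_gpu, right_gpu = Gpu(0), Gpu(1)

def render_stereo_frame(scene):
    # Same scene, same head pose, one eye per card. Each card then scans out
    # its own DVI signal and the FPGA downstream merges the two.
    left_gpu.render(scene, -EYE_SEPARATION_M / 2)
    right_gpu.render(scene, +EYE_SEPARATION_M / 2)

render_stereo_frame("demo scene")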
A custom FPGA could be the solution. I'm no expert, but since we won't manipulate the signal in any way, the computational requirements shouldn't be too bad, especially for the resolutions we are talking about for the developer kit.
Since DVI, as I understand it, doesn't send the whole frame as a packet, but streams each pixel as if it were an analog video signal, we should be able to mix, or rather switch between, the signals in the FPGA in real time. This is important, since we don't want to introduce any more latency into the pipe. Since the left and right images will be located in each half of the frame respectively, we would simply tell the FPGA to switch between the two signals when passing the middle of the screen and back again at the right edge of the screen.
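Here is a small software model of that switching logic, using assumed dev kit timings (1280x800, active pixels only). A real FPGA would of course do this per pixel clock in hardware:

WIDTH = 1280   # assumed active pixels per line on the dev kit

def merge_line(left_line, right_line):
    # First half of the line comes from the "left eye" card, second half from
    # the "right eye" card - switch at the middle, switch back at the edge.
    half = WIDTH // 2
    return left_line[:half] + right_line[half:]

def merge_frame(left_frame, right_frame):
    return [merge_line(l, r) for l, r in zip(left_frame, right_frame)]

# Tiny self-test with dummy "pixels":
left   = [["L"] * WIDTH for _ in range(800)]
right  = [["R"] * WIDTH for _ in range(800)]
merged = merge_frame(left, right)
assert merged[0][0] == "L" and merged[0][WIDTH - 1] == "R"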
There are, however, a couple of challenges with this solution, the most prominent being sync (or genlock/framelock in TV lingo). We would need to ensure that each pixel on the screen, be it for the left or right eye, scans out at precisely the same time, or we would get artifacts in the form of slipping scan lines and skewing of the image.
Taking it one step further: to reduce latency on the computer/GPU side of things, one might even have the computer render frames at 120Hz but only let every other one through the FPGA, giving the display the 60Hz it can handle. That would probably require some kind of buffering though, since the pixel clock of the input would be roughly double what the display requires.
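A trivial model of the frame-dropping part, just to show the idea (the buffering itself is hand-waved away in a comment):

def decimate_to_60hz(frames_at_120hz):
    # Keep every other frame. In hardware each kept frame would have to sit in
    # a buffer, because it arrives at roughly twice the pixel clock the 60 Hz
    # display expects and must be clocked out again at the slower rate.
    for index, frame in enumerate(frames_at_120hz):
        if index % 2 == 0:
            yield frame

# 8 frames rendered at 120 Hz -> 4 frames shown at 60 Hz
shown = list(decimate_to_60hz(range(8)))
assert shown == [0, 2, 4, 6]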
I'm making quite a few assumptions here, but it would be very interesting to test this out properly. I wonder if there's time to learn VHDL before my Rift arrives?
Labels: Oculus, Rift, virtual reality, vr
Location: Sagene, Oslo, Norway