Sunday, March 17, 2013

Merging multiple DVI signals using an FPGA

A small followup to my previous post. Inspired by an article on Hackaday, I did some more research on using an FPGA to manipulate DVI streams.

It seems this has already been done. I came across this project at the Institute of Visual Computing in Germany. Direct link to the PDF here. They discuss several methods to parallelize rendering of a scene. "Sort-first" is a way of splitting the image into multiple parts (quadrants), having a different computer render each one, and then combining the results with a custom FPGA. A sketch of the idea follows below.
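To make the sort-first idea concrete, here is a minimal sketch of how a frame could be split into quadrants, with each render node assigned one sub-viewport. The 2x2 layout and all the names here are my own illustration, not taken from the paper:

    # Illustrative "sort-first" screen-space split: the frame is divided
    # into quadrants and each render node gets one sub-viewport.
    # The 2x2 layout and names are assumptions, not from the paper.

    def quadrant_viewports(width, height):
        """Return (x, y, w, h) rectangles for a 2x2 sort-first split."""
        hw, hh = width // 2, height // 2
        return [
            (0,  0,  hw, hh),  # bottom-left render node
            (hw, 0,  hw, hh),  # bottom-right render node
            (0,  hh, hw, hh),  # top-left render node
            (hw, hh, hw, hh),  # top-right render node
        ]

    for node, rect in enumerate(quadrant_viewports(1920, 1080)):
        print(f"render node {node}: viewport {rect}")

Each node renders only its rectangle, so the FPGA on the output side just has to stitch four pixel streams back into one frame.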

The downside is that their solution seems to require genlocked cards. That means professional GPUs like NVidia's Quadro line, which are not exactly the best for gaming and also carry a high price. On the other hand, there is enough RAM on the FPGA that they can absorb roughly ±2 lines of sync difference using a small buffer/FIFO. The main difference from our setup is that we have both GPUs mounted in the same PC. That way, when we tell the 3D engine to start sending the two frames rendered on each GPU, they will start at more or less the same time. Hopefully this will let us get away without genlock, but that's still just a theory. Running high-res at 60Hz, the timing will be tight either way.
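As a toy model of that buffering idea, here is a sketch of a line FIFO absorbing a fixed skew between two sources: source B starts two lines late, and a small FIFO on source A holds its lines until the matching line from B arrives. The two-line skew and the FIFO depth are illustrative assumptions, not measured values:

    # Toy model of the line FIFO: two DVI sources emit the same frame's
    # scanlines, but source B lags by SKEW lines. Buffering source A
    # lets the merger always combine matching line numbers.
    from collections import deque

    SKEW = 2          # lines source B lags behind source A (assumed)
    FIFO_DEPTH = 4    # worst-case skew the buffer can absorb (assumed)

    fifo = deque(maxlen=FIFO_DEPTH)
    merged = []

    for line in range(10):
        fifo.append(("A", line))      # source A's line arrives first
        if line >= SKEW:              # source B's matching line arrives later
            b_line = ("B", line - SKEW)
            a_line = fifo.popleft()   # oldest buffered A line matches it
            merged.append((a_line, b_line))

    for a, b in merged:
        print(f"merge {a} with {b}")

In hardware the FIFO would live in the FPGA's block RAM, and the depth sets a hard limit on how far out of sync the two GPUs can drift before lines get mismatched or dropped.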

Update:
It seems the 618 series DVI comparator from Colorado Video can do what we require. Unfortunately, they do not mention the maximum resolution, whether the signals need to be genlocked, or what latency (if any) the unit adds. At $1390 it's not exactly cheap either. Still, it's good to know someone has done it.
