When Intel officially released Knights Corner earlier this week, the first thing that hit me was: how is this going to affect 3D rendering? Rendering is, after all, the most time-consuming and machine-intensive part of visual effects for TV and film.
Sure, there has been specialized hardware that speeds up this process before, but there have always been drawbacks, like only working with one particular renderer (probably not the one you favor). Then there's the GPU route, which uses CUDA or OpenCL to harness the massively parallel processor on your graphics card. Several renderers, like V-Ray and Mental Ray (iray), support GPU rendering to varying degrees, but the main issue here is that your video RAM limits what you can render. Most video cards today have between 1.5 and 6 GB of GDDR memory, which can handle simple scenes, but whenever your scene passes that limit you have to start swapping data out to regular RAM, and most of your speed gain is lost. Geometry in itself is not that heavy, but start layering on textures and you'll soon run out.
So why is Knights Corner any better? At the bottom of it, Knights Corner is an x86 part, only with some 50 cores, and from what I can glean from the press release you don't have to rewrite any of your code to use it: enable a compiler flag, recompile, and you're good to go. That way we could get support in most of the major renderers easily, which is the first step. Sure, there are going to be optimization issues, for example the ring-bus architecture, which might not be ideal for this kind of processing.
I, for one, will be looking forward to next year, when we might, or might not, get access to some actual hardware to test.