New Intel Chip Not GPU

Many reports describe Intel's new multi-core chip design, code-named Larrabee, as a GPU, or "graphics processing unit," intended to be the graphics equivalent of a CPU. But that may be missing the big picture. The company seems to be working on a new approach to chip design that maximizes processing flexibility. I wonder whether Intel is trying to produce a chip that can effectively morph into whatever computationally intensive thing it needs to be, giving the company the biggest competitive hammer it has had in a long time.

Many call Larrabee a GPU because that's the first market Intel plans to address. And in one sense, it shouldn't be remarkable; the chip uses a bunch of cores and permits parallel processing, which isn't new in the graphics world. It's been clear for some time that you can only clock a chip so fast before it burns out under the tremendous amount of heat it has to dissipate. But what is different about this chip is how it controls the processing.

Figure 3 shows Larrabee's approach, in which most of the pipeline is implemented in software that runs on the general-purpose x86 cores. What this means is that the sizes and even order of the rendering stages aren't fixed, so the rendering pipeline can be reconfigured dynamically from one moment to the next to match the game engine's needs by reallocating hardware resources. Steps can be skipped entirely if they're not needed, and other steps can be beefed up with extra hardware.
That to me sounds like something capable of reconfiguring itself on the fly, like a set of building blocks that get reassembled into whatever sort of structure you need at that time.
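To make that building-block idea concrete, here is a minimal Python sketch of a rendering pipeline whose stages are ordinary functions in a list, so software can reorder, skip, or extend them from one frame to the next. The stage names and the frame representation are my own illustration, not Larrabee's actual design.

```python
# Hypothetical sketch: a pipeline assembled from interchangeable stage
# functions. Because the pipeline is just data (a list), software can
# reconfigure it on the fly -- drop a stage, add one, or reorder them.

def vertex_shade(frame):
    frame["stages_run"].append("vertex_shade")
    return frame

def rasterize(frame):
    frame["stages_run"].append("rasterize")
    return frame

def pixel_shade(frame):
    frame["stages_run"].append("pixel_shade")
    return frame

def blend(frame):
    frame["stages_run"].append("blend")
    return frame

def run_pipeline(stages, frame):
    """Run whatever stage list the engine has configured for this frame."""
    for stage in stages:
        frame = stage(frame)
    return frame

# A full pipeline for one frame...
full = [vertex_shade, rasterize, pixel_shade, blend]
# ...and a cut-down one for the next, skipping blending entirely.
no_blend = [vertex_shade, rasterize, pixel_shade]

print(run_pipeline(full, {"stages_run": []})["stages_run"])
print(run_pipeline(no_blend, {"stages_run": []})["stages_run"])
```

On fixed-function hardware the equivalent change would require different silicon; here it is a one-line change to a list.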

According to analyst Jon Peddie, there's room for a third graphics chip company other than ATI and NVIDIA if the player is Intel. But that's playing third fiddle, not second, and hardly the way the chip giant approaches its business. Consider what Intel has indicated about its plans to use ray tracing:

"We believe a new graphics architecture will deliver vastly better visual experiences because it will fundamentally break the barrier between today's raster-based pipelines and the best visual algorithms," said [Justin Rattner, Intel's chief technology officer]. "Our long term vision is to move beyond raster graphics which will make today's GPU technology outmoded," he said.
Pulling this off in the time frames that gaming requires means doing a scary amount of intensive math in a very short time, and a chip capable of that could also handle such things as intensive modeling and simulation, scientific and engineering calculations, encryption and decryption, networking, communications, and much more.

Again, I might be barking up the wrong tree, but if you could shift around the cores, each of which understands the broadly used x86 instruction set, you could have the general computing equivalent of a digital signal processor, with software restructuring how the chip behaves and what it appears to be. Suddenly you have a competitively scary beast: make the same basic set of chips, using IP you already have, and because only the software makes the essential difference between uses, you get huge economies of scale.
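That "software defines the part" idea can be sketched in a few lines: one fixed pool of general-purpose workers, with a small table of kernels that software swaps in to decide what the hardware is for. The workload names and stand-in kernels below are purely illustrative assumptions on my part.

```python
# Hypothetical sketch: the same pool of general-purpose workers plays
# different roles depending on which kernel software loads -- a DSP one
# day, a graphics part the next. The kernels are trivial stand-ins.
from concurrent.futures import ThreadPoolExecutor

WORKLOADS = {
    "graphics":   lambda x: x * 2,          # stand-in for a shading kernel
    "encryption": lambda x: (x * 7) % 256,  # stand-in for a cipher round
    "modeling":   lambda x: x ** 2,         # stand-in for a simulation step
}

def run(role, data, workers=4):
    """The 'hardware' (the worker pool) never changes; only the kernel does."""
    kernel = WORKLOADS[role]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(kernel, data))

print(run("graphics", [1, 2, 3]))    # [2, 4, 6]
print(run("encryption", [1, 2, 3]))  # [7, 14, 21]
```

The economies of scale fall out of the structure: every role ships on identical hardware, and only the entry in the table differs.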

If you're a specialized chip maker, how would you compete against Intel if it could provide a bit of code that would turn something from its assembly line into a strong competitor for your product? Or, as Intel said in the paper it will be presenting at SIGGRAPH on Aug. 12:

The Larrabee native programming model supports a variety of highly parallel applications that use irregular data structures. Performance analysis on those applications demonstrates Larrabee's potential for a broad range of parallel computation.
Sounds like a silicon gauntlet hitting the ground to me.

Fab workers image courtesy of Intel