The Intel Terascale project was first introduced back in March 2006. Intel presented an 80-core processor running at clock speeds of up to 5.8 GHz, delivering 1.01 TFLOPS at 3.16 GHz (62 W). When the clock frequency was turned up, performance scaled to 1.81 TFLOPS at 5.7 GHz, but power consumption also jumped to 265 W. Over at TG Daily they've had a chat with Jerry Bautista, Director of Technology Management at Intel, to discuss the eight technical papers Intel released to the public some time ago.
The news article is not very long, but it contains interesting information on almost every aspect of the 80-core Teraflops processor. For instance, Intel built the logic of the new processor from existing components: the memory controllers, arithmetic units, and routers all come from older products, with only minor tweaks.
We also learn that the tile design isn't restricted to exactly 80 cores; it can use almost any number of cores, in any kind of arrangement, and the layout doesn't even have to be symmetric. Intel chose 80 simply because that was enough to prove the concept. In fact, the cores don't even have to be identical.
Internal communication is not core-to-core across the whole processor; instead, the chip is divided into nodes of eight cores each. Cores within a node can communicate directly with each other, but to reach a core in another node they have to send the information node-to-node.
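The two-level scheme described above can be sketched in a few lines of Python. This is a toy model of our own, not Intel's actual protocol: the node size of eight and the 80-core total come from the article, but the hop naming and routing logic are purely illustrative.

```python
NODE_SIZE = 8  # per the article: nodes of eight cores each

def node_of(core: int) -> int:
    """Return the node a given core belongs to (hypothetical numbering)."""
    return core // NODE_SIZE

def route(src: int, dst: int) -> list:
    """List the hops a message takes from core src to core dst."""
    if node_of(src) == node_of(dst):
        # Cores within the same node communicate directly.
        return ["core%d -> core%d" % (src, dst)]
    # Otherwise the message must travel node-to-node.
    return [
        "core%d -> node%d" % (src, node_of(src)),
        "node%d -> node%d" % (node_of(src), node_of(dst)),
        "node%d -> core%d" % (node_of(dst), dst),
    ]

print(route(0, 5))   # same node: a single direct hop
print(route(0, 42))  # different nodes: three hops, via the node fabric
```

The point the model makes is that cross-node traffic always pays the extra node-to-node hop, which is why the grouping matters for performance.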
Bautista goes deeper here and explains that the nodes don't have to be homogeneous either; they can contain all kinds of logic, e.g. parallel floating-point engines. This makes it possible for Intel to create incredibly versatile processors, capable of handling a wide variety of individual tasks.
The article goes on to discuss the quite ingenious routing and self-correction abilities, which actually make it possible for the processor to keep working just fine even if one of its cores starts to fail.
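To see why routing around a dead core is possible at all in a tiled mesh, here is a minimal sketch, assuming a simple 2D grid of tiles and plain breadth-first search; Intel's actual routing algorithm is not described in the article, so everything below is our own illustration.

```python
from collections import deque

def reroute(width, height, src, dst, failed):
    """Shortest path from tile src to tile dst across a width x height
    mesh, stepping only through healthy tiles (BFS)."""
    queue = deque([(src, [src])])
    seen = {src}
    while queue:
        (x, y), path = queue.popleft()
        if (x, y) == dst:
            return path
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            nx, ny = nxt
            if (0 <= nx < width and 0 <= ny < height
                    and nxt not in failed and nxt not in seen):
                seen.add(nxt)
                queue.append((nxt, path + [nxt]))
    return None  # destination unreachable

# A failed tile at (1, 0) forces traffic between (0, 0) and (2, 0)
# to detour around it instead of bringing the chip down.
print(reroute(10, 8, (0, 0), (2, 0), failed={(1, 0)}))
```

With no failures the path is two hops; with the middle tile dead, BFS finds a four-hop detour, which is the essence of the self-correction behavior the article describes.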
“Tera-scale is a highly flexible base platform design, which has the potential to lead the way to many more possibilities. The technology can allow massively parallel operations in a MIMD model, which is “Multiple Instruction, Multiple Data”. It can do this using a traditional multi-core model or through the addition of specialized cores.”