<img style="float: left;" src="http://media2.hpcwire.com/hpcwire/digital_time_tunnel_small.jpg" alt="" width="92" height="92" />The supercomputing community tends to think in 1000X increments – gigaflops, teraflops, petaflops, and soon, exaflops. It's all about hardware performance. But if we really want HPC that's a thousand times better than what came before, advances will have to come from sources beyond just servers and CPUs.
In contrast to the previous decade, CPU clock rates are scaling more slowly over time due to power constraints. However, the number of transistors per silicon area continues to increase at roughly the rate of Moore's Law. Therefore, CPUs are being designed and built with an increasing number of cores, with each core executing one or more threads of instructions. This puts a new kind of pressure on the memory subsystem.
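To make the shift concrete, the sketch below shows the shape of programming such a chip: detect the number of hardware threads and split a workload across one worker per core. This is an illustrative example using Python's standard library, not code tied to any particular processor discussed here; the `parallel_sum` function and its chunking scheme are our own placeholder workload.

```python
# Sketch: one worker per available core, a common pattern on manycore CPUs.
# os.cpu_count() and concurrent.futures are Python standard library.
import os
from concurrent.futures import ThreadPoolExecutor

def partial_sum(bounds):
    """Sum the integers in [lo, hi) -- a stand-in for real per-core work."""
    lo, hi = bounds
    return sum(range(lo, hi))

def parallel_sum(n, workers=None):
    """Split [0, n) into one chunk per worker and sum the partial results."""
    workers = workers or os.cpu_count() or 1
    step = n // workers
    chunks = [(i * step, (i + 1) * step) for i in range(workers)]
    chunks[-1] = (chunks[-1][0], n)  # last chunk absorbs any remainder
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_sum, chunks))

print(parallel_sum(1_000_000))  # matches sum(range(1_000_000))
```

Note that simply spawning more threads does not guarantee speedup: every core in the pool contends for the same memory subsystem, which is exactly the pressure the paragraph above describes.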
Manycore CPU headed to universities and research institutions.
On Wednesday Intel shifted its Tera-scale Computing Research Program into second gear by demonstrating a 48-core x86 processor. The company intends to use the new chip as a research platform to light a fire under manycore computing.