April 27, 2007
Lately it seems like I've been talking with people who see the multicore phenomenon as something of a cluster-buster. One of those people is Mike Hoskins, CTO of Pervasive Software, a company that develops database software technologies. Hoskins' reading of the tea leaves suggests that the trajectory of multicore processors is on a collision course with cluster computing. Essentially, the rationale is that as cores multiply on the chip, it makes more sense to build and program scaled-up SMP machines than scaled-out clusters.
Hoskins hopes this is the case. In general, his world of data-intensive computing has never been comfortable with the cluster and grid model. The technology heritage in this arena is mostly C and Java apps running on mainframes or big servers. Clusters and MPI programming are seen as fringe technologies. The clusters themselves are hard to deploy and administrate, while the programming model is primitive and not well-supported for commercial application development.
For Hoskins, the path of least resistance for bringing data-intensive and compute-intensive computing into the Java universe is through SMP architectures. This week's feature article on Pervasive's Java framework looks at how cluster and multicore technologies are viewed by someone outside the traditional HPC community.
Hoskins tells a convincing story. Although the average multicore processor today is a dual-core chip, soon that will be quad-core. If we just follow a Moore's Law curve, a standard general-purpose processor will have 16 cores by the end of the decade. If you put four of those processors in an SMP box, you essentially have a machine that matches or exceeds the performance of most workgroup and departmental clusters today.
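For back-of-the-envelope purposes, that core-count arithmetic can be sketched in a few lines of Python. The baseline (dual-core in 2007) and the roughly yearly doubling cadence are my own illustrative assumptions, chosen because they reproduce the progression quoted above (dual-core now, quad-core soon, 16 cores by decade's end); they are not numbers Hoskins supplied.

    # Hypothetical back-of-the-envelope projection of core counts.
    # Assumptions (mine, not from the column): a dual-core baseline in 2007 and
    # a core-count doubling roughly every year, which is what the quoted
    # "16 cores by the end of the decade" progression implies.

    def projected_cores(year, base_year=2007, base_cores=2, doubling_years=1.0):
        """Cores per chip if the count doubles every `doubling_years` years."""
        doublings = int((year - base_year) / doubling_years)
        return base_cores * (2 ** doublings)

    for year in (2007, 2008, 2009, 2010):
        per_chip = projected_cores(year)
        # A four-socket SMP box would then carry four times that many cores.
        print(f"{year}: ~{per_chip:2d} cores per chip, "
              f"~{4 * per_chip:2d} cores in a 4-socket box")

Run as written, the sketch lands a four-socket box at roughly 64 cores in 2010, which is the point of comparison with today's workgroup and departmental clusters.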
Since workgroup and departmental systems are the fastest growing segment in HPC, a switch to SMP boxes would change the profile of the market fairly quickly. If multicore SMP systems cannibalize the low end of the cluster market, they will force clusters into the higher-end (but lower volume) capacity computing space.
It's no coincidence that vendors like Azul and Sun, who are pushing the multicore envelope more than most, are also big proponents of scaled up SMP boxes. Azul's 48-core Vega 2 chip is being used in their 768-way Compute Appliance, while Sun's 8-core, 32-thread UltraSPARC processor populates their T1000 and T2000 servers. And just last week, Sun announced first silicon for their new 16-core Rock processor. Since quad-core currently represents the upper end of x86 processors, more general-purpose, scaled-up machines are still on the drawing board. But SGI's f1240 server already offers a 48-core x86 SMP, which can be expanded up to 96 cores.
Beyond 2010, we can extrapolate core doublings into a manycore future, eventually squeezing capacity clusters up against supercomputing capability systems, until ... poof, they disappear, never to be heard from again.
Or maybe not. Just as scaling nodes in a cluster has its problems, so does scaling cores and processors in a machine.
The biggest impediment to scale-up is the memory wall. Since SMP systems, by definition, share a common memory space, the data bandwidth into each processor, and then into each core, is limited by memory system performance. As more cores compete for memory, each one has proportionally less bandwidth available to it. Memory technology isn't standing still, but RAM has only been doubling in speed every 10 years, well behind the 18-month Moore's Law doubling rate that is driving the multicore phenomenon. Technologies on the horizon to speed up memory access include 3D chip stacking (IBM), on-chip photonics (Intel) and proximity communication (Sun Microsystems). Whether any of these proves to be a practical solution remains to be seen. But in the short term, the memory wall will act as a barrier to unconstrained SMP scale-up.
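To put rough numbers on that divergence, the sketch below compares an 18-month core-count doubling against a 10-year memory bandwidth doubling. The dual-core, 10 GB/s-per-socket starting point is an illustrative assumption; only the two doubling rates come from the argument above.

    # Illustrative only: how bandwidth per core erodes when core counts grow on
    # a Moore's Law cadence but memory bandwidth improves far more slowly. The
    # dual-core, 10 GB/s starting point is an assumption; the doubling rates
    # (18 months for cores, 10 years for memory) follow the argument above.

    def per_core_bandwidth(year, base_year=2007, base_cores=2, base_bw_gbs=10.0,
                           core_doubling_years=1.5, bw_doubling_years=10.0):
        cores = base_cores * 2 ** ((year - base_year) / core_doubling_years)
        bandwidth = base_bw_gbs * 2 ** ((year - base_year) / bw_doubling_years)
        return cores, bandwidth, bandwidth / cores

    for year in (2007, 2010, 2013):
        cores, bw, per_core = per_core_bandwidth(year)
        print(f"{year}: ~{cores:4.0f} cores sharing ~{bw:4.1f} GB/s "
              f"-> ~{per_core:4.2f} GB/s per core")

Under those assumptions, per-core bandwidth drops from about 5 GB/s to well under 1 GB/s within six years, which is the memory wall in miniature.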
In addition, as more cores and processors are added to a system, architects add more RAM to keep computational performance balanced with memory capacity. But once you get up into terabytes of RAM, you have to start worrying about hard errors occurring with some frequency. Technologies such as memory scrubbing can deal with this, but they add to the system cost.
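A crude way to see why terabyte-scale memories raise that concern is to multiply an assumed per-device failure rate by the number of DRAM devices involved. Both figures in the sketch below, the 50 FIT rate and the 1 Gb device capacity, are illustrative assumptions rather than anything from this column.

    # Crude expected-failure estimate for a large memory system. The FIT rate
    # and device capacity are illustrative assumptions; real rates vary widely
    # by technology, vendor, and environment.

    FIT_PER_DEVICE = 50        # assumed failures per billion device-hours
    DEVICE_CAPACITY_GBIT = 1   # assumed 1 Gb DRAM devices
    HOURS_PER_YEAR = 24 * 365

    def expected_failures_per_year(ram_terabytes):
        devices = ram_terabytes * 8 * 1024 / DEVICE_CAPACITY_GBIT  # TB -> Gb -> devices
        device_hours = devices * HOURS_PER_YEAR
        return device_hours * FIT_PER_DEVICE / 1e9

    for tb in (0.064, 1, 4):
        print(f"{tb:6.3f} TB of RAM -> "
              f"~{expected_failures_per_year(tb):4.1f} expected device failures per year")

With those made-up but plausible numbers, a 64 GB workstation sees a fraction of a failure per year, while a multi-terabyte SMP sees several, which is why scrubbing and similar protections become part of the bill.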
But the really big unknown is future HPC application demand for more performance. If applications that now run on low-end clusters don't change appreciably, the equivalent code will run on SMP workstations in a few years. But if those applications are constrained by the performance available to them, they're likely to migrate to more powerful clusters as the nodes and interconnects ramp up in power.
Certainly for the bigger problem sets in HPC, like climate modeling or other types of large-scale simulations, the demand for more performance is almost insatiable. As you increase the time scales or resolutions of many models, the workloads scale relatively easily. But for commercial HPC applications, it's a mixed bag. Some problems are domain limited, for example, the genomic analysis of a bacterial pathogen. These types of applications don't scale. But many types of engineering simulations can scale as easily as climate models.
One thing did become clear to me in talking to Hoskins: There are users out there who would love to move into the high performance computing world, but are unwilling to migrate to cluster or grid computing because of the difficulty of the software model and the complexity of the system. For these people, multicore SMP systems are the answer.
As always, comments about HPCwire are welcomed and encouraged. Write to me, Michael Feldman, at firstname.lastname@example.org.
Posted by Michael Feldman - April 26, 2007 @ 9:00 PM, Pacific Daylight Time
Michael Feldman is the editor of HPCwire.