November 11, 2010
Christopher Mims has written a two-part blog at MIT's Technology Review concerning the historic announcement that China with its 2.5 petaflop machine, Tianhe-1A, had pulled ahead of the US in the worldwide supercomputing race.
In the first part, Mims makes the case that China's new supercomputer is only technically the world's fastest. It probably comes as no surprise that this has to do with the way the machine's performance is measured: the Linpack benchmark, the test used to officially rank the world's fastest supercomputers, measures a computer's ability to perform calculations in short bursts, but in the real world of scientific computing, sustained performance is the more meaningful measure.
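For readers curious what "the Linpack number" actually is: the benchmark times a dense linear solve and converts the elapsed time into a flop rate using the conventional operation count (2/3)n³ + 2n² for an LU factorization plus triangular solves. A minimal sketch of that arithmetic (a toy stand-in, not the official HPL code) might look like:

```python
import time
import numpy as np

def linpack_style_gflops(n=1000, seed=0):
    """Time a dense solve Ax = b and report a Linpack-style rate.

    Uses the conventional flop count for LU factorization plus the
    triangular solves: (2/3)n^3 + 2n^2 floating-point operations.
    """
    rng = np.random.default_rng(seed)
    a = rng.standard_normal((n, n))
    b = rng.standard_normal(n)

    start = time.perf_counter()
    x = np.linalg.solve(a, b)       # dense LU solve, the Linpack kernel
    elapsed = time.perf_counter() - start

    flops = (2.0 / 3.0) * n**3 + 2.0 * n**2
    return flops / elapsed / 1e9    # billions of flops per second

print(f"{linpack_style_gflops():.2f} GFLOPS on a 1000x1000 solve")
```

Because the kernel is a single, cache-friendly burst of dense arithmetic, it shows hardware at its best; a long-running scientific code with irregular memory access will rarely sustain anything close to this rate.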
"In other words," says Mims, "the Tianhe 1A comes on strong, but American supercomputers can last all night or sometimes many days, depending on the scale of the problem they're tackling."
An edge in peak processing power is not a predictor of sustained performance, and, according to Mims, the NVIDIA GPUs in the Tianhe-1A are not so great at the latter. With GPU-based systems, there's a memory bottleneck that leaves the GPUs sitting idle much of the time.
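The memory-bottleneck point can be made concrete with the roofline model: a kernel's attainable rate is capped by whichever is smaller, the chip's compute peak or memory bandwidth times the kernel's arithmetic intensity (flops per byte moved). A sketch with illustrative, made-up numbers (not Tianhe-1A's actual specifications):

```python
def attainable_gflops(peak_gflops, mem_bw_gbs, flops_per_byte):
    """Roofline model: sustained throughput is the lesser of the raw
    compute peak and memory bandwidth times arithmetic intensity."""
    return min(peak_gflops, mem_bw_gbs * flops_per_byte)

# Hypothetical GPU: high compute peak, comparatively modest bandwidth.
peak = 500.0       # GFLOPS (illustrative)
bandwidth = 150.0  # GB/s   (illustrative)

# A dense, compute-heavy kernel (many flops per byte) reaches the peak;
# a memory-bound kernel (few flops per byte) leaves most of it idle.
dense = attainable_gflops(peak, bandwidth, 20.0)    # -> 500.0
sparse = attainable_gflops(peak, bandwidth, 0.25)   # -> 37.5
print(dense, sparse)
```

On these numbers the memory-bound kernel sustains under a tenth of peak, which is the sense in which the GPUs "sit idle" waiting on memory.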
Thom Dunning, director of the National Center for Supercomputing Applications, is straightforward in expressing the conundrum of the Linpack benchmark:
"The Linpack benchmark is one of those interesting phenomena -- almost anyone who knows about it will deride its utility. They understand its limitations but it has mindshare because it's the one number we've all bought into over the years."
Mims questions whether engineers working on Tianhe-1A will be able to create scientific software that can take advantage of the machine's peak performance by rarely accessing memory. And parallelizing the code to work with GPUs is perhaps an even greater challenge, one that has stymied programmers in the West.
Says Mims: "It's not clear that the Linpack benchmark, which pegs the machine as the world's fastest, is a useful indicator of its performance in real-world applications."
The second part of Mims' one-two punch is that the US is already developing a system on track to become the world's fastest supercomputer in 2012. Mims explains that because so much time goes into developing the highest-end supers, with long planning, design and implementation stages, experts in the field can generally predict with some certainty which systems will be game changers and, to some extent, how they will measure up to one another. This is why Mims says that "it's possible to predict with some confidence the world's fastest supercomputers -- even, perhaps, the single fastest supercomputer -- in the year 2012." According to Jack Dongarra, the keeper of the TOP500 list of the world's fastest systems, there are five such systems in line to topple Tianhe-1A's standing.
One of these potential upstarts is Blue Waters. What sets this machine apart is that it will be powered by the latest IBM Power chip, the Power 7, and will sport a superfast interconnect with greater bandwidth and lower latency than previous incarnations.
According to Dunning, Blue Waters will be installed at NCSA sometime in the first half of 2011, with production ramping up in the fall of 2011. In 2012, Blue Waters will be up and running a full range of scientific applications.
Full story at Technology Review