November 15, 2010
ALBUQUERQUE, NM, Sept. 15 -- A new supercomputer rating system will be released by an international team led by Sandia National Laboratories at the Supercomputing Conference 2010 in New Orleans on Nov. 17.
The rating system, Graph500, tests supercomputers for their skill in analyzing large, graph-based structures that link the huge numbers of data points present in biological, social and security problems, among other areas.
"By creating this test, we hope to influence computer makers to build computers with the architecture to deal with these increasingly complex problems," Sandia researcher Richard Murphy said.
This small, synthetic graph was generated by a method called Kronecker multiplication. Larger versions of this generator, modeling real-world graphs, are used in the Graph500 benchmark. (Courtesy of Jeremiah Willcock, Indiana University)
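The Kronecker generator mentioned in the caption is, in spirit, an R-MAT-style recursive process: each edge is placed by repeatedly choosing one quadrant of the adjacency matrix according to fixed probabilities. The sketch below is a minimal serial illustration of that idea, not the official Graph500 generator; the probability values and function name are illustrative (the benchmark's reference code uses a tuned, parallel variant).

```python
import random

def kronecker_edges(scale, edgefactor=16, a=0.57, b=0.19, c=0.19, seed=1):
    """Sketch of a Kronecker (R-MAT style) edge generator.

    Produces edgefactor * 2**scale edges over 2**scale vertices.
    At each of `scale` recursion levels, probabilities a, b, c
    (and d = 1 - a - b - c) pick a quadrant of the adjacency
    matrix, which decides one bit of the source and destination.
    """
    rng = random.Random(seed)
    n = 2 ** scale
    edges = []
    for _ in range(edgefactor * n):
        u = v = 0
        for level in range(scale):
            r = rng.random()
            bit = 1 << level
            if r < a:                # upper-left quadrant: neither bit set
                pass
            elif r < a + b:          # upper-right: set a destination bit
                v |= bit
            elif r < a + b + c:      # lower-left: set a source bit
                u |= bit
            else:                    # lower-right: set both bits
                u |= bit
                v |= bit
        edges.append((u, v))
    return edges
```

Because the quadrant probabilities are skewed toward one corner, the resulting graph has the heavy-tailed degree distribution typical of social and biological networks.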
Rob Leland, director of Sandia's Computations, Computers, and Math Center, said, "The thoughtful definition of this new competitive standard is both subtle and important, as it may heavily influence computer architecture for decades to come."
The group isn't trying to compete with Linpack, the current standard test of supercomputer speed, Murphy said. "There have been lots of attempts to supplant it, and our philosophy is simply that it doesn't measure performance for the applications we need, so we need another, hopefully complementary, test," he said.
Many scientists view Linpack as a "plain vanilla" test mechanism that tells how fast a computer can perform basic calculations, but has little relationship to the actual problems the machines must solve.
The impetus to achieve a supplemental test code came about at "an exciting dinner conversation at Supercomputing 2009," said Murphy. "A core group of us recruited other professional colleagues, and the effort grew into an international steering committee of over 30 people." (See www.graph500.org.)
Many large computer makers have indicated interest, said Murphy, adding there's been buy-in from Intel, IBM, AMD, NVIDIA, and Oracle corporations. "Whether or not they submit test results remains to be seen, but their representatives are on our steering committee."
Each organization has donated time and expertise of committee members, he said.
While some computer makers and their architects may prefer to ignore a new test for fear their machine will not do well, the hope is that large-scale demand for a more complex test will be a natural outgrowth of the greater complexity of problems.
Studies show that moving data around (not simple computations) will be the dominant energy problem on exascale machines, the next frontier in supercomputing, and the subject of a nascent U.S. Department of Energy initiative to achieve this next level of operations within a decade, Leland said. (Petascale and exascale machines perform 10 to the 15th and 10 to the 18th operations per second, respectively.)
Part of the goal of the Graph500 list is to point out that, beyond the growing expense of data movement itself, any shift in the application base from physics to large-scale data problems is likely to increase data-movement requirements further, because memory and computational capability must grow in proportion. That is, an exascale computer requires an exascale memory.
"In short, we're going to have to rethink how we build computers to solve these problems, and the Graph500 is meant as an early stake in the ground for these application requirements," said Murphy.
How does it work?
Large data problems are very different from ordinary physics problems.
Unlike a typical computation-oriented application, large-data analysis often involves searching large, sparse data sets while performing only very simple computational operations on them.
To deal with this, the Graph500 benchmark consists of two computational kernels: the construction of a large graph that links huge numbers of participants, and a parallel search of that graph.
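The second kernel is a breadth-first search that returns a tree of parent pointers rooted at a chosen vertex. The minimal serial sketch below illustrates what that kernel computes; actual Graph500 submissions implement it in parallel across distributed memory, which is exactly where the small, random remote accesses stress the machine.

```python
from collections import defaultdict, deque

def bfs_parents(edges, root):
    """Serial sketch of the search kernel: a breadth-first search
    that returns a parent map (the BFS tree) rooted at `root`."""
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)          # the search treats edges as undirected
    parent = {root: root}         # the root is its own parent
    frontier = deque([root])
    while frontier:
        u = frontier.popleft()
        for v in adj[u]:
            if v not in parent:   # first visit fixes v's parent
                parent[v] = u
                frontier.append(v)
    return parent
```

Benchmark results are reported in traversed edges per second (TEPS) rather than floating-point operations, reflecting that memory and network traversal, not arithmetic, dominate the work.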
"We want to look at the results of ensembles of simulations, or the outputs of big simulations in an automated fashion," Murphy said. "The Graph500 is a methodology for doing just that. You can think of them being complementary in that way -- graph problems can be used to figure out what the simulation actually told us."
Performance for these applications is dominated by the ability of the machine to sustain a large number of small, nearly random remote data accesses across its memory system and interconnects, as well as the parallelism available in the machine.
Five example problem areas for these computational kernels are cybersecurity, medical informatics, data enrichment, social networks and symbolic networks.
"Many of us on the steering committee believe that these kinds of problems have the potential to eclipse traditional physics-based HPC [high performance computing] over the next decade," Murphy said.
While general agreement exists that complex simulations work well for the physical sciences, where lab work and simulations play off each other, there is some doubt they can solve social problems that have essentially infinite numbers of components. These include terrorism, war, epidemics and societal problems.
"These are exactly the areas that concern me," Murphy said. "There's been good graph-based analysis of pandemic flu. Facebook shows tremendous social science implications. Economic modeling this way shows promise.
"We're all engineers and we don't want to over-hype or over-promise, but there's real excitement about these kinds of big data problems right now," he said. "We see them as an integral part of science, and the community as a whole is slowly embracing that concept.
"However, it's so new we don't want to sound as if we're hyping the cure to all scientific ills. We're asking, 'What could a computer provide us?' and we know we're ignoring the human factors in problems that may stump the fastest computer. That'll have to be worked out."
About Sandia National Laboratories
Sandia National Laboratories is a multiprogram laboratory operated and managed by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration. With main facilities in Albuquerque, N.M., and Livermore, Calif., Sandia has major R&D responsibilities in national security, energy and environmental technologies, and economic competitiveness.
Source: Sandia National Laboratories