January 28, 2013
The 20 petaflop, third-generation IBM BlueGene system, Sequoia, may be the number two supercomputer according to the latest TOP500 rankings, but when it comes to maximum core usage, Sequoia has apparently set a new record. A team of Stanford engineers harnessed one million of Sequoia's nearly 1.6 million processor cores in parallel to solve a sophisticated fluid dynamics problem.
Sequoia, the crown jewel of Lawrence Livermore National Laboratory (LLNL), was the fastest supercomputer in the world from June 2012 until November 2012, when it was knocked from its perch by another DOE machine, Titan, the 27 petaflop (peak) Cray XK7 system installed at Oak Ridge National Lab. Sequoia's 96 racks house 98,304 compute nodes, nearly 1.6 million cores and 1.6 petabytes of memory, connected by a 5-dimensional torus interconnect.
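Sequoia's headline totals follow directly from its BlueGene/Q building blocks. A quick consistency check; the per-rack and per-node figures below are standard IBM BlueGene/Q specifications (assumed here, since the article states only the totals):

```python
# Sanity-check Sequoia's totals from standard BlueGene/Q specs
# (per-rack and per-node figures are assumptions, not from the article).
racks = 96
nodes_per_rack = 1024       # 2 midplanes of 512 nodes each
cores_per_node = 16         # user-visible PowerPC A2 cores per node
gb_per_node = 16            # GB of memory per node

nodes = racks * nodes_per_rack               # 98,304 compute nodes
cores = nodes * cores_per_node               # 1,572,864 cores
memory_pb = nodes * gb_per_node / 1_000_000  # ~1.6 PB of memory

print(nodes, cores, round(memory_pb, 2))
```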
Researchers from Stanford Engineering's Center for Turbulence Research (CTR) used Sequoia to model the noise output of supersonic jet engines with the aim of designing quieter aircraft engines. Minimizing this dangerous acoustical hazard is important not only for the health and safety of the ground crew, but also for the surrounding communities. In addition to the hearing damage that can result from sustained high-decibel exposure, there is a "noise nuisance" factor that affects property values.
Advanced computer models called predictive simulations enable scientists to "look" inside the engine's harsh environment and examine processes that would otherwise be inaccessible to physical experiments. The information obtained from this data-intensive simulation helps researchers gain insight into the "physics of noise."
Jet noise simulation. A new design for an engine nozzle is shown in gray at left. Exhaust temperatures are in red/orange; the sound field is in blue/cyan. (Source: Center for Turbulence Research, Stanford University)
"Computational fluid dynamics (CFD) simulations, like the one Nichols solved, are incredibly complex. Only recently, with the advent of massive supercomputers boasting hundreds of thousands of computing cores, have engineers been able to model jet engines and the noise they produce with accuracy and speed," said Parviz Moin, the Franklin M. and Caroline P. Johnson Professor in the School of Engineering and Director of CTR.
For Joseph Nichols, a research associate who worked on the project, and the rest of the team, there is a lot to celebrate: the successful full-scale implementation of Sequoia, breaking the million-core barrier, and the real-world benefits of this research.
"These runs represent at least an order-of-magnitude increase in computational power over the largest simulations performed at the Center for Turbulence Research previously," said Nichols. "The implications for predictive science are mind-boggling."
The project relied on a code called CharLES that was developed by former Stanford senior research associate Frank Ham. A high-fidelity unstructured compressible flow solver, CharLES is an ideal code for aeroacoustic applications characterized by high-speed flows and complex geometries.
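The core task of any compressible flow solver is marching a discretized flow field forward in time. The following toy sketch advects a 1D acoustic pulse with a first-order upwind scheme; it is purely illustrative of that kind of explicit time stepping, and is in no way a representation of CharLES itself, which is a high-fidelity unstructured large-eddy simulation solver:

```python
import numpy as np

# Toy 1D linear advection of an acoustic pulse (first-order upwind).
# Illustrative only -- none of these names or numbers come from CharLES.
nx = 200                       # grid points
c = 340.0                      # sound speed, m/s
dx = 1.0                       # grid spacing, m
dt = 0.5 * dx / c              # time step at CFL = 0.5 (stable for upwind)

x = np.arange(nx, dtype=float)
u = np.exp(-0.5 * ((x - 50.0) / 5.0) ** 2)   # Gaussian pressure pulse at cell 50

for _ in range(100):           # 100 steps: pulse advects 0.5 cells/step -> ~50 cells
    u[1:] -= c * dt / dx * (u[1:] - u[:-1])  # upwind update (flow left to right)

peak = int(np.argmax(u))       # pulse peak now near cell 100
```

In a production solver the same time-marching loop runs over an unstructured 3D mesh with full compressible physics, and the mesh is partitioned across cores, which is where the million-way parallelism enters.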
CFD simulations are a good way to test an entire supercomputer because they stress all of its major components: computation, memory, and communication. Ideally, systems with more cores should be able to handle more difficult problems in less time, but system complexity brings its own challenges, and million-way parallelism can create unexpected bottlenecks.
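One way to see why million-way parallelism is hard is Amdahl's law: any fraction of the work that cannot be parallelized caps the achievable speedup no matter how many cores are added. A toy calculation (illustrative only, not a model of this simulation):

```python
def amdahl_speedup(serial_fraction: float, cores: int) -> float:
    """Ideal speedup when a fixed fraction of the work stays serial."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

# Even 0.01% serial work limits a million cores to roughly 9,900x speedup.
for f in (1e-4, 1e-6):
    for n in (1_000, 1_000_000):
        print(f"serial={f:.0e}  cores={n:>9,}  speedup={amdahl_speedup(f, n):,.0f}x")
```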
As supercomputers pass successive 1,000-fold performance milestones, one of the most difficult tasks is developing real-world applications that can scale to use the entire machine. Sequoia is already making a name for itself in this regard. Last month, the system achieved nearly 14 petaflops on the Hardware/Hybrid Accelerated Cosmology Codes (HACC), just a couple of petaflops shy of its 16.2 petaflop Linpack measurement (and nearly 70 percent of its peak flops).
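The article's figures for the HACC run line up, as a quick check shows (all numbers below are the ones quoted above):

```python
# Figures quoted in the article for Sequoia's HACC run.
hacc_pf = 14.0       # "nearly 14 petaflops" sustained on HACC
peak_pf = 20.0       # headline peak performance
linpack_pf = 16.2    # Linpack measurement

fraction_of_peak = hacc_pf / peak_pf        # 0.70 -> "nearly 70 percent"
shy_of_linpack = linpack_pf - hacc_pf       # 2.2 PF -> "a couple of petaflops"

print(f"{fraction_of_peak:.0%} of peak, {shy_of_linpack:.1f} PF shy of Linpack")
```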
This latest announcement from Stanford didn't discuss FLOPS, but we can gather that the jet engine simulation employed nearly two-thirds of Sequoia's total core count (one million out of a possible 1,572,864). Ideally, all available cores would be put to use, but that proposition gets harder with each generation. Exascale computers, for example, will likely have billions of cores. What will it take to achieve billion-way parallelism?
"Every generation in computing increases the complexity of the system," noted Mark Seager, former assistant department head for advanced computing technology at LLNL's Integrated Computing and Communications Department, in a DOE Office of Science feature.
"Every factor of 10 improvement in computing-delivered performance brings an entirely new vista of problems that we can solve and physics that we can investigate, but to scale up by a factor of 10 in parallelism isn't easy," he added.