November 12, 2012
OAK RIDGE, Tenn., Nov. 12 – The Department of Energy's Oak Ridge National Laboratory is again home to the most powerful computer in the world, according to the Top500 list, a semiannual ranking of computing systems around the world. The list was announced at this week's SC12 International Conference for High Performance Computing, Networking, Storage and Analysis in Salt Lake City, Utah.
Titan replaced the XT5 Jaguar at ORNL last month. Jaguar ranked as the world's fastest computer on the Top500 lists in November 2009 and June 2010, and now Titan is the scientific research community's most powerful computational tool for exploring solutions to some of today's most challenging problems.
"The new Top500 list clearly demonstrates the U.S. commitment to applying high-performance computing to breakthrough science, and that's our focus at Oak Ridge," said ORNL Director Thom Mason. "We'll deliver science from Day One with Titan, and I look forward to the advancements the Titan team will make in areas such as materials research, nuclear energy, combustion and climate science."
Titan is a Cray XK7 system that contains 18,688 nodes, each built from a 16-core AMD Opteron 6274 processor and an NVIDIA Tesla K20X GPU accelerator. Titan also has 710 terabytes of memory.
Its hybrid architecture - the combination of traditional central processing units (CPUs) with graphics processing units (GPUs) - is widely viewed as a first step toward exascale computing, the goal of performing 1,000 quadrillion calculations per second while using 20 megawatts of electricity or less.
Titan reached a speed of 17.59 petaflops on the Linpack benchmark - the application used to rank supercomputers on the Top500 list. Titan is capable of a theoretical peak speed of 27 quadrillion calculations per second - 27 petaflops - while using approximately 9 megawatts of electricity, roughly the amount required to power 9,000 homes.
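For readers who want to check the arithmetic, the 27-petaflop figure follows directly from the node count and the per-node peaks. The sketch below (host-only CUDA C++, compiled with nvcc) uses the published 1.31-teraflop double-precision peak of the Tesla K20X; the Opteron's 2.2 GHz clock and four double-precision operations per core per cycle are assumptions based on commonly cited specifications, not figures from this article.

// peak.cu - back-of-envelope check of Titan's published figures.
// Host-only; builds with nvcc (or any C++ compiler as peak.cpp).
#include <cstdio>

int main() {
    const double nodes      = 18688.0;  // node count, from the article
    const double cpu_cores  = 16.0;     // Opteron 6274 cores per node
    // Assumed: 2.2 GHz clock, 4 double-precision flops per core per cycle.
    const double cpu_tflops = cpu_cores * 2.2e9 * 4.0 / 1e12;  // ~0.141 TF
    const double gpu_tflops = 1.31;     // published Tesla K20X DP peak

    printf("CPU cores total : %.0f\n", nodes * cpu_cores);      // 299,008
    printf("Peak per node   : %.3f TF\n", cpu_tflops + gpu_tflops);
    printf("System peak     : %.1f PF\n",
           nodes * (cpu_tflops + gpu_tflops) / 1000.0);         // ~27 PF
    return 0;
}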
That capability makes Titan 10 times faster than Jaguar with only a 20 percent increase in electrical power consumption - a major efficiency coup made possible by GPUs, which were first created for computer gaming.
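In energy terms, the article's numbers work out to roughly 2 gigaflops per watt on Linpack - an order of magnitude better than Jaguar, but still well short of the roughly 50 gigaflops per watt an exascale machine would need to stay under 20 megawatts. A quick check (host-only CUDA C++; the Jaguar figures of about 1.76 petaflops at 7 megawatts are widely reported values, not taken from this article):

// efficiency.cu - energy efficiency implied by the published numbers.
#include <cstdio>

// 1 petaflop = 1e6 gigaflops and 1 megawatt = 1e6 watts,
// so gigaflops per watt is simply petaflops / megawatts.
static double gf_per_watt(double pflops, double mwatts) {
    return pflops / mwatts;
}

int main() {
    printf("Titan   : %.2f GF/W\n", gf_per_watt(17.59, 9.0));   // ~1.95, article's numbers
    printf("Jaguar  : %.2f GF/W\n", gf_per_watt(1.76, 7.0));    // assumed: widely reported figures
    printf("Exascale: %.2f GF/W\n", gf_per_watt(1000.0, 20.0)); // 50, the stated target
    return 0;
}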
"It's not practical or affordable to continue increasing supercomputing capacity with traditional CPU-only architecture," said ORNL's Jeff Nichols, associate laboratory director for computing and computational sciences. "Combining GPUs and CPUs is a responsible move toward lowering our carbon footprint, and Titan will enable scientific leadership by providing unprecedented computing power for research in energy, climate change, materials, and other disciplines."
Because they handle hundreds of operations simultaneously, GPUs can get through far more work than CPUs in a given time. By relying on its 299,008 CPU cores to guide simulations and allowing its new NVIDIA GPUs to do the heavy lifting, Titan will enable researchers to run scientific calculations with greater speed and fidelity.
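That division of labor maps directly onto how codes are written for a machine like Titan: the CPU stages data and chooses the launch parameters, while thousands of GPU threads each process one element of a data-parallel computation. Below is a minimal CUDA sketch of the pattern - the kernel, sizes, and values are illustrative, not drawn from any actual Titan application.

// saxpy.cu - minimal illustration of the CPU-directs / GPU-computes split.
#include <cstdio>
#include <cuda_runtime.h>

// Each of thousands of concurrent GPU threads handles one array element.
__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;                 // one million elements (illustrative)
    const size_t bytes = n * sizeof(float);
    float *hx = new float[n], *hy = new float[n];
    for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

    // The CPU orchestrates: it stages data on the GPU, picks the launch
    // geometry, then hands the heavy lifting to the accelerator.
    float *dx, *dy;
    cudaMalloc(&dx, bytes);  cudaMalloc(&dy, bytes);
    cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);
    saxpy<<<(n + 255) / 256, 256>>>(n, 3.0f, dx, dy);
    cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);

    printf("y[0] = %.1f (expect 5.0)\n", hy[0]);
    cudaFree(dx); cudaFree(dy);
    delete[] hx; delete[] hy;
    return 0;
}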
"The order of magnitude performance increase of Titan over Jaguar will allow U.S. scientists and industry to address problems they could only dream of tackling before," said Buddy Bland, Titan project manager at DOE's Oak Ridge Leadership Computing Facility. Scientists began using portions of Titan as it was under construction, demonstrating the significant capabilities of the hybrid system. Among early application areas:
* Materials research: A deeper understanding of the magnetic properties of materials could vastly accelerate technologies such as next-generation electric motors and generators, and Titan is already allowing researchers to improve calculations of a material's magnetic states as they vary with temperature.
* Fuel combustion: Because three-quarters of the fossil fuel burned in America powers cars and trucks, improving the efficiency of internal combustion engines is critical. Researchers will use Titan's unprecedented power to model the combustion of large-molecule hydrocarbon fuels such as isooctane (an important component of gasoline), commercially important oxygenated alcohols such as ethanol and butanol, and biofuel surrogates.
* Nuclear power: The U.S. gets about 20 percent of its electricity from nuclear plants, and Titan will lead the way in extending the life cycles of aging reactors while ensuring they remain safe. Titan allows researchers to simulate a fuel rod through one round of use in a reactor core in 13 hours, a job that took 60 hours on the Jaguar system.
Other efforts include calculating specific climate change adaptation and mitigation scenarios, obtaining a molecular description of thin films important for the emerging field of flexible organic electronic devices, and calculating radiation transport, a process important in fields ranging from astrophysics to medical imaging.
"Titan builds on the Oak Ridge Leadership Computing Facility's established reputation for enabling transformational discoveries across the scientific spectrum," Nichols said.
ORNL is managed by UT-Battelle for the Department of Energy. The Department of Energy is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time.
The Oak Ridge Leadership Computing Facility supports national science priorities through deployment and operation of advanced supercomputers as part of DOE's commitment to providing scientists with world-leading research tools.
The Top500 project was started in 1993 to provide a basis for tracking and detecting trends in high-performance computing. Twice a year, a list of the sites operating the 500 most powerful computer systems is released. The best performance on the High Performance Linpack benchmark is used as the performance measure for ranking the systems. The list contains a variety of information, including each system's specifications and its major application areas.
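For context, High Performance Linpack times the solution of a dense n-by-n linear system Ax = b and credits the machine with roughly (2/3)n^3 + 2n^2 floating-point operations; the reported flops rate is that operation count divided by the wall-clock time. A sketch of the bookkeeping (host-only CUDA C++; the problem size and time below are hypothetical, not Titan's actual run):

// hpl_rate.cu - how a Linpack score is derived from problem size and time.
#include <cstdio>

int main() {
    // Assumed, illustrative values - NOT Titan's actual HPL run.
    const double n       = 4.0e6;    // matrix dimension
    const double seconds = 3600.0;   // wall-clock time for the solve

    // HPL's standard operation count for solving Ax = b by LU factorization.
    const double flops = (2.0 / 3.0) * n * n * n + 2.0 * n * n;

    printf("Operations : %.3e\n", flops);
    printf("Rate       : %.2f petaflops\n", flops / seconds / 1e15);
    return 0;
}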