November 10, 2008
OAK RIDGE, Tenn., Nov. 10 -- The latest upgrade to the Cray XT Jaguar supercomputer at the Department of Energy's (DOE's) Oak Ridge National Laboratory (ORNL) has increased the system's peak computing power to 1.64 petaflops (1.64 quadrillion mathematical calculations per second), making Jaguar the world's first petaflop system dedicated to open research. Scientists have already used the newly upgraded Jaguar to complete an unprecedented superconductivity calculation that achieved a sustained performance of more than 1.3 petaflops.
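Those two figures imply a remarkably high sustained-to-peak ratio for a real application. A minimal sketch of the arithmetic, using only the numbers quoted above:

```python
# Sustained-to-peak ratio for the superconductivity run,
# using the performance figures quoted in this article.
peak_pflops = 1.64       # theoretical peak, petaflops
sustained_pflops = 1.3   # sustained application performance, petaflops

ratio = sustained_pflops / peak_pflops
print(f"Sustained fraction of peak: {ratio:.0%}")  # prints: 79%
```

In other words, the application sustained roughly four-fifths of the machine's theoretical peak, an unusually high fraction for a production scientific code.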
"Jaguar is one of science's newest and most formidable tools for advancement in science and engineering," said Dr. Raymond L. Orbach, DOE's Under Secretary for Science. "It will enable researchers to simulate physical processes on a scale never seen before, and approach convergence for dynamical processes never thought possible. High end computation will become the critical third pillar for scientific discovery, along with experiment and theory."
The upgrade at DOE's Oak Ridge National Leadership Computing Facility represents a major milestone in a four-year project, begun in 2004 when DOE's Office of Science launched a sustained effort to upgrade supercomputing capabilities for unclassified research at DOE's complex of national laboratories. The project to build a petaflops machine -- completed on time, on budget and exceeding the original scope -- included partnerships with industry to develop new hardware and computer architectures.
"With the expansion of the leadership computing resources at Oak Ridge, the Department of Energy is continuing to deliver state-of-the-art computational platforms for open, high-impact scientific research," said Michael Strayer, associate director of the DOE Office of Science for Advanced Scientific Computing Research. "The new petaflops machine will make it possible to address some of the most challenging scientific problems in areas such as climate modeling, renewable energy, materials science, fusion and combustion."
Within hours of gaining access to the upgraded Oak Ridge supercomputer, an ORNL team became the first to achieve sustained petascale performance on a scientific application. In 1998, another ORNL team was the first to achieve sustained terascale performance for science. Thomas Zacharia, Associate Laboratory Director for Computing and Computational Sciences, said he expects that Jaguar "will drive new developments that in turn will lead to energy technology innovations."
Supercomputing holds significant promise for U.S. economic competitiveness, including the promise of enabling American industry to perform "virtual prototyping" of complex systems and products. Jaguar will enable companies to reduce development costs and shorten the time required to market new technologies.
Jaguar is the result of a partnership among DOE, ORNL and Cray that has advanced computing capability at a rapid pace. The current upgrade added 200 Cray XT5 cabinets to the existing 84 XT4 cabinets of the Jaguar system.
During the third quarter of 2008, Cray achieved a major milestone by deploying all of the cabinets for the petaflops system ahead of schedule. Starting at 26 teraflops (26 trillion calculations per second) in 2006, the XT system has grown more than 60-fold in capability through a series of upgrades to become what is today the world's most capable system dedicated to open scientific research. Jaguar uses over 45,000 of the latest quad-core Opteron processors from AMD and features 362 terabytes of memory and a 10-petabyte file system. The machine has 578 terabytes per second of memory bandwidth and unprecedented input/output (I/O) bandwidth of 284 gigabytes per second to tackle the biggest bottleneck in leading-edge systems -- moving data into and out of processors. The upgraded Jaguar will undergo rigorous acceptance testing in late December before transitioning to production in early 2009.
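For a rough sense of where the 1.64-petaflop peak figure comes from, the processor count alone nearly accounts for it. The sketch below assumes a 2.3 GHz clock and four floating-point operations per core per cycle -- plausible values for quad-core Opterons of that era, but assumptions not stated in the article:

```python
# Back-of-envelope estimate of Jaguar's theoretical peak performance.
# Assumed (not stated in the article): 2.3 GHz clock and 4 double-precision
# floating-point operations per core per cycle for the quad-core Opterons.
processors = 45_000          # quad-core AMD Opterons (figure from the article)
cores_per_processor = 4
clock_hz = 2.3e9             # assumed clock speed
flops_per_cycle = 4          # assumed per-core flops per clock cycle

peak_flops = processors * cores_per_processor * clock_hz * flops_per_cycle
print(f"Estimated peak: {peak_flops / 1e15:.2f} petaflops")  # ~1.66 petaflops
```

The estimate lands within a few percent of the quoted 1.64-petaflop figure; the small gap would be consistent with a portion of the processors running at a slightly lower clock.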
Among the most powerful open scientific computing systems in the world, Jaguar is already in high demand among scientists who are honing their codes to take advantage of the machine's blistering speed. The Jaguar petaflops system is unique in the balance it strikes among speed, power, and other elements essential to scientific discovery. Several design choices make it an excellent machine for computational science, including almost three times the memory of any other machine, more powerful processors, greater I/O bandwidth, and the high-speed SeaStar network developed specifically for very-high-performance computing.
Annually, 80 percent of the Leadership Computing Facility's resources are allocated through DOE's Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program, a competitive, peer-reviewed process open to researchers from universities, industry, government and non-profit organizations. Scientists and engineers at DOE's Oak Ridge National Laboratory are finding an increasing variety of uses for the Cray XT system. A recent report identified 10 breakthroughs in U.S. computational science during the past year; six of them involved research conducted on the Jaguar supercomputer, including a first-of-its-kind simulation of combustion processes that will be used to design more efficient automobile engines.
The landmark DOE Office of Science report, "Facilities for the Future of Science: A Twenty-Year Outlook," published in 2003, listed ultrascale scientific computing capability as its second-highest priority. The report was the first long-range facilities plan prioritized across disciplines ever issued by a government science funding agency anywhere in the world. In preparing the report, Dr. Orbach set out to increase by a factor of 100 the computing capability available to support open (as opposed to classified) scientific research -- reducing from years to days the time required to simulate complex systems in order to understand combustion, model thermal reactions, analyze climate change data, reveal the chemical mechanisms of catalysts, and study the collapse of a supernova. The report also stipulated that the Office of Science supercomputing facilities would be available to all researchers, subject to proposal peer review.
Source: U.S. Department of Energy