June 22, 2009
Collaboration begins with development of next generation tools for tomorrow's petaflop-plus high performance computing systems
SAN JOSE, Calif., June 22 -- Allinea Software, a leading provider of development tools for large-scale parallel high performance computing applications, announced a new multi-year contract with Oak Ridge National Laboratory (ORNL) to develop petascale debugging software, capable of handling 225,000 or more simultaneous processes, for next generation supercomputers.
The collaboration will begin by scaling up the capabilities of Allinea DDT (the Distributed Debugging Tool) on the Cray XT5 supercomputer, nicknamed "Jaguar," which was installed in 2008 at the ORNL facility in Tennessee. Scientists at ORNL have already used the Cray system to set a new world record for a sustained performance of more than one petaflop (a quadrillion mathematical calculations per second) on two scientific applications.
The decision to select Allinea for the petascale debugging project was the result of an extended evaluation process. "Our users liked the look and feel of Allinea's DDT debugging tool," said Richard L. Graham, Applications Performance Tools Group Leader, ORNL. "We also knew that DDT was being used at several HPC centers, such as Lawrence Livermore Labs, NERSC, TACC, and several European supercomputing centers."
"The incredible speed and processing capacity of the Cray XT5 system enables the scientists and researchers at Oak Ridge to make discoveries and address critical challenges in areas such as climate modeling, renewable energy, and materials science that could not be solved otherwise," said Barry Bolding, Cray's vice president of scalable systems. "We're pleased that Allinea has been selected by Oak Ridge National Laboratory to develop a next generation debugging tool to support our system."
Dr. David Lecomber, CTO and head of the Allinea development team, welcomes the challenge. "This multi-year agreement with ORNL is a huge opportunity for us and confirms that DDT has the capability to perform at every scale, including at the petaflop level. Our project with ORNL, as well as our recent collaborations with other U.S. labs such as TACC and Lawrence Livermore, and our recent agreement with CEA in France, establishes DDT as the technology of choice with major tier one HPC users. It clearly demonstrates the confidence developers have in our products in terms of providing the accessibility, stability, and scalability needed for tomorrow's high performance, petaflop supercomputers."
"Today's supercomputers, like the Cray XT5, have become incredibly large," continues Lecomber. "Parallel jobs now have orders of magnitude higher process counts than ever before. Therefore, a new breed of debugging and optimization tools is needed. Our work with ORNL will empower scientists and researchers to take full advantage of the incredible speed and computational capacity of systems like the Cray XT5, and free them to concentrate on conducting scientific research rather than on the mechanics of petascale computing."
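To make the scaling problem concrete: no developer can inspect 225,000 processes one at a time, so scalable debuggers typically aggregate processes that are in the same state into a handful of groups. The sketch below is purely illustrative and is not Allinea's implementation; the function name, the example stack traces, and the rank numbering are all invented for the example.

```python
from collections import defaultdict

def merge_stacks(process_stacks):
    """Group processes by identical stack trace, so a debugger can
    present a few groups instead of hundreds of thousands of
    individual processes."""
    groups = defaultdict(list)
    for rank, stack in process_stacks.items():
        groups[tuple(stack)].append(rank)
    return {stack: sorted(ranks) for stack, ranks in groups.items()}

# Illustrative only: six MPI ranks, but just two distinct states.
stacks = {
    0: ["main", "solve", "mpi_allreduce"],
    1: ["main", "solve", "mpi_allreduce"],
    2: ["main", "solve", "mpi_allreduce"],
    3: ["main", "io_write"],
    4: ["main", "solve", "mpi_allreduce"],
    5: ["main", "io_write"],
}
merged = merge_stacks(stacks)
for stack, ranks in merged.items():
    print(len(ranks), "ranks at", " > ".join(stack))
```

At petascale the same idea applies with 225,000 ranks instead of six: the developer sees a small number of distinct states, which is what makes interactive debugging tractable at all.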
"We're very pleased to be working with HPC leaders like ORNL and Cray. This next generation of debugging tools we are developing will not only benefit our customers in the U.S., but all the customers Allinea serves on a global basis."
Allinea Software will be an exhibitor at the International Supercomputing Conference (ISC2009) in Hamburg, Germany, from June 23-26, 2009.
About Oak Ridge National Laboratory
Oak Ridge National Laboratory (ORNL) is the Department of Energy's largest science and energy laboratory and has been managed since 2000 by a partnership of the University of Tennessee and Battelle. With a staff of more than 4,400, ORNL is the world's foremost center for neutron science, and is an international leader in a range of other scientific areas including energy, high-performance computing, systems biology, materials science at the nanoscale, and national security. Its Leadership Computing Facility is home to the world's most powerful supercomputers for open science. For more information, visit www.ornl.gov.
About Allinea Software Inc.
Allinea Software Inc. is a leading supplier of tools for multicore and high performance computing (HPC). The company was founded in 2001 by experts in large-scale parallel computing from Warwick and Oxford Universities, and expanded its activities in the US in 2007. Allinea's products are used at commercial, government, and academic sites worldwide, and set new standards for affordability and ease-of-use in parallel and multicore programming. With new product features aimed at multi-threaded applications and novel computing architectures, Allinea is bringing its wealth of experience in parallel tools to the rapidly-expanding arena of multicore processing. For more information, visit www.allinea.com.
Source: Allinea Software Inc.