The Cray XT3 – A Unique Resource at PSC

By Nicole Hemsoth

June 9, 2006

The Pittsburgh Supercomputing Center (PSC) is well-known for its cutting-edge research and its ability to transform new technologies into useful scientific tools. Within the past year, PSC's new Cray XT3 supercomputer has been used for some exciting new work and has proved to be one of the most powerful computational resources on the TeraGrid.

HPCwire recently got the opportunity to talk with the two PSC scientific co-directors, Michael Levine and Ralph Roskies, and ask them about new developments at PSC and about what's in store for the center's future. In part one of this two-part interview, Roskies and Levine discuss the significance of PSC's Cray XT3 supercomputer. 

HPCwire: PSC's 10-teraflop Cray XT3, which became a production resource on the TeraGrid last October, was the first Cray XT3 anywhere and is the only one available to NSF researchers. What led you to decide on this system and what advantages does it have as a resource for computational science?

Roskies: We have discovered in the past that if we can bring a substantially new technical capability into production, we can open up new fields of science. In particular, we seek systems that, when used as a whole, make it possible to tackle problems that were previously infeasible.

One particular technical strength of the XT3 that attracted us is its interconnect. Like LeMieux, our HP terascale system that preceded it, the XT3 is a tightly coupled system with a very strong interconnect. The XT3's interconnect is a significant advance over LeMieux's, and it's substantially better than that of competing systems.

The superior interconnect is a large advantage for projects that demand hundreds or thousands of processors working together. Because of the advanced interconnect, the processors share information much more quickly than they otherwise would, and this makes a very meaningful difference for many of the most demanding kinds of science that can be attacked with supercomputing.
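
To make concrete what "tightly coupled" means in practice, here is a minimal MPI ping-pong sketch of the sort commonly used to gauge interconnect latency. It is an illustrative microbenchmark under assumed parameters (message size, iteration count), not PSC or Cray code:

/* Minimal MPI ping-pong latency sketch (illustrative only, not PSC code).
 * Ranks 0 and 1 bounce a small message back and forth; the average
 * round-trip time is dominated by interconnect latency, which is what
 * tightly coupled applications are most sensitive to. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int iters = 1000;
    char msg[8] = {0};                 /* small message: latency-bound */
    MPI_Status st;

    double t0 = MPI_Wtime();
    for (int i = 0; i < iters; i++) {
        if (rank == 0) {
            MPI_Send(msg, 8, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(msg, 8, MPI_CHAR, 1, 0, MPI_COMM_WORLD, &st);
        } else if (rank == 1) {
            MPI_Recv(msg, 8, MPI_CHAR, 0, 0, MPI_COMM_WORLD, &st);
            MPI_Send(msg, 8, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double t1 = MPI_Wtime();

    if (rank == 0)
        printf("average one-way latency: %g microseconds\n",
               (t1 - t0) / (2.0 * iters) * 1e6);

    MPI_Finalize();
    return 0;
}

On a machine with a strong interconnect this one-way latency sits in the low microseconds; on a loosely coupled cluster it can be an order of magnitude higher, which is the difference Roskies is pointing to.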

The other feature that attracted us to the XT3 was the excellent balance the Opterons display between processor speed and memory bandwidth. To realize a larger fraction of peak performance on real scientific applications, one has to be sure one can supply the processors with enough operands to keep them busy.
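
A STREAM-style "triad" loop is the standard way to see whether memory can keep a processor supplied with operands. The sketch below is illustrative only; the array size is an arbitrary choice meant to be much larger than cache so the loop is bandwidth-bound rather than compute-bound:

/* STREAM-style triad sketch (illustrative): measures how quickly memory
 * can feed a processor with operands. Arrays are sized far beyond cache
 * so the loop speed is set by memory bandwidth, not the ALUs. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (1 << 25)   /* ~33M doubles per array, ~268 MB each */

int main(void) {
    double *a = malloc(N * sizeof *a);
    double *b = malloc(N * sizeof *b);
    double *c = malloc(N * sizeof *c);
    if (!a || !b || !c) return 1;
    for (long i = 0; i < N; i++) { b[i] = 1.0; c[i] = 2.0; }

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (long i = 0; i < N; i++)
        a[i] = b[i] + 3.0 * c[i];      /* triad: two loads, one store */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    /* three arrays of N doubles cross the memory bus once each */
    printf("triad bandwidth: %.2f GB/s\n",
           3.0 * N * sizeof(double) / secs / 1e9);
    free(a); free(b); free(c);
    return 0;
}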

HPCwire: On a processor-clock basis, the XT3 is 2.4 times faster than LeMieux, your six-teraflop system, yet reports are that the XT3 boosts performance more than ten-fold on some applications. How is this accomplished?

Levine: We've run dozens of codes on the XT3 over the past year, and sometimes we're seeing performance increases of an order of magnitude and more. There are several factors involved in this. First is the interconnect. As Ralph pointed out, the XT3 interconnect is a substantial improvement over LeMieux's. That factor alone represents about an order of magnitude for large-scale parallel applications, over and above the speedup from faster processors.

The XT3 also has better memory bandwidth than LeMieux. The interconnect provides the means for each processor to communicate with other processors; memory bandwidth is the ability of each processor to communicate with its own local memory. Even correcting for the faster processor speed, the memory bandwidth of the XT3 is 33 percent better than LeMieux's.

A third factor is the software. LeMieux's operating system is more intrusive. The operating system in LeMieux and in most clusters resides in each processor and is meant to support that processor as an independent entity. This is good if the objective is for each processor to operate independently, but that's not our objective.

The features that allow the processors to be independently supported get in the way when you have a large number of processors working together. They take up space in memory. They also make requests to the processor at inopportune times. Depending on the application, the operating system can severely reduce efficiency.

The operating system of the XT3 is designed to avoid both of these problems. It has only what it needs to run calculations; it doesn't have to support the system as a whole. That work is done by a small fraction of the processors, which are dedicated to supporting the entire machine.
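
The effect Levine is describing is commonly called "OS noise" or "jitter," and a standard way to expose it is to time many identical small units of work and compare the best and worst iterations. A minimal sketch of that measurement, illustrative only and not a PSC tool:

/* Fixed-work "OS noise" sketch (illustrative): run the same small unit
 * of work many times and record the fastest and slowest iterations.
 * On a node with an intrusive OS, daemons and interrupts occasionally
 * steal the processor, so the worst case far exceeds the best. */
#include <stdio.h>
#include <time.h>

static double now(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

int main(void) {
    volatile double x = 1.0;
    double min_t = 1e9, max_t = 0.0;

    for (int rep = 0; rep < 10000; rep++) {
        double t0 = now();
        for (int i = 0; i < 100000; i++)   /* fixed unit of work */
            x = x * 1.0000001 + 0.0000001;
        double dt = now() - t0;
        if (dt < min_t) min_t = dt;
        if (dt > max_t) max_t = dt;
    }
    /* a large worst/best ratio means the OS interrupted the work */
    printf("best %.3f ms, worst %.3f ms, ratio %.1fx\n",
           min_t * 1e3, max_t * 1e3, max_t / min_t);
    return 0;
}

At thousands of processors, every such stall delays everyone waiting at the next synchronization point, which is why a stripped-down compute-node kernel pays off for tightly coupled jobs.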

HPCwire: How does the XT3 as a system differ from Linux clusters, which, at least nominally, offer more capacity per dollar?

Roskies: Essentially, it's what we just talked about. The network between processors performs at a much higher level, and the operating system is better designed to facilitate large-scale parallelism. The fact that there isn't a full operating system on each node gives you much more reliability. The XT3 is easier to manage as a unified system because it's designed to operate that way – as opposed to hundreds or thousands of stand-alone processors connected without careful attention to how they interact.

In terms of raw capacity per dollar, it's certainly true that clusters are less expensive. But you don't have the advanced interconnect, the manageability or the reliability.

There are projects that fit well with clusters – loosely coupled projects, or what can be called “pleasantly parallel”: in effect, a task that breaks down into many individual jobs running independently of each other, with no need for interprocessor communication. For jobs like that, the XT3 isn't a cost-effective use of resources.
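
By contrast with the tightly coupled pattern sketched earlier, a "pleasantly parallel" workload needs no interprocessor communication at all. A minimal sketch, where run_case is a hypothetical stand-in for one independent job in a parameter sweep:

/* "Pleasantly parallel" sketch (illustrative): every rank works on its
 * own slice of a parameter sweep and never exchanges a message with the
 * others, so a commodity cluster serves it as well as an XT3 would. */
#include <mpi.h>
#include <stdio.h>

/* hypothetical stand-in for one independent job */
static double run_case(int param) {
    double acc = 0.0;
    for (int i = 1; i <= 1000000; i++)
        acc += (double)param / i;
    return acc;
}

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int ncases = 1024;
    /* each rank takes every size-th case; no messages are exchanged */
    for (int p = rank; p < ncases; p += size)
        printf("rank %d: case %d -> %f\n", rank, p, run_case(p));

    MPI_Finalize();
    return 0;
}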

For many important large-scale scientific applications, however, the parallelized parts of the overall task need to closely coordinate and communicate with each other as the computation proceeds. Many of the major projects we've worked on at Pittsburgh are of this tightly coupled nature: the storm forecasting work of Kelvin Droegemeier, for example; earthquake simulation; molecular dynamics, especially the NAMD application developed in Klaus Schulten's group, which scales well to a large number of processors; and other kinds of molecular modeling.

Basically, any application that scales efficiently to hundreds or thousands of processors and requires a high degree of interprocessor communication is going to perform better on the XT3 – by a large factor.

HPCwire: What particular strengths does the XT3 add to the TeraGrid repertoire of resources?

Levine: It brings all the advantages we've already talked about in terms of interconnect, memory bandwidth, reliability, and operating system and provides a resource for applications that can exploit these advantages. Although the compute portion of the XT3 is specialized for computation in the ways we've talked about, the XT3 also has a “public face” carried by Linux nodes that allow us to smoothly integrate the XT3 with other components of the TeraGrid.

The predecessor system, LeMieux, a less evolved version of this tightly coupled architecture, has been a major production resource for NSF and the TeraGrid since 2001. LeMieux demonstrated excellent scaling – the ability to add a large number of processors without substantially degrading per-processor performance. For this reason, researchers with large-scale parallel projects fairly quickly caught on to the advantages of the system, and most of its computing time has been devoted to jobs using at least 512 processors, many using 1,024 processors and more.

The XT3 represents newer, better technology than LeMieux, and it succeeds LeMieux as the best TeraGrid resource for the most demanding highly parallel projects – the kind often referred to as “capability computing,” the ones that stretch the envelope of what can be done in scientific computing.

HPCwire: What scientific results to date have come from the availability of the XT3?

Roskies: One of our early successes with the XT3 is Paul Woodward's work on turbulence. He and his colleagues in Minnesota turned to the XT3 specifically because of its superior interconnect, which for them has enabled interactive steering of turbulent flow simulations in real time. Nobody has done this before.

They want to be able to represent the large-scale effects of small-scale turbulence, a problem that comes up in many kinds of flow, from pipes to internal-combustion engines to atmospheric weather patterns. Their focus is turbulent convection in giant stars. From small-scale runs they can define parameters they can then use with large-scale models.

They demonstrated their ability to do smaller-scale, interactive runs with the XT3 twice last year – at iGrid in San Diego and at SC|05. They relied not only on the XT3's very fast interconnect, but also on software, called PDIO, that our staff developed. PDIO expands on the basic I/O capabilities of the XT3, making it possible for an application to route data from the XT3 compute sector in real time to remote users on the wide-area network. This makes it possible for Paul and his team to visualize the data live at the other end of the TeraGrid pipe and adjust parameters on the fly to see how they affect the simulation.

Some of our PSC scientists have also used the XT3 to good effect. Yang Wang, a physicist here, has deployed software he helped develop called LSMS, which performs astonishingly well on the XT3. It sustains more than 8 teraflops on 2,048 processors – 82 percent of theoretical peak. Yang used LSMS for an ab initio quantum calculation of the magnetic and electronic structure of an iron nanoparticle of more than 4,400 atoms. This size of nanoparticle hasn't been modeled before at the quantum level, and the XT3 makes this possible. Being able to do these calculations at this particle size and larger is going to be important in developing next-generation data-storage technologies.
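
As a rough sanity check of that 82 percent figure, assume 2.4 GHz Opterons retiring two floating-point operations per clock (an assumption about the configuration, not stated in the interview):

\[
P_{\text{peak}} = 2048 \times 2.4\,\text{GHz} \times 2\,\tfrac{\text{flops}}{\text{cycle}} \approx 9.8\ \text{Tflop/s},
\qquad 0.82 \times 9.8 \approx 8.1\ \text{Tflop/s},
\]

which is consistent with the quoted “more than 8 teraflops.”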

A couple of our scientists, Troy Wymore and Shawn Brown, used the XT3 for quantum mechanical/molecular mechanics simulations of aldehyde dehydrogenase, a major family of enzymes. They used 900 processors with software called Dynamo, and they looked at proton tunneling effects in the enzyme's active site. These enzymes are involved in a couple of metabolic diseases, and they affect how well chemotherapy drugs work to fight cancer.

There's also been substantial work in Michael Klein's group at the University of Pennsylvania, which does molecular modeling using classical and quantum molecular dynamics codes. Their codes need high bandwidth, both interprocessor and to memory, and they've found that the XT3 is the best machine available for a large proportion of their work. It has stretched the scalability of their codes – by a factor of two with NAMD and also with Car-Parrinello molecular dynamics – and dramatically increased productivity.

HPCwire: Cray, Inc. has undergone personnel changes within the past year. Has that affected their ability to support the XT3 or changed their relationship with PSC?

Levine: We have been working on integrating the XT3 into the scientific community and into the operational environment at PSC for over two years. We've benefited a great deal from cooperation with Cray and also with Sandia National Labs.

The XT3 – as many people know – is the product version of Red Storm, a machine commissioned from Cray by Sandia. Red Storm has some features that are important in a classified environment; those aren't important to us and aren't part of the XT3, but otherwise it's the same machine – fundamentally architected by people at Sandia, with much of the detailed design done by Cray.

The XT3 is off to a strong start in the HPC market, with large-scale installations in addition to ours either installed or due soon at other major sites, including Oak Ridge National Laboratory, the United Kingdom's AWE plc, the Swiss National Supercomputer Center, the Japan Advanced Institute of Science and Technology, the Japan Science and Technology Agency and the Western Australia Supercomputing Program.

The personnel changes at Cray respond to this marketplace success and support a stronger focus on the XT3 and its follow-on products. This improves Cray's ability to maintain strong relationships with all its customers, including PSC, and to support the XT3, and it improves our ability to interact with them. That's quite important because, as with any fundamentally new machine such as this, a great deal of close work has to go on between the vendor and the early adopters – which has been our role with many systems.

—–

Dr. Michael Levine is Professor of Physics at Carnegie Mellon University, specializing in theoretical particle physics. He is also a founder and Co-Scientific Director of the Pittsburgh Supercomputing Center (PSC). He is the author of numerous papers in computational, theoretical, and particle physics. His physics research over the last few years has been in high-order quantum electrodynamics. His earlier work in physics includes a series of papers, written in collaboration with Professor Ralph Roskies, applying symbolic computation methods and computational systems of his own devising to fundamental problems in electrodynamics. Professor Levine initiated Carnegie Mellon's degree program in Computational Physics and continues to teach courses in that program. In 1984, together with Ralph Roskies and James Kasdorf of Westinghouse Electric Company, he wrote the proposal to the National Science Foundation for what was eventually to become the PSC. As Scientific Director at PSC, he continues to oversee operations, plan its future course, and concern himself with its scientific impact. He also serves as Associate Provost for Scientific Computing at Carnegie Mellon University.

Dr. Ralph Roskies is Professor of Physics at the University of Pittsburgh and a founder and Co-Scientific Director of the Pittsburgh Supercomputing Center (PSC). He is the author of over 60 papers in theoretical elementary particle physics. In 1984, together with Professor Michael Levine of Carnegie Mellon University and James Kasdorf of Westinghouse, he developed the proposal to the National Science Foundation for what became the PSC. As Scientific Director, Roskies oversees operations, plans its future course, and concerns himself with its scientific impact. The PSC has been a national leader in providing the highest-capability computing to the US national research community. It has pioneered developments in file systems, heterogeneous computing, parallel algorithms, and scientific visualization. It currently fields the Terascale Computing System and the first Cray XT3, two of the world's most powerful academically based computing facilities dedicated to open scientific research. Roskies' pivotal role in developing and implementing the NSF allocation process has given him a very broad overview of leading computational science and close ties to its most prominent practitioners. He has served as an advisor to, and a reviewer of, a large number of U.S. and international supercomputing centers.
