Cray to Provide NOAA with Two AMD-Powered Supercomputers

By Tiffany Trader

February 24, 2020

Editor’s note: This article is a follow-up to our initial coverage. We’ve since obtained the system details, which we report on here. Also read our related coverage on NOAA’s AI strategy.

The United States’ National Oceanic and Atmospheric Administration (NOAA) last week announced plans for a major refresh of its operational weather forecasting supercomputers, part of a 10-year, $505.2 million program that will secure two HPE Cray systems for NOAA’s National Weather Service. The machines will be fielded later this year and put into production in early 2022. The long runway gives the managed service provider, CSRA (a General Dynamics Information Technology company), about a year to get the equipment in place, configured and accepted; then, from February of 2021 to February of 2022, NOAA will transition its code base over from the current systems.

With this hardware upgrade, ongoing model enhancements and NOAA’s emerging Earth Prediction Innovation Center (EPIC), NOAA says the United States is keeping pace with other leading weather forecasting centers around the world. The prominence of U.S. weather forecasting capabilities has at times been called into question, perhaps most notably when U.S. models stumbled while forecasting Hurricanes Sandy and Harvey.

The new supercomputing deployment represents a tripling of operational computational capacity for the U.S. weather forecasting agency.

Each identical Cray Shasta system spans 2,560 dual-socket nodes, housed in 10 cabinets, powered by second-generation 64-core AMD Epyc 7742 ‘Rome’ processors and connected by Cray’s Slingshot network. The total system memory per machine is 1.3 petabytes. Cray’s ClusterStor systems provide 26 petabytes of storage per site (a flash file system with 614 terabytes of usable space and two HDD file systems with 12.5 petabytes of usable storage each).
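For readers who want to sanity-check those figures, here is a minimal back-of-envelope sketch in Python. The node, socket, core and memory totals come from the specs above; the derived core count and per-node memory are simple arithmetic, not specifications published by NOAA or HPE.

```python
# Back-of-envelope check of the published per-system figures (illustrative only).
nodes = 2560                # dual-socket Shasta nodes per system (10 cabinets)
sockets_per_node = 2
cores_per_socket = 64       # AMD Epyc 7742 "Rome"
total_memory_pb = 1.3       # total system memory, in petabytes

total_cores = nodes * sockets_per_node * cores_per_socket
memory_per_node_gb = total_memory_pb * 1e6 / nodes  # PB -> GB, decimal units

print(f"Cores per system:        {total_cores:,}")              # 327,680
print(f"Approx. memory per node: {memory_per_node_gb:.0f} GB")  # ~508 GB (~512 GB nominal)
```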

The peak theoretical performance of each Cray system is 12 petaflops, which, combined with NOAA’s research and development machines, brings the agency’s aggregate operational and research capacity to 40 peak petaflops. Shasta systems haven’t hit the Top500 list yet, but at a ballpark 80 percent Linpack efficiency, they’d be looking at roughly a 25th-place ranking on the current (Nov. 2019) list. As always, though, and nowhere more so than for weather prediction and storm forecasting, the only thing that matters is real-world performance. HPCwire spoke with some of the NOAA/NWS HPC team about what international leadership means to them.
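The 12-petaflop figure is consistent with the node count quoted above. The sketch below reproduces the arithmetic; note that the 2.25 GHz clock and the 16 double-precision FLOPs per core per cycle are assumptions about the Epyc 7742, not numbers stated in the article.

```python
# Rough peak-FLOPS estimate for one system. The 2.25 GHz clock and 16 FP64
# FLOPs/core/cycle (AVX2 FMA) are assumptions, not figures from the article.
nodes = 2560
cores_per_node = 2 * 64          # dual-socket AMD Epyc 7742
clock_ghz = 2.25                 # assumed base clock
flops_per_core_cycle = 16        # assumed double-precision FLOPs per core per cycle

peak_pflops = nodes * cores_per_node * clock_ghz * flops_per_core_cycle / 1e6
linpack_pflops = peak_pflops * 0.80  # the article's ~80% Linpack-efficiency ballpark

print(f"Peak:    ~{peak_pflops:.1f} PF")     # ~11.8 PF, consistent with the quoted 12 PF
print(f"Linpack: ~{linpack_pflops:.1f} PF")  # ~9.4 PF, roughly 25th on the Nov. 2019 list
```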

“You can imagine there are a lot of different ways you can measure leadership,” said Brian Gross, director of Environmental Modeling Center for NOAA’s National Weather Service. “[You can] measure it by hurricane track, accuracy of the upper level flow, surface temperature anomalies… it really depends on what your application is. [Regarding] how good the model is, we’re always compared to some of the [leading] centers worldwide. And we actually work pretty closely with the other worldwide operational centers. We have scientific exchanges with the European Center, for example. So the idea that we’re in a fierce competition is kind of a weird one for us as we work with these folks on a pretty regular basis.”

Photo of Luna courtesy NOAA (2016)

Housed at GDIT-managed facilities in Manassas, Virginia, and Phoenix, Arizona, the new Crays will replace eight smaller machines that comprise a heterogeneous mix of processor and cluster types. Moving to a unified architecture will streamline NOAA’s operations, while maintaining the weather center’s primary-plus-backup workflow (more on that below).

The outgoing equipment includes older IBM iDataplex gear, a pair of Cray XC40s (Luna and Surge) deployed in 2016, and a pair of Dell systems (Mars and Venus) installed in 2018. The agency is currently adding Dell machines to update the iDataplex systems so they remain maintainable for the final two years of the managed service contract (with IBM).

Recall that NOAA’s operational systems are still managed by IBM, which procured the Cray and Dell systems after its x86 server business was sold to Lenovo in 2014. That IBM contract is up in February of 2022, at which time GDIT will take over.

The transition to a new managed service provider coincides with a change in filesystem technology. After about 20 years on GPFS, NOAA is switching its systems over to Lustre. The move should not be seen as reflecting NOAA’s preference for a given filesystem; rather, the agency specified performance-based requirements for the contract, including 99 percent system availability, as part of the open bid process, and let industry decide what the best fit was in terms of the total proposed solution. “We were essentially looking for what the best fit was for what the integrator could provide…[and] the best performance-per-dollar with the availability requirements that we require for operational use of the system,” David Michaud, director of the Office of Central Processing for NOAA’s National Weather Service, told HPCwire.

The decision to go with homogeneous x86 systems was made in a similar manner: NOAA asked the integrator to provide the best solution on the benchmark codes it utilizes. Meanwhile, NOAA is exploring GPU technology on its research and development systems and keeping its options open for the next hardware procurement. The contract with GDIT (an eight-year base with a two-year optional renewal) is split into two periods. The first task order covers the two Cray CPU-based systems, but the second period is still undefined, affording NOAA time to assess its options as technology develops and as leadership computing facilities, many of which have moved or are moving to heterogeneous GPU-powered systems, help drive technological advancements.

The twin Cray systems are perfectly symmetrical between the geographically separated sites (Manassas, Virginia, and Phoenix, Arizona) and take turns acting as the primary or backup system. Michaud explained that on any given day, NOAA runs its full operational 24×7 modeling suite in production on one of the systems. While not serving as the primary, the backup system is used for transition-to-operations and other development work, and NOAA can swap the primary and backup roles within a 15-minute window, which it does regularly, at least on a monthly basis.

The arrangement assures redundancy, as data is always mirrored to the backup system, offering advantages from a troubleshooting and maintenance perspective and providing an added layer of protection for the mission- and safety-critical work of weather prediction. “If we make a change to one, we know we can test it, and then we can apply the change to the back-up system as well,” said Michaud. “We know if one’s not behaving similar to the other, we can identify the differences and troubleshoot them. And then, the other thing that’s really important is for the type of work that we do, given storm systems and other weather systems can be massive in scale and encompass hundreds of miles, it’s really beneficial for us to have the separation of the sites, so that if we have any issues on one site, we can switch to the other site.”

The significant supercomputing upgrade targets three separate areas for model improvements: resolution, complexity and the size of ensembles. “We want to go to higher resolutions that would capture the finer-scale features in the phenomena we’re predicting,” Gross told us. “We want to create and implement more comprehensive models to include as much of the complexity that is going on in the atmosphere as we can in our models — can we improve our model physics, for example. And then the last piece is growing the size of ensembles that we use, which give us a lot of information in how certain we can be in any particular forecast; ensembles inform our level of confidence in the numerical guidance that we produce. All of these areas are going to be improved when we move on to the system.”
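To make the ensemble point concrete, the toy sketch below (illustrative Python, not NOAA code) shows how a set of forecasts started from perturbed initial states yields both a mean forecast and a spread; the spread is the confidence information Gross describes, and a larger ensemble samples that uncertainty more thoroughly. The 31-member count and all numerical values here are hypothetical.

```python
import numpy as np

# Toy ensemble forecast of a single quantity (e.g., temperature at one point).
# Purely illustrative; this is not NOAA's ensemble system or its statistics.
rng = np.random.default_rng(seed=0)

def run_member(perturbation: float) -> float:
    """Stand-in for one model run started from a perturbed initial state."""
    base_forecast = 21.0                       # hypothetical forecast value, deg C
    return base_forecast + perturbation + rng.normal(scale=0.5)

ensemble_size = 31                             # illustrative member count
perturbations = rng.normal(scale=1.0, size=ensemble_size)
forecasts = np.array([run_member(p) for p in perturbations])

ens_mean = forecasts.mean()                    # best-estimate forecast
ens_spread = forecasts.std(ddof=1)             # larger spread -> lower confidence
print(f"Ensemble mean {ens_mean:.1f} C, spread {ens_spread:.1f} C "
      f"over {ensemble_size} members")
```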

Last summer, NOAA upgraded its deterministic global forecast system, and its next big upgrade will be the ensemble system. Currently, NOAA is incorporating the new dynamical core that it put into the deterministic Global Forecast System (GFS) into the Global Ensemble Forecast System (GEFS). “We’re aligning the ensemble system now with the deterministic system — and part of that is complexity and ensemble size. We’re looking forward to increasing the ensemble size of the GEFS so that we can get better information on forecast services,” said Gross.

NOAA’s contract with GDIT has a total estimated value of $505.2 million, spanning a base period of eight years with a two-year optional renewal. GDIT provides its supercomputing resources as-a-service through NOAA’s Weather and Climate Operational Supercomputing System (WCOSS) contract. The value of the first task order, written under the larger contract, is $150 million to provide managed services over five years.
