Cray to Provide NOAA with Two AMD-Powered Supercomputers

By Tiffany Trader

February 24, 2020

Editor’s note: This article is a follow-up to our initial coverage. We’ve since obtained the system details, which we report here. Also read our related coverage on NOAA’s AI strategy.

The United States’ National Oceanic and Atmospheric Administration (NOAA) last week announced plans for a major refresh of its operational weather forecasting supercomputers. The refresh, part of a 10-year, $505.2 million program, will secure two HPE Cray systems for NOAA’s National Weather Service, to be fielded later this year and put into production in early 2022. The long runway gives the managed service provider, CSRA (a General Dynamics Information Technology company), about a year to get the equipment in place, configured and accepted; then, from February 2021 to February 2022, NOAA will transition its code base over from the current systems.

With this hardware upgrade, ongoing model enhancements and NOAA’s emerging Earth Prediction Innovation Center (EPIC), NOAA says the United States is keeping pace with other leading weather forecasting centers around the world. The prominence of U.S. weather forecasting capabilities has at times been called into question, perhaps most notably when U.S. models stumbled while forecasting Hurricanes Sandy and Harvey.

The new supercomputing deployment represents a tripling of operational computational capacity for the U.S. weather forecasting agency.

Each identical Cray Shasta system spans 2,560 dual-socket nodes, housed in 10 cabinets, powered by second-generation 64-core AMD Epyc 7742 ‘Rome’ processors and connected by Cray’s Slingshot network. Total system memory per machine is 1.3 petabytes. Cray’s ClusterStor systems provide 26 petabytes of storage per site (a flash storage system with 614 terabytes of usable space and two HDD file systems with 12.5 petabytes of usable storage each).

The peak theoretical performance of each Cray system is 12 petaflops, which, combined with NOAA’s research and development machines, brings the agency’s aggregate operational and research capacity to 40 peak petaflops. Shasta systems haven’t hit the Top500 list yet, but at a ballpark 80 percent Linpack efficiency, each machine would land around 25th place on the current (Nov. 2019) list. As always, though, and never more so than for weather prediction and storm forecasting, the only thing that matters is real-world performance. HPCwire spoke with some of the NOAA/NWS HPC team about what international leadership means to them.
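Those figures hang together on the back of an envelope. The sketch below is our arithmetic, not NOAA’s: it assumes the Epyc 7742’s published 2.25 GHz base clock and Zen 2’s 16 double-precision flops per core per cycle (two 256-bit FMA units); everything else comes from the system specs above.

```python
# Back-of-the-envelope check on the published system figures.
# Assumed (not from the article): Epyc 7742 base clock of 2.25 GHz and
# 16 double-precision FLOPs per core per cycle on Zen 2.

NODES = 2560            # dual-socket nodes per system (from the article)
SOCKETS_PER_NODE = 2
CORES_PER_SOCKET = 64   # AMD Epyc 7742
BASE_CLOCK_HZ = 2.25e9  # assumed base clock
FLOPS_PER_CYCLE = 16    # assumed DP FLOPs/core/cycle

cores = NODES * SOCKETS_PER_NODE * CORES_PER_SOCKET
peak_flops = cores * BASE_CLOCK_HZ * FLOPS_PER_CYCLE
print(f"Cores per system: {cores:,}")              # 327,680
print(f"Peak: {peak_flops / 1e15:.1f} petaflops")  # ~11.8, i.e. ~12 PF

# The article's ballpark Linpack estimate at 80 percent efficiency:
rmax = 0.80 * 12e15
print(f"Estimated Rmax: {rmax / 1e15:.1f} petaflops")  # 9.6 PF

# Memory per node implied by 1.3 PB of total system memory:
print(f"Memory/node: {1.3e15 / NODES / 1e9:.0f} GB")   # ~508 GB, i.e. ~512 GiB
```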

“You can imagine there are a lot of different ways you can measure leadership,” said Brian Gross, director of the Environmental Modeling Center for NOAA’s National Weather Service. “[You can] measure it by hurricane track, accuracy of the upper level flow, surface temperature anomalies… it really depends on what your application is. [Regarding] how good the model is, we’re always compared to some of the [leading] centers worldwide. And we actually work pretty closely with the other worldwide operational centers. We have scientific exchanges with the European Center, for example. So the idea that we’re in a fierce competition is kind of a weird one for us as we work with these folks on a pretty regular basis.”

Photo of Luna courtesy NOAA (2016)

Housed at GDIT-managed facilities in Manassas, Virginia, and Phoenix, Arizona, the new Crays will replace eight smaller machines that comprise a heterogeneous mix of processor and cluster types. Moving to a unified architecture will streamline NOAA’s operations, while maintaining the weather center’s primary-plus-backup workflow (more on that below).

The outgoing equipment includes older IBM iDataPlex gear, a pair of Cray XC40s (Luna and Surge) deployed in 2016, and a pair of Dell systems (Mars and Venus) installed in 2018. The agency is currently adding Dell machines to update the iDataPlex systems so they remain maintainable for the final two years of the managed service contract (with IBM).

Recall that NOAA’s operational systems are still managed by IBM, which procured the Cray and Dell systems after its x86 business was transferred to Lenovo in 2014. That IBM contract is up in February 2022, at which time GDIT will take over.

The transition to a new managed service provider coincides with a change in filesystem technology: after about 20 years on GPFS, NOAA is switching its systems over to Lustre. The move should not be seen as reflecting NOAA’s preference for a given filesystem. Rather, as part of the open bid process, the agency specified performance-based requirements and an availability target (99 percent system availability), then let industry decide the best fit in terms of the total proposed solution. “We were essentially looking for what the best fit was for what the integrator could provide…[and] the best performance-per-dollar with the availability requirements that we require for operational use of the system,” David Michaud, director of the Office of Central Processing for NOAA’s National Weather Service, told HPCwire.
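For a sense of what that 99 percent availability figure implies (a rough reading of the requirement on our part, not contract language):

```python
# Quick arithmetic on the 99 percent availability requirement.
# Interpreting it as a simple annual uptime fraction is our assumption.
HOURS_PER_YEAR = 365.25 * 24
allowed_downtime = (1 - 0.99) * HOURS_PER_YEAR
print(f"Max downtime at 99% availability: {allowed_downtime:.0f} hours/year")
# ~88 hours/year, or roughly 7.3 hours/month -- one reason the mirrored
# primary/backup arrangement described below matters operationally.
```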

The decision to go with homogeneous x86 systems was made in a similar manner: NOAA asked the integrator to provide the best solution on the benchmark codes the agency utilizes. Meanwhile, NOAA is exploring GPU technology on its research and development systems and keeping its options open for the next hardware procurement. The contract with GDIT (an eight-year base with a two-year optional renewal) is split into two periods. The first task order covers the two CPU-based Cray systems; the second period is still undefined, affording NOAA time to assess its options as technology develops and as leadership computing facilities, many of which have moved or are moving to heterogeneous GPU-powered systems, help drive technological advancements.

The twin Cray systems are perfectly symmetrical between the geographically segregated sites (Manassas, Virginia, and Phoenix, Arizona) and take turns acting as the primary or backup system. Michaud explained that on any given day, NOAA runs its full operational 24×7 modeling suite in production on one of the systems. The backup system is used for transition-to-operations and other development work while not serving as the primary, and NOAA can switch the orientation of the primary and backup sites within a 15-minute period, which it does regularly, at least monthly.

The arrangement assures redundancy, as data is always mirrored to the backup system, offering advantages from a troubleshooting and maintenance perspective and providing an added layer of protection for the mission- and safety-critical work of weather prediction. “If we make a change to one, we know we can test it, and then we can apply the change to the back-up system as well,” said Michaud. “We know if one’s not behaving similar to the other, we can identify the differences and troubleshoot them. And then, the other thing that’s really important is for the type of work that we do, given storm systems and other weather systems can be massive in scale and encompass hundreds of miles, it’s really beneficial for us to have the separation of the sites, so that if we have any issues on one site, we can switch to the other site.”
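The arrangement reduces to a textbook active/standby pair. The toy sketch below is purely illustrative: the site names come from the article, but the class names and switching logic are our invention, not NOAA’s actual machinery.

```python
# Illustrative sketch of the primary/backup site arrangement described
# above. All names and logic here are hypothetical; the article does not
# describe NOAA's actual switchover implementation.

from dataclasses import dataclass

@dataclass
class Site:
    name: str
    role: str  # "primary" runs the 24x7 production suite; "backup"
               # hosts transition-to-operations and development work

class SitePair:
    def __init__(self, a: str, b: str):
        self.primary = Site(a, "primary")
        self.backup = Site(b, "backup")

    def mirror(self, dataset: str) -> None:
        # Production data is always mirrored to the backup site so a
        # switch can happen without data loss (per the article).
        print(f"mirroring {dataset}: {self.primary.name} -> {self.backup.name}")

    def switch(self) -> None:
        # The article says orientation can be flipped within ~15 minutes
        # and is exercised at least monthly.
        self.primary, self.backup = self.backup, self.primary
        self.primary.role, self.backup.role = "primary", "backup"
        print(f"primary is now {self.primary.name}")

pair = SitePair("Manassas", "Phoenix")
pair.mirror("model output")
pair.switch()  # e.g., a monthly exercise, or severe weather near one site
```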

The significant supercomputing upgrade targets three separate areas for model improvements: resolution, complexity and the size of ensembles. “We want to go to higher resolutions that would capture the finer-scale features in the phenomena we’re predicting,” Gross told us. “We want to create and implement more comprehensive models to include as much of the complexity that is going on in the atmosphere as we can in our models — can we improve our model physics, for example. And then the last piece is growing the size of ensembles that we use, which give us a lot of information in how certain we can be in any particular forecast; ensembles inform our level of confidence in the numerical guidance that we produce. All of these areas are going to be improved when we move on to the system.”
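To make the ensemble point concrete: an ensemble is many runs of the same model from slightly perturbed initial conditions, and the spread among members indicates how much confidence to place in the forecast. The sketch below invents a stand-in “model” purely for illustration; it is not NOAA’s code, and the 21-member size is merely GEFS-like.

```python
# Toy illustration of why ensemble size matters: the spread across
# perturbed runs quantifies forecast confidence. The "model" and all
# numbers here are invented for illustration only.

import random

def toy_forecast(initial_temp_c: float) -> float:
    # Stand-in for a full NWP model run: small chaotic drift.
    return initial_temp_c + random.gauss(0.0, 1.5)

random.seed(42)
n_members = 21    # illustrative, GEFS-like ensemble size
analysis = 20.0   # best-estimate initial temperature, deg C

# Perturb the initial condition slightly for each member, then "run" it.
members = [toy_forecast(analysis + random.gauss(0.0, 0.2))
           for _ in range(n_members)]

mean = sum(members) / n_members
spread = (sum((m - mean) ** 2 for m in members) / n_members) ** 0.5
print(f"ensemble mean: {mean:.1f} C, spread: {spread:.1f} C")
# Small spread -> members agree -> higher confidence in the forecast;
# large spread -> lower confidence. More members sample the uncertainty
# better, which is why NOAA wants to grow ensemble size.
```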

Last summer, NOAA upgraded its deterministic global forecast system, and its next big upgrade will be the ensemble system. Currently, NOAA is incorporating the new dynamical core that it put into the deterministic Global Forecast System (GFS) into the Global Ensemble Forecast System (GEFS). “We’re aligning the ensemble system now with the deterministic system — and part of that is complexity and ensemble size. We’re looking forward to increasing the ensemble size of the GEFS so that we can get better information on forecast services,” said Gross.

NOAA’s contract with GDIT has a total estimated value of $505.2 million, spanning a base period of eight years with a two-year optional renewal. GDIT provides its supercomputing resources as-a-service through NOAA’s Weather and Climate Operational Supercomputing System (WCOSS) contract. The value of the first task order, written under the larger contract, is $150 million for managed services over five years.
