Behind the Met Office’s Procurement of a Billion-Dollar Microsoft System

By Oliver Peckham

May 13, 2021

The UK’s national weather service, the Met Office, caused shockwaves of curiosity a few weeks ago when it formally announced that its forthcoming billion-dollar supercomputer – expected to be the most powerful weather and climate-focused supercomputer in the world when it launches in 2022 – would come from an unlikely source: Microsoft. At the HPC User Forum yesterday, Richard Lawrence, an IT fellow for supercomputing at the Met Office, detailed the service’s hunt for its next generation of supercomputing.

Out with the old, in with the new

The Met Office’s current XC40 systems. Image courtesy of Microsoft/Met Office.

Currently, the Met Office runs three Cray XC40 systems – which deliver roughly three to seven Linpack petaflops each – at 80 to 90 percent utilization or higher, thanks to a medley of operational weather forecasting and weather and climate research. Two of the systems are dedicated primarily to forecasting – one able to take over if the other fails – and the third is a research system.

The new systems look much different. The Met Office will initially receive four Microsoft Azure-integrated HPE Cray EX supercomputers with AMD Epyc Milan CPUs, coupled with an active data archive system capable of supporting nearly four exabytes of data. The Met Office anticipates that these systems will deliver over 60 peak petaflops across the four quadrants – a sixfold increase in the service’s computing power.

Then, sometime between 2027 and 2030, the same procurement deal will deliver a major infrastructure upgrade, tripling that computing firepower again for a total 18-times improvement over the current trio of Crays.
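
For a rough sense of how those multipliers stack up, here is a minimal back-of-the-envelope sketch in Python. Only the 60-petaflops total, the sixfold figure and the mid-life tripling come from the article; the per-system split of the XC40s is an assumption for illustration.

```python
# Back-of-the-envelope look at the quoted capacity multipliers.
# HYPOTHETICAL: the split of Linpack flops across the three XC40s.
current_linpack_pf = [7, 7, 3]   # assumed per-system petaflops
gen1_peak_pf = 60                # quoted peak across the four quadrants

# Raw flops ratio -- note peak vs. Linpack are not directly comparable,
# which is one reason the procurement measures workloads instead.
print(f"Gen 1 vs. today (flops): ~{gen1_peak_pf / sum(current_linpack_pf):.1f}x")

# The procurement benchmarks forecast-workload volume:
gen1_workload_multiple = 6                            # quoted sixfold increase
gen2_workload_multiple = gen1_workload_multiple * 3   # mid-life refresh triples it
print(f"Gen 2 vs. today (workload): {gen2_workload_multiple}x")   # 18x
```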

The Met Office wishlist

“We realized we had to do something slightly different with our next procurement,” Lawrence explained. “It takes us on average about two years to procure any new supercomputer and then another year to bring [it] into operation. So that’s a lot of time for a lot of people with each procurement, and that’s really expensive and we don’t see particularly good value for us. So we wanted to see if we could change our approach to allow us to spend less time buying supercomputers and more time utilizing them.”

So the Met Office set its sights high: it wanted a powerful system, and it wanted it for a long time. It went to market for a ten-year supercomputing deal – an eon in the fast-paced world of high-performance computing, especially for such a high-profile client – and aimed not just for more flops, but for an observable step change in its real-world workloads. “We measure [supercomputer capacity] not through petaflops, but through our weather forecast workloads; we set those as benchmarks in the procurement, that we expect people to deliver us [a] six-times volume increase in what we’re going to be able to produce,” Lawrence said.

The office also set its sights differently. “We’ve always had our supercomputer hosted in datacenters in Exeter at the Met Office, and we’ve always managed the integration with all the other parts that they need to talk to,” Lawrence said. But the times were changing, and the Met Office wanted more supercomputing capacity, uptime and resilience without turning itself into more of a supercomputing center. “Supercomputers are at the core of our business, but we don’t consider ourselves an HPC center,” Lawrence said. “We’re a weather forecaster, but we don’t do research into supercomputers just for supercomputing’s sake.”

Ushering in a new model of weather supercomputing

This next generation, as a result, will consist of completely managed HPC-as-a-service installations. “[Microsoft will] be providing us a supercomputer, all of the power for the supercomputer, the hosting for the supercomputer, and everything that’s supporting us in making use of that supercomputer as well,” Lawrence said. For the second generation, he elaborated, “we’ve built into the procurement a mechanism to allow us to analyze what’s available within the market and make sure that the refresh we get halfway through allows us to meet our performance goals and is proving to be good value for the money.”

The Met Office targeted a four-system setup this time to give itself some wiggle room in its operations. “The reason why we’re [splitting] into four is to give us a bit more flexibility when we are wanting to patch supercomputers and have more flexibility when one of them develops a fault and we need to switch operations to run in a different … supercomputer,” Lawrence explained.
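
As an illustration of why four quadrants beat today’s operational pair, here is a minimal availability sketch. It assumes independent failures and an illustrative 99 percent per-system uptime; both figures are assumptions for the sake of the example, not Met Office numbers.

```python
from math import comb

def p_at_least(k: int, n: int, p_up: float) -> float:
    """Probability that at least k of n independent systems are up."""
    return sum(comb(n, m) * p_up**m * (1 - p_up)**(n - m)
               for m in range(k, n + 1))

P_UP = 0.99  # assumed per-system availability -- illustrative only

# With only two systems, keeping an operational machine plus a hot spare
# means both must stay healthy; with four, one can be down for patching
# or a fault while two others still cover operations.
print(f"2 of 2 up: {p_at_least(2, 2, P_UP):.4f}")   # 0.9801
print(f"2 of 4 up: {p_at_least(2, 4, P_UP):.6f}")   # 0.999996
```

The gap widens further once planned maintenance windows are counted – which is precisely the patching flexibility Lawrence describes.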

For the first time, the four systems will also be hosted offsite – two each at two separate datacenters in the southern UK, adding further resilience. (While the exact sites have not been detailed, Microsoft currently operates the “UK South” Azure region from a site south of London.) Operating offsite means that the service will be able to comfortably run its systems 24/7 and that the datacenters can be powered with renewable energy – a priority for the climate-oriented office.

The active data archive, meanwhile – shared among the four systems – will ease the burden posed by the Met Office’s data production: a 200 TB/day load that the service anticipates will increase fivefold with the arrival of the first-generation systems and by a further two and a half times with the second generation.
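
The scale of that archive becomes clearer with a little arithmetic. The sketch below uses the daily rates and multipliers quoted above; the phase durations (six years on the first generation, four on the second) are assumptions chosen to span the ten-year deal.

```python
# Cumulative data output over the ten-year deal, from the quoted rates.
TB_PER_EB = 1_000_000

gen1_tb_per_day = 200 * 5                  # 1,000 TB/day after gen 1 arrives
gen2_tb_per_day = gen1_tb_per_day * 2.5    # 2,500 TB/day with gen 2

total_tb = gen1_tb_per_day * 6 * 365 + gen2_tb_per_day * 4 * 365
print(f"~{total_tb / TB_PER_EB:.1f} EB produced over ten years")   # ~5.8 EB
# That is an upper bound if every byte were kept; it exceeds the archive's
# ~4 EB capacity, suggesting not all output would be retained indefinitely.
```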

A diagram of the planned new systems showing high-memory and “enhanced” nodes, along with various storage infrastructure. Image courtesy of Richard Lawrence.

An unprecedented investment

The Met Office presented its business case for the ten-year supercomputing plan in 2019, and in 2020 the office was approved for £1.2 billion in funding to enact the plan. “That’s a large investment – certainly the largest the Met Office has ever dealt with,” Lawrence said. “And the reason why we were successful in going out to the government and getting them to invest this amount is because we spent a large amount of time articulating the benefits, not just to the Met Office but to the wider UK.” 

The UK, Lawrence explained, requires that major procurements generate a certain amount of social value. The Met Office plans to provide this value not only through improved weather forecasting and climate modeling, but also through the provision of skills and training in the areas of the UK that are hosting the supercomputers. In total, the Met Office is targeting £13.7 billion in socioeconomic benefits from the £1.2 billion investment. The announcement of the system also coincides nicely with COP26, this year’s annual climate conference, which will be hosted by the UK.

