At Long Last, Supercomputing Helps to Map the Poles

By Oliver Peckham

August 22, 2019

“For years,” Paul Morin wrote[*], “those of us that made maps of the Poles apologized. We apologized for the blank spaces on maps, we apologized for mountains being in the wrong place and out-of-date information.” Now, after a decade of painstaking work, the time for apologies is over. A major collaboration between universities, the U.S. government and a software company has produced an unprecedentedly accurate map of the poles – and it was made possible by supercomputing.

Paul Morin. Image courtesy of the University of Minnesota.

Morin is the founder and director of the Polar Geospatial Center at the University of Minnesota, where he and dozens of other researchers help the National Science Foundation (NSF) map the Earth’s poles. Morin also liaises between the NSF and the National Geospatial-Intelligence Agency (NGA) and serves on the National Academy of Sciences’ Standing Committee on Antarctic Geographic Information. 

In short: if you’re interested in polar mapping, he’s your guy.

“It’s to serve places like this,” Morin said in a recent NSF-hosted webinar, pointing out a field camp in the dry valleys of Antarctica. “When we’re out there working, we’re sleeping in tents. […] As we were working, we didn’t have access to the kind of resources we have now. And so […] we flew around in helicopters, we had differential GPS, and we were geo-referencing air photography that was collected often in the 80’s, 90’s or the 00’s.”

Morin’s point is well-taken: for those working on or over the poles – not just researchers, but National Guard and Air Force servicemen as well – the accuracy of polar maps is a day-to-day, functional concern. (“I mean, this is the way that we get to work in the morning,” Morin said.)

The scope of the project was staggering. Antarctica is 5.4 million square miles – roughly 75 percent larger than the contiguous U.S. “We can use all the standard superlatives – the highest, the driest, the coldest – but from my standpoint,” he said, “it’s just big.” But Antarctica, of course, is only one part of the equation. On the other end (quite literally): the Arctic, which is twice the size of the contiguous U.S.

Luckily, Earth-observing satellites tend to be in a polar orbit, constantly taking images of the poles. The problem, then, became wrangling what Morin calls an “incredible fire hose of imagery” from NASA, the European Space Agency and commercial satellite operators. The imagery that the researchers were able to request allowed for pinpoint accuracy. “If you were to look at the ground in the valleys,” Morin said, “and if you were to put a single oak leaf in a specific location, you could detect the chlorophyll in that oak leaf in a 1.8 meter square pixel.”

But a single, detailed map wasn’t enough.

“You […] just don’t get the repeat that science would need, because the Earth’s surface is always changing,” Morin said of older surveying methods. “All these things – we want to be able to measure and see what the difference is.”

Then, five years ago, the U.S. gained the chairmanship of the Arctic Council and announced plans to create a robust elevation map of the Arctic. Morin and his colleagues realized that this was their opportunity to create an evolving topographic dataset for polar regions. The following year, President Obama announced a project with the NSF and the NGA to create that dataset for Alaska within one year and the Arctic within two.

With NGA’s satellite imagery contracts now at their fingertips, the newly formed team needed tools to process that massive amount of data. They turned to Ohio State University (OSU) and the National Center for Supercomputing Applications (NCSA) at the University of Illinois. OSU provided software that allowed the team to feed stereo imagery into an HPC system and receive a digital elevation model (DEM) with very little human intervention. The NCSA, of course, provided the firepower: Blue Waters, a hybrid Cray supercomputer that delivers roughly 13 petaflops, over 1.5 petabytes of memory and about 26 petabytes of storage. Over time, the team received allocations on Stampede2 and Frontera as well.
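OSU's production software is far more sophisticated (it handles real satellite sensor geometry), but the principle underneath stereo DEM extraction, recovering elevation from the parallax between two views of the same spot, can be sketched with the textbook pinhole-camera model. The function and all numbers below are toy illustrations, not the project's actual pipeline:

```python
def height_from_disparity(disparity_px, baseline_m, focal_px, flying_height_m):
    """Textbook pinhole stereo: depth from the camera pair is f * B / d.

    A feature that shifts more between the two images (larger disparity)
    is closer to the sensors; surface elevation is the flying height
    minus that recovered depth.
    """
    depth_m = focal_px * baseline_m / disparity_px
    return flying_height_m - depth_m

# Toy values: doubling the disparity halves the recovered depth,
# so the point resolves as sitting higher above the datum.
print(height_from_disparity(disparity_px=35.0, baseline_m=0.5,
                            focal_px=700.0, flying_height_m=12.0))  # 2.0
print(height_from_disparity(disparity_px=70.0, baseline_m=0.5,
                            focal_px=700.0, flying_height_m=12.0))  # 7.0
```

Repeating this matching for every pixel of an overlapping image pair, at continental scale, is what turns the problem into a supercomputing job.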

REMA’s coverage area. Image courtesy of the University of Minnesota.

They got to work. The team produced a five-meter resolution elevation model of Alaska, then refined it to two meters. Then the Arctic: 12 percent of the Earth mapped at two-meter resolution. Then Antarctica – another 8 percent. The results became ArcticDEM, the two-meter Arctic DEMs extracted on Blue Waters, and REMA, the Reference Elevation Model of Antarctica.
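The article's own figures (12 percent plus 8 percent of the Earth's surface at a two-meter posting) make the data volume easy to estimate. The bytes-per-pixel value below is an assumption (one 32-bit float elevation per posting, single coverage); the real archives hold many repeat DEM strips per location, so actual holdings run well beyond this single-coverage floor:

```python
# Back-of-envelope volume for one complete coverage of the mapped area.
EARTH_SURFACE_M2 = 510e6 * 1e6     # Earth's surface, ~510 million km^2, in m^2
mapped_fraction = 0.12 + 0.08      # Arctic (12%) plus Antarctica (8%)
posting_m = 2.0                    # two-meter grid spacing

pixels = mapped_fraction * EARTH_SURFACE_M2 / posting_m**2
bytes_single_coverage = pixels * 4  # assumed float32 elevation per pixel

print(f"{bytes_single_coverage / 1e15:.2f} PB")  # 0.10 PB
```

A tenth of a petabyte for a single pass; with dozens of repeat observations per location, the multi-petabyte scale Morin describes below follows directly.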

Morin walked through the particulars of how granular these maps could be – individual trees being logged, ice melt, lava flows. “We now have better topography for the ice on Earth than we do for the land on Earth,” Morin said. “There really isn’t anywhere else on the planet that we just have this much repeat.”

The project was a success, and NGA and NSF have extended their collaboration – and their time on Blue Waters – this time aiming to expand the mapping project to the entire surface of the Earth.

“When we began this, we just didn’t have HPC experience,” said Morin. “Last time I touched HPC before this project, the computer was a Cray-2. We needed software like Swift and Parsl for sub-scheduling – we’re doing hundreds of thousands of jobs, huge networking and automation. The community just isn’t used to this – you know, the next version of the poles is probably two petabytes! […] These projects are too big for any one agency – we’re talking public, private, multiple agencies, civilian, defense… we have to bring everybody to bear on projects this large.”
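Swift and Parsl exist to coordinate exactly this: enormous numbers of independent jobs fanned out across an HPC system. As a minimal sketch of that many-task pattern using only Python's standard library (the tile IDs and the build function are hypothetical placeholders, not the project's actual jobs):

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def build_dem_tile(tile_id):
    # Placeholder for one stereo-pair-to-DEM job; in production each of
    # these would be a substantial batch task on the supercomputer.
    return f"tile_{tile_id:04d}.tif"

# Fan out many independent tasks, then gather results as they finish.
with ThreadPoolExecutor(max_workers=4) as pool:
    futures = [pool.submit(build_dem_tile, t) for t in range(100)]
    finished = [f.result() for f in as_completed(futures)]

print(len(finished))  # 100
```

Parsl expresses the same idea with decorated "apps" whose futures it schedules onto cluster resources, which is what lets a workflow like this scale from a hundred tasks to the hundreds of thousands Morin describes.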

To Morin, though, this is clearly still just the beginning. Morin cites a project (“Planet”) that is launching hundreds of shoebox-sized satellites for geospatial mapping. “There’s so much data coming through there that we just can’t think of how we’re going to process it even now,” he said. Of course, he does have some ideas: he recalls another project (“Iceberg”) using machine learning algorithms to detect permafrost in the Arctic.

“So,” he says excitedly, “if we can keep throwing imagery at this…”

[*] Paul Morin’s talk, “The use of NSF HPC for the Production of the Earth’s Topography,” was held last week as part of the NSF Office of Advanced Cyberinfrastructure’s Cyberinfrastructure Webinar Series.
