One Small Step Toward Mars: One Giant Leap for Supercomputing

By Staff Writer

October 10, 2018

Since the days of the Space Race between the U.S. and the former Soviet Union, we have continually sought ways to perform experiments in space. What we’ve lacked is a way to run compute-intensive experiments in orbit without aid from the ground below – a capability that is necessary to advance space exploration.

All that will change with the Spaceborne Computer experiment, a self-contained HPE supercomputer housed in a locker-like casing to be installed on the International Space Station (ISS) for a full year – roughly the same amount of time it will take humans to voyage to Mars.

We recently chatted with Dr. Eng Lim Goh, vice president and chief technology officer of SGI at Hewlett Packard Enterprise and principal investigator of this experiment, about the Spaceborne Computer, the challenges of supercomputing in space, and why this project represents an important step forward in the quest to set foot on Mars.

Tell me about the Spaceborne Computer. How did it get a ride on a SpaceX rocket to the International Space Station?

Dr. Goh: Customizing computers for space is a long process – these computers are multiple generations old by launch time, let alone after subsequent years of operational use. Meanwhile, I had been working on the concept of giving Earth-based computers self-care intelligence, and the company had started the patenting process for it. I then submitted a proposal to NASA to conduct an orbital experiment of this concept. Through the dedication of our teams led by Dave Petersen and Dr. Mark Fernandez, we have the Spaceborne Computer today. Our goal is to achieve a functional supercomputer for spaceflight without spending years hardening systems – by using off-the-shelf servers and custom-built software instead. We’ve had the honor of working closely with NASA for three decades, and it’s an added honor for us to partner on a project of this magnitude.

Why do we need a supercomputer in space?

Dr. Goh: We have always approached space exploration in steps. This launch represents the first step into the next frontier of space exploration – a mission to Mars.

Mars astronauts won’t have the near-instant access to high performance computing that astronauts in low-Earth orbit enjoy — on average, the red planet is about 26 light-minutes away round trip. Imagine waiting that long for critical answers during a system failure; that simply isn’t an option. Having a supercomputer on board the spacecraft will allow our interplanetary explorers to meet some of these challenges in real time — whether that means on-the-spot processing power for scalable simulation, analytics or artificial intelligence. But first we need to figure out how to make an off-the-shelf supercomputer function correctly in orbit. That’s what we aim to research through this year-long experiment.
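For a sense of scale, here is a minimal back-of-the-envelope check of that figure, in Python. The distances used are rough, commonly cited values (about 55 million km at closest approach, roughly 225 million km on average, up to around 400 million km at the widest separation), not mission data:

```python
# Round-trip light delay between Earth and Mars at a few representative
# distances. Distances are rough, commonly cited figures, not mission data.
SPEED_OF_LIGHT_KM_S = 299_792.458

def round_trip_delay_minutes(distance_km: float) -> float:
    """Light time out plus light time back, in minutes."""
    return 2 * distance_km / SPEED_OF_LIGHT_KM_S / 60

for label, distance_km in [("closest approach", 55e6),
                           ("average distance", 225e6),
                           ("widest separation", 400e6)]:
    print(f"{label:>17}: {round_trip_delay_minutes(distance_km):5.1f} minutes")
```

At the average distance this works out to roughly 25 minutes round trip, in line with the ~26-minute figure above, and at the widest separation it stretches past 40 minutes.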

Think about how we got to the moon. First there were the Mercury missions. Then Gemini. Then Apollo, and even at that stage it took until Apollo 11 before Neil Armstrong and Buzz Aldrin set foot on the lunar surface.

I like to think of the Spaceborne Computer project as the Mercury stage of the computer science research that will drive the mission to Mars. Earlier this year, my colleague Kirk Bresniker discussed the computational challenges of the Mission to Mars and the need for a major architectural upgrade before we can realistically complete the journey – and Hewlett Packard Enterprise has the answer in Memory-Driven Computing.

Memory-Driven Computing will help us efficiently and effectively tackle the big data challenges of our day, and make it possible for us, one day, to send humans to Mars. But even if we expect Memory-Driven Computing to become the standard for supercomputing in space, we need to start somewhere.

Even the world’s fastest supercomputers are due for upgrades from time to time. How will you upgrade the Spaceborne Computer if it’s to be permanently based on the ISS?

Dr. Goh: We specifically decided not to turn the astronauts into systems engineers. The Spaceborne Computer is housed in an HPE-designed, NASA-approved locker that is entirely self-contained and attached with NASA-approved bolts. Beyond that, the only connections are standard Ethernet cables, standard 110-volt AC power connectors and NASA-approved water cooling technology that keeps the system from overheating. We literally use the chill of space to pull heat out of the Spaceborne Computer! Cool, right? Just as cool is the fact that our systems are entirely powered by solar cells.

And because the package is self-contained and a single part number, we can put into it whatever types of compute we, the astronauts or the scientists need. We can send a new locker up on a future mission and take the old one back. Nowhere in that process do we ask the astronauts to adjust or tune servers, or otherwise become familiar with computer science.

What are the benefits to HPE and its customers?

Dr. Goh: The value of this project is two-fold. The first is for our Earth-based customers. From NASA’s certification process we learned that HPE computers are already robust and reliable, and what we learn from this year-long experiment may be applied across our product lines to benefit our customers.

The second is for our space-bound customers. If this or a subsequent experiment produces successful results, they will be able to carry with them and use the latest, most powerful off-the-shelf computers. And the market may not be that small if commercial space travel grows the way air travel did.

What’s your dream for supercomputing in space? What’s the next step?

Dr. Goh: I have a dramatized vision of how we can increase our ability to conduct relevant, real-time experiments by orders of magnitude – by arming astronauts with portable, functional data centers on their missions. Imagine this: before her interplanetary mission, an astronaut calls up HPE and orders the latest high performance computer. It arrives pre-loaded with our custom software. She then loads her mission software, brings it aboard and launches with the highest-performing computer system available. That is, instead of spending time customizing hardware, we simply load customized software onto off-the-shelf hardware. Imagine being able to harden a computer with software – it has a somewhat poetic ring to it, too.
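The interview does not describe HPE’s actual hardening software, but the general "self-care" idea – a node that watches its own health and protects itself when conditions degrade, rather than relying on radiation-hardened hardware – can be sketched roughly. Everything below (the thresholds, metric names and actions) is an illustrative assumption, not HPE’s implementation:

```python
import time

# Illustrative thresholds only -- real limits would come from mission engineering.
MAX_CORRECTABLE_ECC_PER_MIN = 50   # hypothetical memory-error budget
MAX_CORE_TEMP_C = 85               # hypothetical thermal limit

def read_health_metrics() -> dict:
    """Placeholder for real telemetry (ECC counters, temperatures, power)."""
    return {"ecc_errors_per_min": 3, "core_temp_c": 62}

def choose_safeguard(metrics: dict) -> str:
    """Pick a protective action from node health -- the 'harden with software' idea."""
    if metrics["ecc_errors_per_min"] > MAX_CORRECTABLE_ECC_PER_MIN:
        return "pause workloads and scrub memory"
    if metrics["core_temp_c"] > MAX_CORE_TEMP_C:
        return "throttle CPU frequency"
    return "run at full speed"

if __name__ == "__main__":
    for _ in range(3):                       # a few iterations for demonstration
        action = choose_safeguard(read_health_metrics())
        print(f"self-care decision: {action}")
        time.sleep(1)
```

In this sort of scheme the system trades peak performance for survivability on its own, without asking the crew to become systems engineers – consistent with the design goal described above.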
