Spaceborne Computer-2 Makes HPE’s Case for Edge Processing

By Oliver Peckham

September 2, 2021

Following a February launch, HPE’s second Spaceborne Computer (SBC-2) has been circling Earth on the International Space Station for some six months. The first Spaceborne Computer had returned to Earth around 20 months prior – and even before then, HPE had been hard at work distinguishing the sequel from its predecessor.

HPE’s Mark Fernandez in a mock-up of the International Space Station at the HPE Discover conference.

Previously…

“In retrospect, Spaceborne-1 is a proof of concept,” said Mark Fernandez, HPE’s principal investigator for SBC-2, in an interview with HPCwire. NASA had, he explained, initially tasked HPE with three major questions for spaceborne computing: first, can you take off-the-shelf components and put them into space? Second, can they survive the launch and be installed by non-IT experts? Third, can they actually function in space – and if so, for how long?

That mission – optimistically planned for a year – instead lasted 1.8 years thanks to smart planning and software-driven “hardening” that slowed the system’s operations when parameters indicated that dangerous space weather might impact the hardware. In that time, Fernandez said, the system ran over 50,000 software experiments, with every last one returning the correct result.
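Fernandez has described this software hardening only in broad strokes. A minimal sketch of the idea, with entirely hypothetical parameter names and thresholds (HPE has not published its actual logic), might look like:

```python
# Hypothetical sketch of software-based "hardening": slow or pause the
# system when monitored parameters suggest dangerous space weather.
# Parameter names and thresholds here are illustrative, not HPE's values.

def choose_power_state(telemetry, radiation_limit=100.0, temp_limit=80.0):
    """Return a power state based on environmental telemetry.

    telemetry: dict with 'radiation' (arbitrary units) and 'temp_c' keys.
    """
    if telemetry["radiation"] > radiation_limit or telemetry["temp_c"] > temp_limit:
        return "idle"        # halt experiments until conditions improve
    if telemetry["radiation"] > 0.8 * radiation_limit:
        return "throttled"   # slow operations to reduce the risk of errors
    return "full"            # normal operation

print(choose_power_state({"radiation": 20.0, "temp_c": 45.0}))   # full
print(choose_power_state({"radiation": 90.0, "temp_c": 45.0}))   # throttled
print(choose_power_state({"radiation": 120.0, "temp_c": 45.0}))  # idle
```

The appeal of doing this in software rather than with radiation-hardened hardware is precisely what allowed HPE to fly off-the-shelf components.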

Understandably, this had NASA excited about a successor. “Before we returned to Earth, NASA asked us if we could do it again,” Fernandez said. “But they had some major changes.”

Planning a second sojourn

Three big bullet points distinguished the SBC-2 mission from the SBC-1 mission. This time, HPE faced no set requirements for the system itself, instead devoting its workloads solely to serving the space and Earth observation communities. NASA, for its part, asked that HPE send up twice the hardware for redundancy – and, ambitiously, that the mission last two to three years: the estimated length of the first missions to and from Mars.


SBC-2. Image courtesy of HPE.

In the end, they sent up a matching set of HPE Edgeline EL4000 Converged Edge and HPE ProLiant systems, each including an AI-focused node with an Nvidia T4 GPU. “We’re now sending up more than twice the cores, and they’re faster,” Fernandez said, explaining that the new hardware delivered more than two Linpack teraflops – over twice that of its predecessor. “All the hardware is stock,” he added. “All the software is Red Hat 7.8, unmodified.”

Emboldened by the success of SBC-1, HPE was also more comfortable changing up the software on the systems. “This is unlike previous space missions where things are locked and loaded prior to launch and ‘thou shalt not change it,’” Fernandez said. And keeping the software stock, he said, meant that developers on the ground could easily workshop software for use on SBC-2.

On the edge of space

All of this software flexibility opened up new possibilities for how the system could handle computing at one of humanity’s most extreme edges. “During Spaceborne-1, the most common request was, ‘if you could just gzip up my data, that would really, really help!’” Fernandez said. During SBC-1’s tenure, they would do just that – but when the same requests started popping up for SBC-2, the situation was different.

By way of example, Fernandez mentioned a partner (“who shall remain nameless”) that wanted to work with a 180GB dataset generated on the space station. Gzip got the size down to around 18GB – a respectable 90 percent compression. “They were super, super excited. Instead of taking 12 hours to download, they said, ‘I might be able to get this in an hour or so!’” Fernandez recounted.

But then, he asked what they planned on doing with the dataset. Upon learning that they just wanted to run an industry-standard software package on it, he asked the partner: “Well, do you want me to run that for you?”

“And that’s when the lightbulb went off,” he said. After just a few minutes of CPU/GPU processing, he explained, the work was done. “Without touching their code, I ran it, and we ended up with 20,000x reduction … and I’m able to download it now in two seconds.”
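The quoted figures are mutually consistent: downloading 180 GB in 12 hours implies a link rate of roughly 4 MB/s, at which the gzipped 18 GB would take a bit over an hour and the 20,000-fold-reduced result about two seconds. A quick back-of-the-envelope check (treating GB as 10^9 bytes and assuming a constant downlink rate, both simplifications):

```python
# Back-of-the-envelope check of the download times quoted above.
# Assumes GB = 1e9 bytes and a constant downlink rate.

raw_bytes = 180e9                      # original dataset
link_rate = raw_bytes / (12 * 3600)    # bytes/s implied by "12 hours"

gzip_hours = (raw_bytes / 10) / link_rate / 3600   # 90 percent compression
result_secs = (raw_bytes / 20_000) / link_rate     # after edge processing

print(f"{link_rate / 1e6:.1f} MB/s")   # ~4.2 MB/s
print(f"{gzip_hours:.1f} h")           # ~1.2 h, "an hour or so"
print(f"{result_secs:.1f} s")          # ~2.2 s, "two seconds"
```

In other words, the quotes hold together arithmetically: processing at the edge beat even a 90 percent compression ratio by more than three orders of magnitude.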

That sort of experience hasn’t been an isolated incident with the advent of SBC-2. Fernandez said that Microsoft, as well, had partnered with HPE to process human genome data at the edge, resulting in a 10,000-fold reduction in the size of the download.

Enabling space-based research

SBC-2 is hosting dozens of experiments, with some already completed. Many of these, Fernandez explained, relate to operations within the ISS, such as health-related studies of the astronauts’ vitals and genomes, life sciences work on the plants aboard the station and preliminary work on autonomous mission operations. (Fernandez said that this latter area of research is targeted at longer missions, with NASA aiming to ameliorate the long-term psychological impacts of rote tasks on spacecraft.)

Outside of the walls of the ISS, SBC-2 experiments are homing in on a range of feature extraction tasks. Researchers, Fernandez said, are aiming to avoid streaming data-intensive ultra-HD video in its entirety by processing the video on SBC-2 and identifying events like wildfires, lightning strikes and illegal fishing vessels before transferring the results to Earth.
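The pattern Fernandez describes is the classic edge-inference loop: classify each frame locally and downlink only the detections. A toy sketch of that loop (the classifier, labels and data here are stand-ins, not HPE's or NASA's actual pipeline):

```python
# Illustrative sketch, not HPE's actual pipeline: run detection at the
# edge and downlink only the hits, rather than streaming raw video.

def detect_events(frames, classify):
    """Yield (frame_index, label) only for frames that contain an event.

    `classify` stands in for a real model; it returns a label such as
    "wildfire" for a positive frame, or None otherwise.
    """
    for i, frame in enumerate(frames):
        label = classify(frame)
        if label is not None:
            yield (i, label)   # tiny result record, cheap to downlink

# Toy run: pretend frame 2 of the video shows a wildfire.
frames = ["ocean", "ocean", "smoke plume", "ocean"]
hits = list(detect_events(frames, lambda f: "wildfire" if f == "smoke plume" else None))
print(hits)  # [(2, 'wildfire')]
```

The bandwidth win comes from the asymmetry: a detection record is a few bytes, while a single ultra-HD frame is megabytes.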

Other use cases include work on satellite algorithms and encryption, as well as an educational outreach program that allows students to compete to have their code run on SBC-2. Fernandez also mentioned the possibility of using SBC-2 to test the fundamentals of swarm-based satellite processing, which would allow the use of lighter-weight, lower-power satellites that would all communicate with a central spaceborne computer.

Looking to the stars

“I was briefing some people this morning about Spaceborne-3,” Fernandez said. “And it may be 3A, 3B, 3C, because there [are] multiple desires for the functionality for Spaceborne-3.” These desires, he said, included the use of spaceborne computers to help upgrade the station’s antiquated IT infrastructure and working to expand the systems’ storage capacity and capabilities.

Furthermore, work remains to be done in the automation and fine-tuning of the spaceborne computers’ software-based hardening against the ravages of space. HPE’s eventual aim remains to apply machine learning to this task, but, Fernandez said, data collection is still ongoing. Whether that functionality falls within the tenure of SBC-2 or SBC-3, he said, depends entirely on how many anomalies SBC-2 encounters. “We’ve had a few,” he said, “but not enough yet to have a confident, well-trained model.”

Broadly, HPE is continuing to pitch the spaceborne computers as an extreme case study of a basic proposition. “If you can do it in space, you can do it anywhere,” Fernandez quipped. “We can compute faster than we can download. This applies at the edge wherever you are – whether you’re on an oil rig, or you’re on an aircraft, or you’re in a cell phone tower.”

