“Imagine: you’re an alien and you know almost nothing about Earth, and you land on seven or eight points on Earth and drive a few hundred kilometers. Does that alien species know enough about Earth?” asked Masahiro Ono in an interview with the Texas Advanced Computing Center (TACC). As group lead of the NASA Jet Propulsion Laboratory’s Robotic Surface Mobility Group, Ono and his colleagues are preparing for a new, smarter generation of Mars rovers – and they’re using supercomputing to do it.
The farthest-traveled Mars rover, Opportunity, took 15 years to cover 28 miles. “If we want to represent the huge diversity of Mars,” Ono said, “we’ll need more measurements on the ground, and the key is substantially extended distance, hopefully covering thousands of miles.” There are a few problems, though: solar-powered rovers are on a strict energy budget, Mars’ rocky terrain is unforgiving to vehicles, and even the most recent rover – launched just a few weeks ago – uses radiation-hardened but computationally weak CPUs called RAD750s.
NASA copes with the limited on-board computing by doing most of the big thinking back on Earth. “All of the autonomy that you see on our latest Mars rover is largely human-in-the-loop,” said Chris Mattmann, deputy chief technology and innovation officer at the Jet Propulsion Laboratory. NASA is evolving this approach by using supercomputing resources on Earth to develop advanced deep learning models that are prepared to cope with any situation the rover might encounter when on Mars – an approach Ono calls “commanding by policy.”
“We use a supercomputer on the ground, where we have infinite computational resources like those at TACC, to develop a plan where a policy is: if X, then do this; if Y, then do that,” Ono said. “We’ll basically make a huge to-do list and send gigabytes of data to the rover, compressing it in huge tables. Then we’ll use the increased power of the rover to decompress the policy and execute it.”
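As a rough illustration, a policy of this kind can be thought of as a large state-to-action lookup table that is computed and compressed on the ground, then decompressed and consulted on board. The sketch below is a minimal toy version of that idea; the state variables, action names, and pickle/zlib encoding are illustrative assumptions, not JPL’s actual format.

```python
# Toy "commanding by policy": the ground system precomputes a
# state -> action table, compresses it for uplink, and the rover
# decompresses it and looks up actions as conditions arise.
import pickle
import zlib

# --- On the ground (supercomputer side) ---
def build_policy():
    """Enumerate anticipated situations and the planned response to each."""
    policy = {}
    for slope in range(0, 31, 5):            # terrain slope, degrees (assumed state)
        for battery in range(10, 101, 10):   # state of charge, percent (assumed state)
            if battery < 20:
                policy[(slope, battery)] = "stop_and_recharge"
            elif slope > 20:
                policy[(slope, battery)] = "reroute_around_obstacle"
            else:
                policy[(slope, battery)] = "continue_drive"
    return policy

def compress_policy(policy):
    return zlib.compress(pickle.dumps(policy))

# --- On the rover ---
def execute(compressed_blob, observed_state):
    policy = pickle.loads(zlib.decompress(compressed_blob))
    return policy.get(observed_state, "request_guidance_from_earth")

blob = compress_policy(build_policy())       # uplinked to the rover
print(execute(blob, (25, 80)))               # -> reroute_around_obstacle
print(execute(blob, (10, 15)))               # -> stop_and_recharge
```

A real policy would be vastly larger and cover continuous sensor states, which is why generating it is a supercomputing job even though executing it is cheap.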
This allows the rover to adapt, rather than simply responding to the commands of a human operator 50 million miles away. “The rover has the flexibility of changing the plan on board instead of just sticking to a sequence of pre-planned options,” Ono said. “This is important in case something bad happens or it finds something interesting.”
As part of the Machine Learning-Based Analytics for Autonomous Rover Systems (MAARS) program, NASA has also adapted a Google-developed neural image caption generator called “Show and Tell” for use in rover missions. The adapted algorithm can assess an image and caption it with descriptive information (e.g., the type and location of obstacles), allowing humans back on Earth to more easily prioritize targets or plan driving routes. The algorithm was trained on Maverick2, a TACC supercomputer whose Nvidia GPU-based nodes are specialized for machine learning workloads.
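Show and Tell is, at heart, an encoder-decoder network: a convolutional network summarizes the image into a feature vector, and a recurrent decoder turns that vector into a sentence, token by token. The toy PyTorch sketch below shows the shape of that architecture; the layer sizes, vocabulary, and data are placeholder assumptions, not the MAARS model itself.

```python
# Minimal encoder-decoder captioner in the spirit of "Show and Tell":
# a CNN encodes the image into a feature vector that seeds an LSTM,
# which then emits caption tokens (e.g. "large rock, left of path").
import torch
import torch.nn as nn

class TinyCaptioner(nn.Module):
    def __init__(self, vocab_size=32, embed_dim=64, hidden_dim=64):
        super().__init__()
        # Encoder: a small CNN stands in for the pretrained backbone.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, hidden_dim),
        )
        # Decoder: the image feature initializes the LSTM hidden state.
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, vocab_size)

    def forward(self, images, captions):
        feats = self.encoder(images)                      # (B, hidden)
        h0 = feats.unsqueeze(0)                           # (1, B, hidden)
        c0 = torch.zeros_like(h0)
        out, _ = self.lstm(self.embed(captions), (h0, c0))
        return self.head(out)                             # per-token logits

model = TinyCaptioner()
images = torch.randn(2, 3, 64, 64)        # stand-in rover images
captions = torch.randint(0, 32, (2, 7))   # stand-in token sequences
print(model(images, captions).shape)      # torch.Size([2, 7, 32])
```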
NASA is also turning its eyes toward on-board high-performance computing for rovers – which opens even more doors. The agency is designing new radiation-hardened, high-performance, multi-core processors through the High Performance Spaceflight Computing (HPSC) project, and the Qualcomm Snapdragon processor is also undergoing testing for use in space. “One of the core missions for these new chips is to do deep learning and machine learning, like we do terrestrially, on board,” Mattmann said. “What are the killer apps given that new computing environment?”
“We’d like future rovers to have a human-like ability to see and understand terrain,” Ono said. “For rovers, energy is very important. There’s no paved highway on Mars. The drivability varies substantially based on the terrain – for instance, beach versus bedrock. That is not currently considered. Coming up with a path with all of these constraints is complicated, but that’s the level of computation that we can handle with the HPSC or Snapdragon chips. But to do so we’re going to need to change the paradigm a little bit.”
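The planning problem Ono describes amounts to a shortest-path search in which “distance” is energy and the cost of each step depends on the terrain being crossed. The sketch below uses a plain Dijkstra search over a small terrain grid to make that concrete; the per-terrain energy costs are made-up assumptions, and the real on-board planner would be far more sophisticated.

```python
# Toy energy-aware path planning: find the lowest-energy route over a
# grid where each cell's terrain type changes the cost of crossing it.
import heapq

ENERGY = {"bedrock": 1.0, "soil": 2.0, "sand": 5.0}  # cost per cell (assumed)

def lowest_energy_path(grid, start, goal):
    """Dijkstra over a 2D grid of terrain labels; returns total energy."""
    rows, cols = len(grid), len(grid[0])
    best = {start: 0.0}
    frontier = [(0.0, start)]
    while frontier:
        cost, (r, c) = heapq.heappop(frontier)
        if (r, c) == goal:
            return cost
        if cost > best[(r, c)]:
            continue  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                ncost = cost + ENERGY[grid[nr][nc]]
                if ncost < best.get((nr, nc), float("inf")):
                    best[(nr, nc)] = ncost
                    heapq.heappush(frontier, (ncost, (nr, nc)))
    return float("inf")

grid = [["bedrock", "sand",    "sand"],
        ["bedrock", "bedrock", "soil"],
        ["sand",    "bedrock", "bedrock"]]
print(lowest_energy_path(grid, (0, 0), (2, 2)))  # hugs the bedrock: 4.0
```

Even this toy planner prefers a longer route over hard rock to a shorter one through sand, which is exactly the kind of trade-off a terrain-aware rover would need the compute to make on board.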
This article is based on reporting on the research by TACC’s Aaron Dubrow.