How NASA Is Meeting the Big Data Challenge

By Tiffany Trader

April 7, 2014

As the scientific community pushes past petaflop into exascale territory, it is imperative that the tools to support ever-more data-intensive workloads keep pace. Nowhere is this more true than at the storied NASA research complex. With 100 active missions supporting cutting-edge science, NASA knows more than most about compute- and data-driven challenges.

A recent paper from Piyush Mehrotra and L. Harper Pryor of NASA’s Advanced Supercomputing (NAS) Division sheds light on how NAS has supported the diverse workflows of its users, including discovery, access, transportation, management, and dissemination of big data, as well as providing the tools to transform data into insight and knowledge.

“As NASA’s flagship site for computational science and engineering at scale, NAS supports a user base that is at the forefront of data intensive and data driven science,” write Mehrotra and Pryor. “Our users’ codes use and generate very large datasets and analyzing these datasets to extract knowledge is a fundamental part of their workflows.”

To get a better understanding of the kinds of challenges faced by their user population, NAS officials went directly to their user base. They then grouped the challenges by the main elements of the workflows, i.e., “discovery of data and tools, access to and movement of data, storage and management of data, algorithms/tools for performing the analysis/analytics and finally dissemination of the results.”

Data discovery is challenging for NASA because of the sheer volume of data and the distributed nature of its storage archives. Users require tools that support large-scale data movement. There is also the looming need to develop platforms that meet the computational and analytic requirements of the coming exascale era.

With user interviews and several studies to guide them, NAS officials added several initiatives to their architecture roadmap. The paper’s authors describe two of these that address user needs:

1) higher level support for scientific workflows to make the challenges of working with big data and big compute more transparent to the user, and

2) tighter integration of compute engines with analytic engines.

The first of these directly relates to the implementation of the NASA Earth Exchange last year. The NASA Earth Exchange (NEX) is a collaborative research platform that brings together advanced supercomputing, earth system modeling, workflow management, and NASA remote-sensing data. It enables users to explore and analyze large earth science data sets, run and share modeling algorithms, collaborate on new or existing projects, and share results. To support data-driven workflows, NEX uses VisTrails on Pleiades, NASA’s flagship supercomputer. ParaView is also available as a companion tool to VisTrails. The system will support wide-area workflows encompassing NASA and other agencies, including USGS, NOAA and DOE.
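The workflow-capture idea can be illustrated with a minimal sketch, which is hypothetical and does not use the actual NEX or VisTrails API: each step records its inputs, parameters, and outputs so that a colleague can re-run the pipeline or swap in new data and algorithms.

# Minimal sketch of workflow capture with provenance (hypothetical;
# not the VisTrails or NEX API). Each step logs its inputs, parameters,
# and outputs so a colleague can repeat or tweak the experiment.
import hashlib
import json
from datetime import datetime, timezone


def _fingerprint(obj):
    """Stable short hash of a (JSON-serializable) object for provenance records."""
    return hashlib.sha256(
        json.dumps(obj, sort_keys=True, default=str).encode()
    ).hexdigest()[:12]


class Workflow:
    def __init__(self, name):
        self.name = name
        self.provenance = []  # ordered record of executed steps

    def run_step(self, step_name, func, inputs, **params):
        """Execute one step and record what went in and what came out."""
        outputs = func(inputs, **params)
        self.provenance.append({
            "step": step_name,
            "inputs": _fingerprint(inputs),
            "params": params,
            "outputs": _fingerprint(outputs),
            "timestamp": datetime.now(timezone.utc).isoformat(),
        })
        return outputs

    def export(self, path):
        """Share the captured workflow so others can re-run or modify it."""
        with open(path, "w") as f:
            json.dump({"workflow": self.name, "steps": self.provenance}, f, indent=2)


# Example: load a (toy) remote-sensing tile, then compute an anomaly against a baseline.
wf = Workflow("ndvi_anomaly_demo")
tile = wf.run_step("load", lambda p: {"values": [0.31, 0.42, 0.38]}, "tile_042.h5")
anomaly = wf.run_step(
    "anomaly",
    lambda d, baseline: {"values": [v - baseline for v in d["values"]]},
    tile,
    baseline=0.35,
)
wf.export("ndvi_anomaly_provenance.json")

Exporting the provenance record is the piece that makes the experiment shareable and repeatable in the sense the authors describe.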

“Our vision is to provide an environment capable of capturing the workflow so that it can be shared with colleagues who can then repeat the experiment and/or tweak the input data/algorithms to generate new knowledge,” write the authors.

The second initiative aims to integrate analytic capability – more specifically visualization – with compute capability. This speeds up what was traditionally a sequential process. In the past, visualization was a post-processing activity that could only be performed after the computation phase. Now NASA’s visualization engine, the hyperwall, is connected to the Pleiades supercomputer over the same InfiniBand fabric, so the two share storage resources in their Lustre filesystem. Data streams can be directed from computation nodes to the visualization nodes via the InfiniBand I/O fabric while the code is running. The intermediate data can be examined concurrently with execution (to steer computation) or stored for later analysis. The benefit is higher temporal fidelity at much lower storage cost.
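The concurrent-analysis pattern can be sketched in a few lines. This is illustrative only; the actual Pleiades/hyperwall plumbing runs over InfiniBand and shared Lustre storage, and the solver and monitor names below are hypothetical. The idea is that the solver periodically hands intermediate snapshots to an analysis consumer instead of waiting until the end of the run, so results can be inspected, used to steer the computation, or archived.

# Illustrative sketch of concurrent (in-situ) analysis: the solver hands
# intermediate state to an analysis callback every few steps instead of
# post-processing after the run. Hypothetical code, not NASA's software.
import numpy as np


def run_simulation(steps, analyze_every, on_snapshot):
    """Toy explicit diffusion solver that streams intermediate state."""
    field = np.random.rand(256, 256)
    for step in range(steps):
        # simple 5-point stencil update (toy stand-in for a real solver)
        field = field + 0.1 * (
            np.roll(field, 1, 0) + np.roll(field, -1, 0) +
            np.roll(field, 1, 1) + np.roll(field, -1, 1) - 4 * field
        )
        if step % analyze_every == 0:
            # Hand the snapshot to the visualization/analysis side while
            # computation continues; the callback may also steer the run.
            if on_snapshot(step, field) == "stop":
                break
    return field


def monitor(step, field):
    """Stand-in for a visualization/steering consumer."""
    print(f"step {step}: mean={field.mean():.4f} max={field.max():.4f}")
    # Example steering decision: stop once the field is nearly uniform.
    if field.std() < 1e-3:
        return "stop"


run_simulation(steps=500, analyze_every=50, on_snapshot=monitor)

Because only the snapshots that matter are inspected or kept, the run retains fine temporal detail without storing every timestep to disk.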

Going forward, NAS aims to continue to optimize the data workflow, using knowledge about the data to guide this process. “We don’t want to touch all of the data if we don’t have to,” the authors write. “We know a lot about the structure of the data that might be used to steer the computation toward the subsets of the data that are applicable to the query – and not use the subsets we know are not relevant. This is the good news side…the bad news side is that there is a lot of complexity hiding behind the data and this complexity is critical to using it properly.”
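In practice, “not touching all of the data” amounts to filtering on metadata before any bulk reads. A minimal hypothetical sketch (the catalog fields and file names are illustrative, not NASA's actual index):

# Hypothetical sketch of metadata-driven subsetting: consult an index of
# file-level metadata to select only the granules relevant to a query,
# so the bulk of the archive is never read. Names/fields are illustrative.
from datetime import date

# A small metadata catalog; in practice this would be a database or index
# built from the archive, queried without opening the data files themselves.
catalog = [
    {"file": "granule_001.h5", "variable": "surface_temp",
     "region": "amazon", "start": date(2013, 6, 1), "end": date(2013, 6, 30)},
    {"file": "granule_002.h5", "variable": "surface_temp",
     "region": "sahara", "start": date(2013, 6, 1), "end": date(2013, 6, 30)},
    {"file": "granule_003.h5", "variable": "radiance",
     "region": "amazon", "start": date(2013, 7, 1), "end": date(2013, 7, 31)},
]


def select_granules(variable, region, start, end):
    """Return only the files whose metadata overlaps the query."""
    return [entry["file"] for entry in catalog
            if entry["variable"] == variable
            and entry["region"] == region
            and entry["start"] <= end and entry["end"] >= start]


# Only granule_001.h5 is touched; the rest of the archive is skipped.
print(select_granules("surface_temp", "amazon",
                      date(2013, 6, 10), date(2013, 6, 20)))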

An example of this complexity is remote sensing of atmosphere and land temperatures from space. A satellite does not really measure temperature; it measures radiance, and converting that reading into a temperature requires a lot of knowledge about the sensor itself. Or take a satellite that is nominally in a sun-synchronous orbit: what if the orbit has drifted, they ask? With all this information and metadata being crucial for the discovery challenge, the task at hand is making it all more accessible to the user. A good place to start, according to the authors, is determining what approaches (representation, tools and algorithms) best support the orchestration of metadata. And as always, they emphasize the importance of “never los[ing] sight of the fact that our product is the scientific and engineering knowledge that we extract from big data.”
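As a concrete illustration of the radiance-to-temperature point, the textbook step is inverting the Planck function to get a brightness temperature. The sketch below is only that textbook formula; a real retrieval also requires detailed knowledge of the sensor's spectral response, calibration, and orbit, and this is not NASA's production code.

# Illustrative inverse-Planck conversion from measured radiance to
# brightness temperature. Real retrievals also need detailed sensor
# characterization; this is only the textbook formula.
import math

C1 = 1.191042e-8   # W m^-2 sr^-1 (cm^-1)^-4, first radiation constant (2hc^2)
C2 = 1.4387752     # K cm, second radiation constant (hc/k)


def brightness_temperature(radiance, wavenumber):
    """Invert Planck's law: T = c2*nu / ln(1 + c1*nu^3 / L).

    radiance   : spectral radiance in W m^-2 sr^-1 (cm^-1)^-1
    wavenumber : channel central wavenumber in cm^-1
    """
    return C2 * wavenumber / math.log(1.0 + C1 * wavenumber**3 / radiance)


# Example: a thermal infrared channel near 900 cm^-1 (~11 micrometers);
# a radiance of 0.1 W m^-2 sr^-1 (cm^-1)^-1 corresponds to roughly 289 K.
print(f"{brightness_temperature(radiance=0.1, wavenumber=900.0):.1f} K")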
