ADIOS: Providing A Framework for Scientific Data on Exascale Systems

May 30, 2018

Editor’s Note: Hardware development associated with the U.S. Exascale Computing Initiative receives the lion’s share of attention, but software is at least as important. Presented here is an interview with the team working on the Adaptable I/O System (ADIOS) effort, part of the Exascale Computing Project’s (ECP’s) software development portfolio, posted today on the ECP site. ADIOS tackles critical data management challenges.

The Adaptable I/O System (ADIOS) project in the ECP supports exascale applications by addressing their data management and in situ analysis needs. Led by Scott Klasky of Oak Ridge National Laboratory, the project is optimizing I/O on exascale architectures while making the software easily maintainable, sustainable, and extensible and ensuring its performance and scalability. Klasky and some of the ADIOS team members joined ECP Communications on February 6 at the ECP 2nd Annual Meeting in Knoxville, Tennessee, for a podcast discussion. This is an edited transcript.

What is the high-level description of your project? 

Think of ADIOS as a framework for putting computation in the proper place at the proper time in a data-rich environment. It provides a novel way of thinking about I/O and extreme-scale data management. Essentially, it allows scientists to describe their data and how they would like to use it. They don’t have to worry about details like file formats and storage technology, so think of it as a very simple way to get extreme-performance I/O.
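To make that concrete, here is a minimal sketch of what that “describe your data, then open, write, close” model looks like with the ADIOS2 C++ API. The variable name, sizes, and output file name are illustrative choices, not details from the interview.

```cpp
#include <adios2.h>
#include <mpi.h>
#include <cstddef>
#include <vector>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank = 0, nproc = 1;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nproc);

    const std::size_t localN = 100;               // elements owned by this rank
    std::vector<double> temperature(localN, 1.5); // illustrative payload

    adios2::ADIOS adios(MPI_COMM_WORLD);
    adios2::IO io = adios.DeclareIO("SimulationOutput");

    // Describe the data once: global shape, this rank's offset, local count.
    auto var = io.DefineVariable<double>(
        "temperature", {nproc * localN}, {rank * localN}, {localN});

    // Open, write, close. Which engine services this call (file, staging,
    // in situ) is a configuration choice, not an application-code choice.
    adios2::Engine writer = io.Open("output.bp", adios2::Mode::Write);
    writer.BeginStep();
    writer.Put(var, temperature.data());
    writer.EndStep();
    writer.Close();

    MPI_Finalize();
    return 0;
}
```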

You’re working with over a dozen different ECP technologies through the Fusion Whole Device Modeling [WDM] application. Could you tell us more about this and how it relates to other technologies and applications?

ECP is a very exciting project because we’re talking about how to bring all these different pieces of technology together, and that’s a very important part of ECP, because there is never one single solution to everything. One of the things I’ve learned over many years of doing science, first as a general relativist and then as a computer scientist, is that application scientists just want an easy way into this technology. They want to do their science; they don’t want to be bothered. So can we actually do something simple? I/O should be simple: I want to open, I want to write, I want to read.

For the WDM project, what we wanted to do was take two codes developed by two separate teams and make them work together without changing much of the code. You basically just read and write files; physicists can work easily with files. Then we ask: can we make that run in situ, in memory? Don’t change your code; now it runs. These codes also produce a lot of data, and there’s ECP technology to reduce it. We work with projects such as EZ, which provides the SZ compression mechanism, and with ZFP. We have another technology, MGARD, that comes out of the CODAR [Co-Design Center for Online Data Analysis and Reduction] co-design project. So now, when they’re reading and writing, they don’t care; they just specify reduction, and the different variables are reduced. Now they want to visualize: don’t change your code, just run a visualization service, and everything occurs in memory. Turn on performance monitoring and get measurements from TAU. All of a sudden, we get this whole ecosystem.

Using technologies such as DataSpaces and EVPath, they get all these capabilities, but to them it just looks like opening, reading, and writing a file. With all this real-time monitoring of the codes and the coupling, they can do their physics without being burdened by it, and they can do it in a reliable fashion. We’ve learned a lot along the way: yes, we have to make things more resilient; yes, we have to make things work better. But the point is that they just want an easy way in; then they can use all these separate technologies and get a big win by doing so.
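As a hedged sketch of how that “don’t change your code” idea typically looks with the ADIOS2 API: the open/write/close sequence from the sketch above stays untouched, and in situ staging and lossy reduction are requested through the IO object. The engine and operator names follow ADIOS2 conventions; the parameter value is illustrative, and operator APIs vary somewhat across ADIOS2 versions.

```cpp
// Continuing the earlier sketch: only the IO setup changes; the
// BeginStep/Put/EndStep sequence in the application is untouched.
adios2::IO io = adios.DeclareIO("CoupledOutput");

// Swap the file engine for in-memory staging (ADIOS2's SST engine)
// so a second code can consume the stream in situ.
io.SetEngine("SST");

auto var = io.DefineVariable<double>(
    "temperature", {nproc * localN}, {rank * localN}, {localN});

// Request lossy reduction with the ZFP operator; the "rate" value
// here is illustrative, not a recommendation.
var.AddOperation("zfp", {{"rate", "8"}});
```

In practice the engine choice can also be made in a runtime XML configuration file, so switching between file output and in situ coupling requires no recompilation at all.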

ADIOS project team at the ECP 2nd Annual Meeting, Knoxville, Tennessee, February 2018. From left, John Wu, Lawrence Berkeley National Laboratory; Scott Klasky, Oak Ridge National Laboratory (ORNL); Greg Eisenhauer, Georgia Tech; Norbert Podhorszki, ORNL; Qing Liu, New Jersey Institute of Technology; Chuck Atkins, Kitware; and Ruonan Wang, ORNL. Not pictured: Matthew Wolf, ORNL; and Manish Parashar, Rutgers University.

Are there certain areas of this project that you think would be especially good to elucidate, to have further insight about so that people just get a better understanding of what ADIOS is all about?

Absolutely. I’d like to call on one of my colleagues, Norbert Podhorszki, who’s an expert in this area, because the important thing with ECP is that this is a team, a team built with people from around the world. Norbert can elaborate on this.

Podhorszki: Yes, thank you, Scott. If I had to summarize in two sentences what Scott described about working with so many projects, it’s that ADIOS allows scientists to think about their data and how to extract the science, the knowledge, from it in an integrated way, so that they are not distracted by the details of the technology. They can describe the data and their intent for it at a high level, and ADIOS is the framework that brings together all the mechanisms and services to execute that intent efficiently and automatically.

Why is this area of research important to the overall efforts to build a capable exascale ecosystem, Scott?

That’s an excellent question. I’m going to have my good friend and colleague Greg Eisenhauer from Georgia Tech answer it.

Eisenhauer: I think the answer is that effectively managing large volumes of data is a key challenge that can limit the science impact of exascale. ADIOS addresses this challenge in several ways. It is designed as a service-oriented architecture that can easily and effectively be leveraged by applications. It also enables the use of self-describing data with different file formats, which are hidden from the user and optimized for the access patterns of the code and the data.

A particularly key aspect of ADIOS is that it allows a separation of intent from mechanism. We want users to describe what they want to do, and ADIOS ensures that it is implemented efficiently under the hood. In this way, ADIOS gives scientists an easy way to leverage state-of-the-art technologies and solutions without compromising the integrity or stability of their code, because they don’t have to change it. For example, with ADIOS, scientists only have to think about reading and writing files, and they can seamlessly use the same code in situations that involve synchronous and asynchronous in situ coupling, data reduction, indexing, different file formats, all sorts of different technologies.
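As a rough illustration of that separation of intent from mechanism, here is a minimal reader sketch mirroring the writer above; whether the steps arrive from a file or from an in situ stream is decided by engine configuration, not by this code. The names are carried over from the earlier illustrative sketches.

```cpp
// Minimal reader sketch: the same loop works for post hoc file
// reading and for in situ streaming, depending on the engine.
adios2::IO io = adios.DeclareIO("AnalysisInput");
adios2::Engine reader = io.Open("output.bp", adios2::Mode::Read);

std::vector<double> temperature;
while (reader.BeginStep() == adios2::StepStatus::OK)
{
    // Self-describing data: discover the variable by name.
    auto var = io.InquireVariable<double>("temperature");
    if (var)
    {
        reader.Get(var, temperature); // the vector is resized as needed
    }
    reader.EndStep();
}
reader.Close();
```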

ADIOS has been around for many years. What’s the significance of ECP to ADIOS?

Well, again, another excellent question. Efficient and effective data management is critical at all scales. All science is about data, and some of the challenges become more pronounced at exascale. It’s tricky to answer because we’re very passionate about this. We’ve done a lot of research and development, but without funding, of course, we can’t continue it, so we need a mechanism like anyone else. As scientists we have a passion, so we’re going to do this regardless, but exascale really gives us a sense of community. We’ve worked with dozens of students all over the world. I’ve traveled around the world talking about ADIOS, getting people involved from all these different countries and conveying the passion we have for data, because data is the essential commodity of computing; we can’t do science without it. Without ECP funding, a lot of this would have been much more difficult in many different respects.

I’ll say one of the most important things for us is taking a lot of the research we’ve done, software we already have running, and making it more stable. So we have Kitware involved, applying the expertise they’ve built on projects like VTK to ADIOS, so that we have a much more sustainable infrastructure. We’re also working with brilliant researchers at, say, Rutgers, who can take their research artifacts and harden them. I think ECP makes it so that things that only sort of worked now work reliably, and things that already work well can be made to work on newer types of technology, which we wouldn’t be able to do as well without the funding.

Why was this research area selected for exascale?

You know, I’m really biased here. Science, as I said, is all about the data. If you can’t efficiently process, move, and manage data, given all the different complexities being thrown at users at exascale, then there’s no science. Without this kind of research, I don’t think there would be any science coming out. We have to provide a capable software ecosystem that can handle extreme-scale data on these large exascale platforms.

You and your team are obviously passionate about this work. What accomplishments at this point would you really like to highlight, things that you’re particularly proud of?

That’s a really good question. Remember the question you asked me about code coupling? We’re really proud of that, because probably about 35 different scientists contributed different pieces to make it so that the physicists could actually get their science done. The physicists didn’t have to care about each individual technology; we’ve got that covered. We’re working on an article about it for Science magazine. My good friend C. S. Chang is leading that, along with the leader of the project, Amitava Bhattacharjee.

And again, it’s really motivating. I talk to other applications here. For instance, Mark Taylor leads the ECP climate project, and when we can get them more performance, they can write out more data and process it quickly, and we can provide more hooks into more ECP software, so better science can be enabled. Then we think about our task: we’re making software, and software has bugs, so we work with really good software engineers like Chuck Atkins at Kitware, who make sure this stays stable, so that if any one of the software technologies crashes, the physics runs can still proceed; we can bring the crashed service back up. One of our postdocs, Jason Wang, has a new type of staging technology that lets us restore these services even if they crash. That resiliency will be built into many of our technologies along the way, always making sure the science is enabled without overburdening the applications with the technologies.

You’ve already touched on the benefits of working with other experts. Can you speak more to that, the working relationships that have resulted from your ECP collaboration?

I’ll be brief and say that we’re leveraging a lot of wonderful research enabled by ASCR [the US Department of Energy’s Advanced Scientific Computing Research program in the Office of Science] and program managers such as Lucy Nowell, Randall Laviolette, and Ceren Susut. Now we are bringing all this research together and making it a sustainable infrastructure under ECP. We aim to build long-lasting relationships with other applications and other software technologies through ECP collaborations.

What’s next for the ADIOS project?

Everything is about performance, performance, performance; it’s ECP. So for us, performance, but reliability along the way. We are working to bring on more applications that can stress other features of our software, building a community and a software ecosystem so that applications can have a very easy time with all the new challenges of exascale and beyond.

Any final comments before we wrap up the discussion today?

Yes. I’d like to thank ECP and the entire program team, along with all the facilities that we run on. And I’d like to thank everyone listening, including you, Scott, for spending time with me today.

Link to ECP article: https://www.exascaleproject.org/adios-providing-a-framework-for-scientific-data-on-exascale-systems/
