Helping Experimental Scientists Take Supercomputers to the Max

By Doug Black, Contributing Writer

December 30, 2014

Doug Baxter is a capability lead for the Molecular Science Computing Facility in the Environmental Molecular Sciences Laboratory (EMSL) at Pacific Northwest National Laboratory. He and his team are responsible for the software side of the operation, and they help experimental scientists get the most out of EMSL’s supercomputing resources. The facility is the home of Cascade, which is ranked number 18 on the current TOP500 list of the world’s most powerful supercomputers.

In this interview, we talk with Doug about EMSL’s Cascade supercomputer, the NWChem software package, and code modernization.

HPCwire: Can you give me a sense of what your daily role is at EMSL?

Baxter: I mainly manage the allocation of resources on our Cascade supercomputer and help our users get up and running on it successfully. Once they get their application running, I have a team of computational biologists, computational chemists, and computer scientists who help them address performance and efficiency.

We’re a national scientific user facility, and we help users from all over the nation working on scientific applications relevant to DOE’s Office of Biological and Environmental Research (BER). They are focused on predictive understanding for biological processes, subsurface flow, contaminants and clean-up, climate modeling, and aerosols.

One thing that makes EMSL special is the combination of our experimental instruments and our high performance computing, which provides a theory side to the experimental aspect of science.

HPCwire: Is the work that you’re doing primarily in support of applications running on Cascade?

Baxter: It is primarily in support of Cascade and the corresponding archive system, which is shared with our institutional computing facility. We devote part of our time to outside research projects, including other supercomputing efforts here at the laboratory. We also have an institutional supercomputer, Olympus, and its successor, Constance.

HPCwire: How much of the workload on Cascade is NWChem?

Baxter: NWChem comprises 30 to 40 percent of the workload on Cascade. We keep statistics on what we run on the machine, and we're starting to see an increase in our climate modeling codes and our subsurface flow modeling codes, as well as some of our computational biology codes, as we take on new projects in BER's areas of interest. But as we support BER's mission, we expect that the computational chemistry pieces will remain a large part of the workload.

HPCwire: You commented on climate modeling. Is this a lot of proprietary code?

Baxter: These are mostly codes that come out of NOAA, so they are publicly available, including the Weather Research and Forecasting (WRF) model. DOE research heavily utilizes the Community Earth System Model (CESM) and its land model component, the Community Land Model (CLM), both also publicly available. We also do a lot of aerosol modeling, which gets down to the molecular chemistry level, and then we're back into computational chemistry again.

HPCwire: Are many of those codes you’re referring to, other than NWChem, developed for parallel systems?

Baxter: The climate codes and the subsurface flow codes are developed for parallel systems. Parallel systems have been evolving over the years as well, so a lot of these codes have a long history of parallel computing. But that doesn't necessarily mean they're ready to move on to the next stage of computing, to grander scales of parallelism. That requires some rethinking of the way we've traditionally done things.

HPCwire: On that topic of code modernization or code optimization, what does that mean for you and your team as you prepare some of the codes for the many-core architectures?

Baxter: Before you get to the parallel stage, you have to start with good serial code. So some of what we do in preparing code to run at scale is going back to the mathematics and asking, 'What exactly do we want to solve here?' Sometimes we have to think differently, in a fundamental way, about various things. Traditionally, in parallel computing, the assumption has been, 'I have a fixed number of resources for the duration of the task that I'm executing and I have a lot of things to do. I partition the tasks I want done among the players that I have, but my expectation is that all those players play for the full duration of the job and they synchronize with each other.'

One of the difficulties moving forward into exascale is global synchronization. As we increase to hundreds of thousands of processes, or possibly millions of processes, synchronization becomes untenable. So we must think about things in a non-global-participation way. That requires a fair amount of effort, because you need to think differently, algorithmically, about traditional computing. It used to be that FLOPS were very expensive and memory accesses were relatively inexpensive. So people spent a lot of time saving the results of computation so that they didn't have to recompute them and do the expensive part again. Now we've got a lot more FLOPS than we have memory accesses, so you can compute much faster than you can move data. That shifts the emphasis: it is sometimes cheaper to recompute a result than to store it and read it back in. And as we move toward greater predictive understanding of the processes our sponsor is interested in, that requires higher resolution in our models, which means more data points and bigger problems to solve. We need more processors to work on those problems, but we also have to think about solving them in different ways.
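As a toy illustration of that recompute-versus-store tradeoff (mine, not from the interview; the function and loop are made up), the two routines below deliver the same values to an inner loop, one by streaming a precomputed table back from memory and one by regenerating each value on the fly. On current hardware the second approach can be competitive or faster, because arithmetic has become cheap relative to data movement.

```c
#include <math.h>

/* Two ways to supply the same quantity inside an inner loop.
 * Historically the table version won because FLOPS were the scarce
 * resource; today the recompute version often wins because reading
 * 8 bytes from DRAM costs more than a few floating-point operations.
 * The size n and the use of sin() are purely illustrative. */

double use_stored(const double *table, long n)
{
    double sum = 0.0;
    for (long i = 0; i < n; ++i)
        sum += table[i];          /* one DRAM read per element */
    return sum;
}

double use_recomputed(long n)
{
    double sum = 0.0;
    for (long i = 0; i < n; ++i)
        sum += sin(0.001 * i);    /* a few FLOPS per element, no table traffic */
    return sum;
}
```

The stored version only pays off if the table is reused many times between refills, which is exactly the assumption that weakens as machines deliver far more FLOPS than memory bandwidth.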

HPCwire: How important is the role of coprocessors moving forward?

Baxter: They are important. They're really fast at computing, but keeping them busy is a challenge, and one of the requirements is the ability to move data to them asynchronously, in a one-sided fashion, which is becoming more prevalent. The Message Passing Interface (MPI) is the standard distributed-memory programming paradigm, and the MPI-2 and MPI-3 standards include one-sided communication protocols, where one processor can move data into another processor's memory space without taxing that processor.
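This isn't from the interview, but as a minimal sketch of what that one-sided model looks like in MPI (C): rank 0 writes directly into a memory window exposed by rank 1, without rank 1 posting a matching receive. The window size, payload, and fence-based synchronization here are illustrative; production codes often prefer passive-target locks over collective fences.

```c
#include <mpi.h>
#include <stdio.h>

/* One-sided (RMA) sketch: run with at least two ranks,
 * e.g. "mpirun -np 2 ./a.out".                          */
int main(int argc, char **argv)
{
    int rank, buf = 0;
    MPI_Win win;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Every rank exposes one int through the window. */
    MPI_Win_create(&buf, sizeof(int), sizeof(int),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    MPI_Win_fence(0, win);              /* open an access epoch */
    if (rank == 0) {
        int payload = 42;
        /* Write into rank 1's window at displacement 0;
         * rank 1 does not participate in this call.      */
        MPI_Put(&payload, 1, MPI_INT, 1, 0, 1, MPI_INT, win);
    }
    MPI_Win_fence(0, win);              /* close the epoch; data now visible */

    if (rank == 1)
        printf("rank 1 received %d via MPI_Put\n", buf);

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}
```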

HPCwire: How do you train your staff to get into the right frame of mind?

Baxter: A good starting place is the Jeffers/Reinders book (Intel Xeon Phi Coprocessor High Performance Programming by Jim Jeffers and James Reinders). Fundamentally, it's about starting with good serial code and then managing message passing in general. We also spend some effort developing methodologies that work with MPI. One thing I find useful is experimenting with the Intel Symmetric Communication Interface (SCIF), which is used to support MPI on the coprocessor. One of the basic ways to use the Xeon Phi coprocessors is to run MPI ranks on each of them, so you use the same standard programming model. The difficulty with that is you can't use all the Xeon Phi coprocessors on more than a handful of nodes, because the MPI implementation layer is too memory-intensive. But the SCIF API exposes the communication calls, which allows us to go in and play with that in different ways.
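As a rough illustration of what those lower-level calls look like (mine, not EMSL's, and based on my reading of Intel's MPSS documentation for SCIF), the host-side sketch below opens a SCIF endpoint, connects to a hypothetical listener on the first coprocessor, and pushes a small buffer across PCIe. The node number, port, and message are illustrative, and error handling is abbreviated.

```c
#include <scif.h>      /* Intel MPSS Symmetric Communication Interface */
#include <stdio.h>

int main(void)
{
    /* Open a SCIF endpoint on the host. */
    scif_epd_t epd = scif_open();
    if (epd == SCIF_OPEN_FAILED) {
        perror("scif_open");
        return 1;
    }

    /* Connect to a (hypothetical) listener on coprocessor node 1, port 2050. */
    struct scif_portID dest = { .node = 1, .port = 2050 };
    if (scif_connect(epd, &dest) < 0) {
        perror("scif_connect");
        return 1;
    }

    /* Blocking send of a small message across PCIe. */
    char msg[] = "hello from the host";
    scif_send(epd, msg, sizeof msg, SCIF_SEND_BLOCK);

    scif_close(epd);
    return 0;
}
```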

Aside from assisting our various supercomputer users we also have some outside research interests that help improve our ability to help our users. And so part of what I do is work on ways to use those accelerators generically and then push that out to people doing development.

HPCwire: The theme for SC14 this year was “HPC Matters” and for 2015, the theme is “HPC Transforms.” In your own words, why do you think HPC matters?

Baxter: HPC does matter, and it continually matters more. We use HPC to solve larger modeling problems, which are designed to help us get what we call predictive understanding of models and processes, such as the flow of radioactive ions leaking from waste tanks. We want to understand how to remediate that problem. Some of what we do is simulate the bacteria that can reduce those ions so they precipitate out of solution, making them immobile. Some of it is flow analysis of the water table and the surrounding elements to understand whether there is a risk of radiation reaching a water source. Some of the modeling that we do is climate modeling, to understand aerosols and the effects of man-made pollution on the radiative energy balance, and what that is doing to our environment. Those models require lots of data and lots of computation, but they help us understand processes. For something like energy storage, understanding the process helps us control, modify, and improve efficiency.

The other important part of HPC is predictive modeling. It's frequently much less expensive to model things and arrive at possibilities to test experimentally than it is to build many different physical test models. If our model is accurate and points to a promising way to go, it helps narrow down the breadth of possible solutions as we explore and develop mechanistic, chemical, and biological solutions to technical problems.

HPCwire: Of the work that your team does here, what are you most proud of?

Baxter: At any given time we have about 60 different proposals using our supercomputer. We are proud of the impact we have on science, of our ability to provide a production environment for our users, and of our recent success in transitioning from our previous supercomputer to a new one. It involved planning and experimenting on the old system before we moved over to the new one, and getting the software pieces in order and ready to run. It's a challenging process, but that was perhaps one of the smoothest transitions we've had. We opened the machine to first users on the 6th of December (2013). By the 1st of January, all of our users had migrated and all of their codes were running on the new system. So our most significant broad achievement is the transition from a five-year-old supercomputer to the 18th fastest in the world. Migrating users in less than a month is pretty impressive.

HPCwire: Do you have examples of some of the things you can do with Cascade that you couldn’t do with your previous supercomputer, Chinook?

Baxter: One example is in the NWChem arena where we have been able to increase the scale of the problems our users are able to tackle with Cascade.

In terms of peak performance, our previous machine, Chinook, was a 160-teraflop machine; our current machine, Cascade, is a 3.5-petaflop machine. So that's more than a 20X improvement in peak performance. Without the accelerators, our expectation was that Cascade would run about three times faster than Chinook. What we found was that it ran four to six times faster.

Getting into the accelerators is a challenge. It takes some effort and rewriting of some code. That's one of the things we need to get the community to understand: it's more than just plunking a machine down on the floor. It takes a software development effort to make these things go. But the return can be worth it. We've had some success in getting improved speedups with the accelerators. Linpack-wise, we measure 2.5 petaflops out of the peak 3.4 petaflops. That's an achievement. We don't run Linpack on the machine as a workload, but it's a measure of the machine's capacity. Then we work on bringing that kind of improvement to our other codes.
