DOE Under Secretary for Science Paul Dabbar Interviewed at SC18

By Tiffany Trader

November 21, 2018

During the 30th annual SC conference in Dallas last week, SC18 hosted U.S. Department of Energy Under Secretary for Science Paul M. Dabbar. In attendance Nov. 13-14, Dabbar delivered remarks at the Top500 panel, met with a number of industry stakeholders and toured the show floor. He also met with HPCwire for an interview, where we discussed the role of the DOE in advancing leadership computing.

Dabbar serves as the Department’s principal advisor on fundamental scientific research in high energy and nuclear physics, advanced computing, fusion, and biological and environmental research. He also has direct management responsibility for many of DOE’s national labs that run data-intensive experiments.

HPCwire: We’re here at SC18, which marks the 30th anniversary of the Supercomputing Conference, first held in 1988, with Under Secretary for Science Paul Dabbar. What prompted this level of involvement and participation?

Paul Dabbar: Supercomputing is at the high end of our focus at the Department of Energy in terms of asking for and getting increased dollars to invest. Since the leadership team’s been in place here for a couple of years, our budget for advanced computing is up 45 percent. The whole Office of Science is up almost 25 percent, and there’s a broad theme of investment in the sciences. As part of that very broad-based increase in construction of all sorts of user facilities, whether it’s in high-energy particle physics, nuclear physics, genomics and biology and so on, a common theme is that we are building capabilities we need to optimize, and frankly, with the amount of data that we’re going to be creating across the whole of the science complex, we need to be able to get great use of that. And not only for us, but for the work that we do internationally with CERN and all the other big science user facilities. For us to innovate and identify the problems in the universe to go attack requires increasingly higher levels of computing power.

As we looked at what facilities and capabilities we need across all of those particular science areas, HPC and the whole computing area is a common thread for us to optimize across everything. So that’s been our big push. Obviously it’s been built on the backs of a number of people here at the department who have been working on this for decades, and we were lucky as a leadership team to come in with that groundwork already in place so we could accelerate the build-out of those capabilities.

HPCwire: You spoke at the Top500 panel on Tuesday night, which discussed major trends in leadership computing, and stated that this is a critical and exciting time for science and supercomputing. What’s behind that statement?

Dabbar: From a broader science point of view, we’re at the crux of a number of particularly exciting areas in science. I think if we apply the right amount of capital and the right brain power, we can make some really material moves in science across the world. The world is at the cusp of making dramatic moves in artificial intelligence and machine learning, quantum information science, space exploration, advanced and sustainable energy, advanced mobility and genomics. So when you think about the areas of the sciences broadly speaking, we are very excited about those opportunities.

We are very blessed that there is a consensus that increased investment is needed. As I mentioned, the Office of Science is up almost 25 percent this year in terms of spending, and the National Institutes of Health are up about 20 percent and the National Science Foundation about 10 percent. There’s been broad support for increased federal dollars because the opportunity is there and people see that this is a place for driving innovation for the country and the world. How does HPC factor into that? Clearly, when you think about all those different sciences, data and optimizing data is a big part of each and every one of them. So once again it’s a common thread across all the sciences, and so it’s very important for us. It’s also important economically for the United States. We know that effectively we are the pointy end of the spear. We are the seed money that moves things along. So we act not only as a basic researcher in many areas, such as materials and characterization, that are applicable to microelectronics.

We also know that we are the high-end consumer of the products; we help drive the industry. We drive it for our own purposes, effectively, from a science and research point of view. But as we seed the whole industry to develop HPC to the next cutting-edge point, we know that we are seeding the whole country and an entire industry that has applications far beyond our particular area.

HPCwire: The twice-yearly list of the world’s fastest computers was just announced, with DOE labs Oak Ridge and Livermore operating the top two systems, Summit and Sierra, and DOE labs operating five of the top ten machines. What is the significance of having these very powerful computers in a global context, where other nations are also making significant advances in computing?

Dabbar: We focus a little bit less on exact rankings than on capabilities. Clearly, being at the front end of capabilities means that, for everything I’ve been discussing regarding all of our needs, we have the capability of basically leading and using all of the data that we create across the science complex. That’s the first and foremost aspect. I do think there’s an important secondary aspect, which is that, at the end of the day, when we build large user facilities we are a beacon to the science world. One of the things that is really unique about the United States and how we run the science complex and National Labs is that access to our facilities is open, transnational, merit-based and proposal-based; whether it’s a light source or a computing facility, people propose based on merit from anywhere in the world. The U.S. is only five percent of the world’s population, and we have a great history of people coming to this country, including to large parts of our lab complex, from our friends all around the world who are very open to open and cooperative access to science and research. So having the best facilities in the world draws the best people in the world, from here in the United States and from all over the world, to drive science and research in the United States. I think that’s an important aspect of us having these capabilities.

HPCwire: With CORAL-2, the second procurement project for the Collaboration of Oak Ridge, Argonne and Livermore, the U.S. has declared its intention to spend up to $1.8 billion on two or potentially three exascale supercomputers. What is the status of this project?

Dabbar: Clearly, as we look at CORAL-2, that’s the next solicitation for the two machines [at Oak Ridge and Livermore] and possibly an upgrade at Argonne [editor’s note: i.e., an upgrade to Aurora 2021]. That process is moving along well. We’ve received proposals and we’re looking at finalizing our decision. We are going to make a decision and we are going to move forward. The exact architecture and the exact suppliers have not been finalized yet, but we are getting very close, as we took proposals several months ago.

Clearly we have the dollars, as I was commenting, from a budget point of view to go and execute, and we received great proposals, so we have no problems there. We are very much heading down the road of securing those.

HPCwire: What is the role of the government in funding these large projects?

Dabbar: I spend a lot of time with our partners who help us with our labs or who work on various other kinds of grant programs at MIT and Caltech and Stanford and Princeton. The reality is, when you think about having very large user facilities with broad-based access for all sorts of researchers, not only inside our labs but from universities all over the U.S. and all over the world, someone has to build them. And notwithstanding the wonderful endowments of Princeton and MIT and Harvard and Stanford, the resources to pull together stadium-sized light sources, x-ray free-electron lasers that are miles long, or a computing facility involve dollar amounts that even the most well-endowed non-government science organizations would have a hard time matching. So once again, I think it’s very important, and there’s a long history, to a large degree starting right after World War II, around the importance of having a federal footprint so that we could have these facilities open to a broad range of researchers. That’s the core of what we are as a National Lab complex. That’s the basis of who we are. And so this is just an evolution of the next range of us pushing technology in HPC, amongst other user facilities, for us to invest in. And we’re very gratified to see the nation supporting that at all-time-high levels.

HPCwire: Having supercomputers, the hardware, is important, but there is no benefit to these powerful machines without the people — people to design and build and program them and also the computational scientists and engineers who utilize them. What role does the DOE have in training and workforce development to ensure a sustainable talent pipeline?

Dabbar: To a reasonable degree, the department takes the lead on hardware, meaning we build a lot of user facilities that others would have a hard time building, as I was commenting. And then we provide access to those, both to our own researchers and to people from outside the lab complex. Part of it is the actual research itself and pushing forward the bounds of knowledge on a particular topic, but we know that we also help develop the workforce along the whole chain of everything that we do research on, from the university level up. To a large degree, we are a very heavy funder of graduate school training, effectively, through our grant processes. Across each and every one of our particular areas of science, last year’s number was $3.1 billion in grants; this year’s number will be a little higher.

The grants that go out are to a large degree spread to universities all over the United States. A principal investigator makes a grant proposal, whether it’s to use a particular HPC machine, to use a light source, or to do some research at their own lab or university. So we spread around a lot of money to help drive research, but we clearly know a secondary aspect of that is that the PI has a whole bunch of grad students, undergrads and others supporting them in the labs. Whenever I go to universities, I hear about people who receive our grants who are working on something very specific, and it’s not just the principal investigator, who may be a senior researcher or a professor at a university; the money flows all the way down through all the lab support and research support. We know a very important secondary aspect of our entire grant program is that we feed the whole system in terms of STEM and education. It’s an important part not only of the DOE but of the National Science Foundation, the National Institutes of Health, the whole federal complex: this is a big part of what we do for the nation.

HPCwire: Here at Supercomputing and within the high-performance computing sphere, there is ongoing outreach to communicate the value of HPC to the broader community. How do you explain to colleagues, stakeholders and everyday people why supercomputing is important and why it’s worth investing in?

Dabbar: I think the world is increasingly becoming knowledgeable about the importance of computing power and of data. Obviously there are broad discussions, and broad use by the community, of all sorts of information technology that just a decade ago was not even available. So I think it’s much easier for people to understand the importance of this than it was not so many years ago. The challenge is that many people don’t realize that the National Labs, and the research we do, even exist; that’s actually one of the big challenges that I have. It’s not [around the value of] big data and information technology and artificial intelligence, because the average person actually gets a lot of the basics of those nowadays. But what about the big facilities? What are the big things being worked on? Why is it important to have those? I think the knowledge of that is actually not good at all. So one of the things that Secretary Rick Perry and myself and the rest of the leadership team have been trying very hard to do is to increase communication around the National Lab complex capabilities and what we do, such as Sierra, such as Summit, such as high performance computing, and what we do with those capabilities within the broader user facility complex for science. We will try to make a little dent in that while we are in leadership in this position.
