New National HPC Strategy Is Bold, Important and More Daunting than US Moonshot

By William Gropp and Thomas Sterling

August 6, 2015

“In order to maximize the benefits of HPC for economic competitiveness and scientific discovery, the United States Government must create a coordinated Federal strategy in HPC research, development, and deployment.” With these words, the President of the United States established the National Strategic Computing Initiative (NSCI) through Executive Order to implement this whole-of-government strategy in collaboration with industry and academia. Not since the signing of legislation in 1991 for the HPCC initiative has the nation articulated such a bold and specific goal for the advancement of HPC and the benefits to be derived from it. While not overconstraining the details of how exascale computing is to be achieved and exploited, this executive order establishes a national framework, objectives, and federal agency responsibilities across the government to regain international leadership and address the daunting technical challenges in the employment of exascale technologies.

Among the strategic goals of this singular national endeavor is the unification of data-intensive and compute-intensive approaches to architecture, system software, and programming methodologies and tools in order to maximize the benefits of HPC for the US. Today, systems in these arenas are perceived as distinct in role and structure, but expert practitioners in these sub-domains recognize that each must rely, sometimes heavily, on the capabilities of the other. Computing in the science disciplines is often heavily engaged in manipulating data, both from external sources and of its own creation, requiring management of the entire storage hierarchy, including mass storage, while depending on I/O and system bandwidth for rapid data transport. Conversely, big data applications, including graph analytics, require massive concurrency of operation consistent with the design properties of HPC systems. And, of course, both are derived from the same enabling technologies. It is indicative of this complementarity that leadership vendors such as Cray Inc. and IBM Corp. are equally focused on both aspects of high-end computing. A national goal of bringing the two domains into a single, mutually supportive computing fabric will therefore accelerate the development of sophisticated computational products that integrate both modalities, and the research and development projected through this national enterprise will facilitate both.

The scale of computing mandated by this Presidential order, for which cooperative research across the country is to be energized, is “exascale,” which is neither limited to a single parameter like FLOPS nor to a single operating point like 1 exaflops. Exascale is as much about data storage capacity and transport as it is about arithmetic capability. And while 1 exaflops of sustained performance on some select workload will serve as a demonstrable milestone, it will be only one of many across a broad performance regime that may span orders of magnitude in capability.

Further, the charter for the NSCI is not about creating a stunt machine for national stature – quite the opposite. It is about the deployment and application of systems delivering 1 exaflops or more of sustained performance on real-world computational challenges of importance to the country and its society. This is to be accomplished through cohesive multi-agency collaboration as well as public-private partnerships, over a sustained period of effort likely spanning well over a decade. Rather than simply targeting the next arbitrary milestone, the NSCI directs the creation of a strategic vision and a realistic Federal investment strategy for the US “to sustain and enhance its scientific, technological, and economic leadership position in HPC research, development, and deployment” and to transition HPC research into the development of operational systems. This is about the real-world impact and opportunity of a future generation of supercomputing, to be derived through the innovation and skills of the nation’s diverse and best workforce in a unified effort.

Not merely motivating the necessary achievement of exascale computing, this initiative intends to accelerate the delivery of computing by two orders of magnitude with respect to contemporary system performance, even as it merges the two dominant classes of STEM and big data computing that today appear as separate forms. Of significance is the explicit recognition of the end of Moore’s Law and the need to establish a viable path toward improved capabilities beyond the asymptotic limits of anticipated semiconductor technology. Toward the effective use of such sustained computing capability and capacity, the order states as an objective the deployment of a national HPC ecosystem capable of providing easy access to US resources for economic competitiveness, scientific discovery, and national security. A key strategy to this end is to encourage collaboration between the public and private sectors in the sharing of research and development results.
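For a rough sense of what “two orders of magnitude” means in practice, the back-of-envelope sketch below (not part of the original article) compares a sustained exaflops target with leadership-class performance circa 2015. The 33.9-petaflops figure is an assumption drawn from publicly reported Linpack results of that era, and the application-efficiency factor is purely illustrative:

```python
# Back-of-envelope sketch of the performance gap the NSCI targets.
# Both constants below are assumptions for illustration, not figures from the article.

EXAFLOPS = 1e18                    # 10^18 floating point operations per second
SUSTAINED_2015 = 33.9e15           # ~33.9 petaflops: assumed top Linpack result circa 2015

ratio = EXAFLOPS / SUSTAINED_2015
print(f"Gap versus the best 2015 Linpack run: ~{ratio:.0f}x")   # roughly 30x

# Real applications typically sustain only a fraction of Linpack performance,
# so the gap on real-world workloads is plausibly 100x or more -- the
# "two orders of magnitude" the initiative describes.
assumed_application_efficiency = 0.1   # illustrative assumption, not a measurement
print(f"Gap versus typical sustained application performance: "
      f"~{ratio / assumed_application_efficiency:.0f}x")
```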

The plan of action is to leverage the expertise, missions, and historical capabilities of many federal agencies working in concert to bring the full strengths of the nation into alignment. The lead agencies designated are the Department of Energy, the Department of Defense, and the National Science Foundation, each with its specific roles and responsibilities. IARPA and NIST will provide important foundational research and development capabilities for future computing paradigms and advanced measurement methods. And a number of agencies, including NASA, FBI, NIH, DHS, and NOAA, will deploy such future systems for their mission-driven objectives. This overall process will be guided and overseen by an Executive Council comprising OSTP and OMB.

The NSCI framework leaves many facets of its implementation to a planning process to be led by the Executive Council and involving the contributing bodies of the participating Federal agencies. This provides flexibility in determining the details of carrying out the mandate even as it sets the goals and charter for the contributing entities. Heavy reliance on US computer industry and academic research is emphasized even as the direction is derived from the mission agencies. The budget and its profile over the many years are unspecified, but the requirements such a budget must enable are clearly represented. An overarching philosophy that permeates the NSCI is one of cooperation and collaboration, demanding a culture of mutual involvement and sharing across the community. This is a new challenge as well, and one that may prove as significant as the purely technical ones. Historical tensions at many levels will have to be overcome, but success at the national level may only be realized through a renaissance of mutually supportive engagement. Without this, US preeminence in exascale computing may prove unrealizable.

The NSCI charter is a balanced agenda of research and development of future technologies and methodologies as well as responsible deployment of systems and infrastructures to carry through the mission-critical obligations of the diverse participating agencies. The benefits sought are for the economy, society, and security of the nation and its citizens. It is a call to engagement. More than the next moonshot, it demands that the talents, creativity, resources, and commitments of the nation’s forces be brought to bear on the needs of the country in the next generations of computing, even as the ways of synthesizing practical experience and future innovation have yet to be prescribed.

This is a very exciting time, but one that will demand responsible consideration and conviction as the new map of the field of exascale computing is charted. This is not just the next American moonshot; it is more than a moonshot. As daunting as landing on the Moon was more than a generation ago, we understood the physics of the problem, where the Moon would be and when, and what success looked like. NSCI does not have such certainty. Perhaps most importantly, it must create a future technological context with long-lasting consequences and unending application. When Eugene Cernan stepped back onto the ladder of Apollo 17 in 1972, he left the last footprints to impress the surface of the Moon in more than four decades. We’ve never gone back. NSCI must build the bridge to the future of computing across which the US only goes forward. It will create the new physics, the new math (e.g., parallel algorithms), the new concepts of programming, architecture, and supportive software and infrastructures that will launch the US to the furthest frontiers of computing opportunity. But NSCI is only the first step. It is now the responsibility of this nation’s creators and users to come together in mutually supporting roles to advance this mandate in the service of the country and its people.

William Gropp and Thomas Sterling are co-editors of HPCwire’s Exascale Edition.
