NSCI Discussion at HPC User Forum Shows Hunger for Details

By John Russell

April 20, 2016

Is the National Strategic Computing Initiative in trouble? Launched by presidential executive order last July, the initiative has yet to yield public details of the draft implementation plan, which was delivered to the NSCI Executive Council back in October. Last week, on the final day of the HPC User Forum in Tucson, Saul Gonzalez Martirena (OSTP, NSF) gave an NSCI update talk that contained essentially no new information beyond what was presented at SC15.

As he was heading out, and aware that an open discussion on NSCI was scheduled for later in the day, Gonzalez Martirena asked one of the meeting’s organizers (IDC research vice president Bob Sorensen) to take good notes, adding, “If there is any possibility, send them to me by tomorrow. We are really looking for good ideas.” It’s too bad he missed the discussion.

Starved for details, and perhaps growing tone-deaf to NSCI aspirations, attendees made the late-afternoon discussion wide-ranging and concern-ridden relative to NSCI’s reality. The first member of the gathered group to venture an opinion, a very senior member of the HPC community, said simply, “It’s all [BS].” What followed was candid conversation among the forty or so attendees who stuck around for the final session of the forum.

NSCI, of course, is a grand plan to ensure the U.S. maintains a leadership position in high performance computing. Its five objectives, listed below, represent frank recognition by the U.S. government, or at least the current administration, that HPC leadership is vital for advancing science, ensuring national defense, and maintaining national economic competitiveness. Now, nearly nine months after its start, little is known about the plan to operationalize the vision.

The five NSCI objectives, excerpted from the original executive order, are:

  1. “Accelerating delivery of a capable exascale computing system that integrates hardware and software capability to deliver approximately 100 times the performance of current 10 petaflop systems across a range of applications representing government needs.
  2. Increasing coherence between the technology base used for modeling and simulation and that used for data analytic computing.
  3. Establishing, over the next 15 years, a viable path forward for future HPC systems even after the limits of current semiconductor technology are reached (the “post-Moore’s Law era”).
  4. Increasing the capacity and capability of an enduring national HPC ecosystem by employing a holistic approach that addresses relevant factors such as networking technology, workflow, downward scaling, foundational algorithms and software, accessibility, and workforce development.
  5. Developing an enduring public-private collaboration to ensure that the benefits of the research and development advances are, to the greatest extent, shared between the United States Government and industrial and academic sectors.”
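A quick note on the numbers behind objective 1 (an illustration, not part of the order; the “approximately 100 times” figure is the executive order’s own):

$100 \times 10\ \text{petaflops} = 1{,}000\ \text{petaflops} = 10^{18}\ \text{flops} = 1\ \text{exaflops}$

That is the scale that gives “exascale” its name.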

Sorensen was a good choice for moderator. Before joining IDC he spent 33 years in the federal government as a science and technology analyst covering HPC for DoD, Treasury, and the White House. He was involved in the early formation of NSCI and remains an advocate; that said, he has since written that more must be done to ensure success (see NSCI Update: More Work Needed on Budgetary Details and Industry Outreach).

NSCI discussion points

Views were decidedly mixed in the audience, which was drawn mostly from academia, national labs, and industry. Sorensen kicked things off with a list of discussion points (see above), but the discussion wandered extensively.

What did seem clear is that, lacking a concrete NSCI implementation plan to react to, members of the audience defaulted to ad hoc concerns and attitudes, sometimes predictably characteristic of the segment of the HPC community to which they belonged, but often also representative of the diversity of opinion within segments. Two of the more contentious issues were the picking of winners and losers by government and the challenge of creating an enduring national HPC ecosystem through public-private efforts.

Uncle Sam Not Good At Picking Winners
Every time an RFP goes out, there’s a winner and a loser, said Sorensen. One attendee recalled the government-funded effort to develop a national aerodynamic simulator in the 1980s as something less than successful. “They funded Control Data Corp. and Burroughs Corporation. Somebody asked Cray how come you’re not going after any of that. Seymour [Cray] said, ‘When I build my machine they will decide that’s the machine they really want.’ And he built the Cray 2 and Burroughs dropped out of the business. [In the end] CDC supplied a dead machine and Cray won the business. The point is for years the government has tried to pick winners and losers and hasn’t been successful.”

Bob Sorensen, IDC

Sorensen, in his introductory remarks, noted further that there is an inherent “dichotomy” in the program: “The folks who are doing this, DOE, NNSA, want the best HPC systems in the world [because] leadership here means greater potential for greater national security. [While] at the same time we want a vibrant HPC infrastructure that builds the best equipment in the world and sells it to anyone that has the money.”

Indeed, mention was made of the recent report that China, denied Intel’s latest chips roughly one year ago by the U.S. Commerce Department, would soon bring online two 100-petaflops machines built with Chinese components and was planning to benchmark one in time for the next Top500 list (June). One comment: “The Chinese are giving a gift to this program. Imagine what Trump is going to say. We are going to be portrayed as being way behind the Chinese and get out the checkbook because we have to catch up.”

It was hardly smooth sailing for the sprawling NSCI blueprint. Still, it would be inaccurate to say the mood was anti-NSCI; rather, so much uncertainty remains that there was little to focus on. The devil is in the details, said one attendee. Funding, HPC training, software issues (modernization and ISV interest), big box envy, the politically charged environment, clarity of NSCI goals, and program metrics were all part of the discussion mix.

Acknowledged but not discussed at length was the fact that NSCI might not survive the charged political atmosphere of an election year and might not be supported by the next administration. During Q&A following his earlier presentation, Gonzalez Martirena was cautiously optimistic that bipartisan support around national security and national competitiveness issues was possible.

Broadly, the difficulties of democratizing HPC dominated concerns. Buying and building supercomputers for national and academic purposes is a more traveled road where best practices (and stumbling blocks) are better known.

Here is a brief sampling of a few issues raised:

The “ISV Problem”
In a rare show of consensus, many thought enticing ISVs to embrace HPC would be a major hurdle. Indeed, software challenges on several fronts were discussed, from modernizing code to run on exascale machines to simply making HPC software more widely available to industry.

The general opinion was that unless ISVs see larger-scale HPC as a lucrative market, they won’t have the incentive to scale their software. Consequently, companies completely dependent on commercial applications would find their movement into the HPC world limited by software availability and cost.

Moreover, NSCI’s seemingly intense focus on hardware could become problematic. Throughput, at least for industrial HPC, is far more important than impressive machine specs. Perhaps, suggested one attendee, what’s needed is an X Prize of sorts to incentivize ISVs to go after the ‘world’s hardest’ meaningful problems.

The Big Box Syndrome
A fair amount of discussion was given to DoE and NSCI’s apparent focus on producing exascale machines. Talking about the early NSCI planning, Sorensen noted, “We talked long and hard about using exascale. It really came down to: we don’t need an exascale machine, we need exascale technologies that could be sitting on someone’s desktop. I remember the day the NSCI came out, the headline in Washington was ‘New Supercomputer.’ It’s like, no, don’t you understand. We are not talking about the top ten systems anymore; we need to at least deal with 100,000 technical servers out there.”

Certainly academia, national labs, and DoE do care about big machines. One person said these programs always make him wonder if there’s a hidden agenda by “people who just always want to get the fastest system, and NSCI is sort of being steered in that direction.”

HPC Workforce
The HPC skills shortage is a widely acknowledged problem. Young talent races to the start-up world, not HPC. Several approaches were bandied about, ranging from better use of formal training at the national labs and DoE to creation of new outreach programs. Even so, one attendee wondered whether a small company with limited engineering talent would be able or willing to free up those resources to get needed training.

Getting the word out about existing training resources is an issue, said one attendee, who noted DoE doesn’t have a marketing budget per se to alert companies that training is available at its centers. “It’s not like a company that has a marketing budget, like Intel and IBM, that’s going out and telling people all the time about this. That’s probably a barrier to getting the word out about what resources are available.”

What Should Success Look Like?
For all its grand goals, the gathering wondered what NSCI success should look like, particularly whether the idea was to achieve more than incremental gains in economics or science, or simply to build an exascale computer.

Merle Giles, director of Private Sector Programs and Economic Impact at the National Center for Supercomputing Applications and co-editor of the text Industrial Applications of High-Performance Computing: Best Global Practices, said, “Look at the game-changing events that affected the economy in this country. They were all order-of-magnitude, 10X to 100X changes. It was railroad, [etc]. We don’t need to extract that last ten percent of performance of the machine. We need 10X to 100X, and we can be really sloppy and still be really good. The 10X to 100X is not just the technology; it’s not exascale that will change the entire nation. It is greater access [to HPC resources] for those who can take advantage of that access.”

In that vein, another attendee added that NSCI is a projection of what was done in the past. What’s needed instead is to fundamentally think differently, saying, “Probably the biggest advantage comes from miniaturization of systems, not the biggest systems.”

One missing element in the entire program, agreed Gonzalez Martirena after his presentation, is more extensive interaction with industry. He showed a slide of responses to the RFI issued by NSF last fall indicating roughly 200 responses from academia and national labs and just eight from industry. Perhaps industry should form a group of representatives that works with NSCI, suggested Gonzalez Martirena to HPCwire.

Sorensen indicated IDC would send along the group’s comments to NSCI and Gonzalez Martirena, who recently moved back from OSTP to his position as program director of the division of physics at NSF. It seemed clear from the breadth of the discussion that the lack of a definite NSCI plan has created something of a vacuum for the HPC community, along with impatience for more detail.

One attendee offered, “How many people are left in the room, 40? I’d be willing to bet there are 40 different visions about what [NSCI] success looks like. We’re having lots of conversation but I think we are down in the weeds.”

NSCI Resources
NSCI Update: More Work Needed on Budgetary Details and Industry Outreach; http://www.hpcwire.com/2016/03/10/nsci-update-more-work-needed-on-budgetary-details-and-industry-outreach/

Speak Up: NSF Seeks Science Drivers for Exascale and the NSCI; http://www.hpcwire.com/2015/10/06/speak-up-nsf-seeks-science-drivers-for-exascale-and-the-nsci/

HPC User Forum Presses NSCI Panelists on Plans; http://www.hpcwire.com/off-the-wire/sc15-releases-latest-invited-talk-spotlight-randal-bryant-and-tim-polk/

Podcast: Industry Leaders on the Promise & Peril of NSCI; http://www.hpcwire.com/2015/08/27/podcast-industry-leaders-on-the-promise-peril-of-nsci/

New National HPC Strategy Is Bold, Important and More Daunting than US Moonshot; http://www.hpcwire.com/2015/08/06/new-national-hpc-strategy-is-bold-important-and-more-daunting-than-us-moonshot/

White House Launches National HPC Strategy; http://www.hpcwire.com/2015/07/30/white-house-launches-national-hpc-strategy/

President Obama’s Executive Order ‘Creating a National Strategic Computing Initiative’; http://www.hpcwire.com/off-the-wire/creating-a-national-strategic-computing-initiative/
