NSF Forges Further Beyond FLOPs

By Nicole Hemsoth

May 22, 2013

The NSF recently issued a high performance system solicitation aimed at broadening its range of capabilities and providing a more “inclusive computing environment” for science and engineering. While the solicitation is now closed to new submissions, it has opened the door to a few questions.

According to the agency, some of the new problem areas it wants to address involve applications “that are extremely data intensive and may not be dominated by floating point operation speed. While a number of the earlier acquisitions have addressed a subset of these issues, the current solicitation emphasizes this even further.”

With NSF-funded systems like Blue Waters and Stampede up and running, the agency says the scientific community has expressed other needs, particularly as they relate to solving data-intensive challenges. This is not to say the NSF has turned a blind eye to highest-end systems, but the solicitation makes little mention of the requirements that drove earlier awards such as Stampede.

In other words: we gave you your FLOPs already, folks. It’s time for something new.

Among the elements the NSF has deemed worthy of funding, proposed resources should:

  • Complement existing XD capabilities with new types of computational resources attuned to less traditional computational science communities;
  • Incorporate innovative and reliable services within the HPC environment to deal with complex and dynamic workflows that contribute significantly to the advancement of science and are difficult to achieve within XD;
  • Facilitate transition from local to national environments via the use of virtual machines;
  • Introduce highly usable and cost-efficient cloud computing capabilities into XD to meet national-scale requirements for new modes of computationally intensive scientific research;
  • Expand the range of data-intensive and/or computationally challenging science and engineering applications that can be tackled with current XD resources;
  • Provide reliable approaches to scientific communities needing a high-throughput capability;
  • Provide a useful interactive environment for users needing to develop and debug codes using hundreds of cores or for scientific workflows/gateways requiring highly responsive computation;
  • Deal effectively with scientific applications needing a few hundred to a few thousand cores (a minimal sketch of this mid-scale pattern follows the list);
  • Efficiently provide a high degree of stability and usability by January 2015.
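Of the goals above, the mid-scale item is the most concrete: applications that need a few hundred to a few thousand cores rather than 100,000. As a minimal sketch of what a code in that class looks like, here is a domain-decomposed MPI program in C; the domain size and per-cell work are hypothetical stand-ins, and nothing here is drawn from the solicitation itself.

    /* Minimal sketch of a mid-scale MPI application: the same binary
       runs unchanged on a few hundred or a few thousand ranks, with
       each rank taking an even slice of a (hypothetical) global domain. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Divide the global domain as evenly as possible across ranks. */
        const long global_cells = 1000000L;   /* illustrative problem size */
        long local_cells = global_cells / size
                         + (rank < global_cells % size ? 1 : 0);

        double local_sum = 0.0;
        for (long i = 0; i < local_cells; i++)
            local_sum += 1.0;                 /* stand-in for real per-cell work */

        /* Combine the partial results from every rank on rank 0. */
        double global_sum = 0.0;
        MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM,
                   0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("processed %.0f cells across %d ranks\n", global_sum, size);

        MPI_Finalize();
        return 0;
    }

Launched with, say, mpirun -n 512 ./app, the code covers exactly the range the solicitation describes; scaling from hundreds to thousands of cores changes only the rank count, not the program.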

To better understand how these “big data” driven needs intersect with other large-scale computing initiatives, including exascale ambitions, we talked with Barry Schneider and Irene Qualters, both program directors in the division of advanced cyberinfrastructure in the computer and information sciences directorate.

The two dealt directly with the acquisitions of Blue Waters, Stampede, Kraken, Gordon, Blacklight, and other research systems. They also work within the XSEDE program to ensure that researchers have access to required computational resources. Qualters says that the NSF has focused on large-scale, high performance systems in the form of Blue Waters and Stampede, “and those are highly usable and fit what people need computationally.” Still, she says, the NSF is not just trying to expand the number of services; it is trying to broaden their scope.

Qualters and Schneider agree that when it comes to pushing funding toward exascale systems or data-intensive challenges, there is no either/or distinction, since both areas feed different streams of research. However, the NSF has gathered details from user communities about what they require, and the broadening array of new scientific instruments (everything from new telescopes to gene sequencers) has yielded a definite call to deal with ever-larger, more diverse, and more complex data from across several fields.

“We have been interested in data-intensive for quite some time and that focus is there, but we’re also recognizing that new communities are having different computational needs based on the types of research they’re involved with—this could mean data-intensive tools or just an expansion of visualization capability, for instance. We want to make sure that they have the cyberinfrastructure to do so and do it at a national level,” said Qualters.

Schneider explained that it would send the wrong message if the solicitation came across as a purely data-intensive call, since his team is looking for a balanced set of resources for XSEDE projects and researchers who have stretched the current capabilities of their university machines. However, he said that research groups need access to other resources, including everything from virtual machines to new hardware and software tools that let them make use of broadening data types and volumes.

“Not everyone needs 100,000 cores,” Schneider said. Most of the researchers they work with via XSEDE and the systems that form its backbone are simply looking for the most efficient way to get their science on the table. He noted that for now the focus is on these new hardware and software tools to support the new needs, but there is nothing preventing them from switching course in two years and funding another system to trump Blue Waters or Stampede. It’s all about what the community tells them is needed, he stressed.

To arrive at the priorities included in its goals for data, software, campus bridging, security, and education within the larger picture of computational and data-driven science and engineering, the NSF gathers input from its own internal experts and six task force committees dedicated to specific areas. Last February, the NSF released its vision for the next generation of advanced computing infrastructure for science and engineering, the goal of which was to ensure that research communities have access to the computational resources needed to move forward.

This set of principles guides the agency’s funding course for the current cycle, and while exascale projects are nowhere in sight, some unique technologies are finally getting a chance to shine. As for exascale in general, Qualters says that for the NSF it is not a matter of if, but a question of how and when. She emphasized that there is a big difference between what her agency sees as exascale and what the benchmarks show, but reiterated that funding decisions won’t be a question of choosing exascale over “big data” science; they will be based on what the research community needs at the time and what is practical for real-world applications.
