NSF Forges Further Beyond FLOPs

By Nicole Hemsoth

May 22, 2013

The NSF recently sent out a high performance system solicitation to broaden their range of capabilities and provide a more “inclusive computing environment” for science and engineering. While the solicitation is now closed to new submissions, it has opened the door to a few questions.

According to the agency, some of the new problem areas they want to address involve applications “that are extremely data intensive and may not be dominated by floating point operation speed. While a number of the earlier acquisitions have addressed a subset of these issues, the current solicitation emphasizes this even further.”

With NSF-funded systems like Blue Waters and Stampede up and running, the agency says that there are other needs the scientific community has expressed, particularly as they relate to solving data-intensive challenges. Although this is not to say that they’ve turned a blind eye to hyper-performance systems, the solicitation makes little mention of what similar solicitations yielded when the agency decided on systems like Stampede, for instance.

In other words, we gave you your FLOPs already, folks. It’s time for something new.

Among the elements the NSF has deemed worthy of funding are systems and services that:

  • Complement existing XD capabilities with new types of computational resources attuned to less traditional computational science communities;
  • Incorporate innovative and reliable services within the HPC environment to deal with complex and dynamic workflows that contribute significantly to the advancement of science and are difficult to achieve within XD;
  • Facilitate transition from local to national environments via the use of virtual machines;
  • Introduce highly useable and cost efficient cloud computing capabilities into XD to meet national scale requirements for new modes of computationally intensive scientific research; 
  • Expand the range of data intensive and/or computationally-challenging science and engineering applications that can be tackled with current XD resources;
  • Provide reliable approaches to scientific communities needing a high-throughput capability;
  • Provide a useful interactive environment for users needing to develop and debug codes using hundreds of cores or for scientific workflows/gateways requiring highly responsive computation;
  • Deal effectively with scientific applications needing a few hundred to a few thousand cores (a brief illustrative sketch of this scale follows the list);
  • Efficiently provide a high degree of stability and usability by January 2015.
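To make the last two items concrete, the scale in question, a few hundred to a few thousand cores rather than 100,000, maps naturally onto a plain MPI program. The sketch below is purely illustrative (it is ours, not code from the NSF solicitation or XSEDE): each rank computes a partial sum over a strided slice of a series, and a single reduction combines the results on rank 0.

    /* Illustrative only -- a minimal MPI job of the modest scale the
     * solicitation describes, not code from the NSF or XSEDE. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Each rank sums a strided slice of the first N harmonic terms. */
        const long N = 1000000;
        double local = 0.0;
        for (long i = rank; i < N; i += size)
            local += 1.0 / (double)(i + 1);

        /* Combine the partial sums on rank 0. */
        double total = 0.0;
        MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("ranks=%d sum=%f\n", size, total);

        MPI_Finalize();
        return 0;
    }

Compiled with mpicc and launched with, say, mpirun -np 512, the same binary occupies a few hundred cores; running many such jobs at once is essentially what a high-throughput capability means.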

To better understand how these “big data” driven needs intersect with other large-scale computing initiatives, including exascale ambitions, we talked with Barry Schneider and Irene Qualters, both program directors in the Division of Advanced Cyberinfrastructure within NSF’s Computer and Information Science and Engineering directorate.

The two dealt directly with the acquisitions of Blue Waters, Stampede, Kraken, Gordon, Blacklight, and other research systems. They also work within the XSEDE program to ensure that researchers have access to required computational resources. Qualters says that the NSF has focused on large-scale, high performance systems in the form of Blue Waters and Stampede, “and those are highly usable and fit what people need computationally.” Still, she says, the NSF is not just trying to expand the number of services; they’re trying to broaden their scope.

Qualters and Schneider agree that when it comes to pushing funding toward exascale systems or data-intensive challenges, there is no either/or distinction, since both areas feed different streams of research. However, the NSF has gathered details from user communities about what they require, and the broadening array of new scientific instruments (everything from new telescopes to gene sequencers) has yielded a definite call to deal with ever-larger, more diverse, and more complex data from across several fields.

“We have been interested in data-intensive computing for quite some time and that focus is there, but we’re also recognizing that new communities have different computational needs based on the types of research they’re involved with. This could mean data-intensive tools or just an expansion of visualization capability, for instance. We want to make sure that they have the cyberinfrastructure to do so, and do it at a national level,” said Qualters.

Schneider explained that it would send the wrong message if this solicitation came across as a purely data-intensive call, since his team is looking for a balanced set of resources for XSEDE projects and for researchers who have stretched the current capabilities of their university machines. However, he said that research groups need access to other resources, everything from virtual machines to new hardware and software tools, to let them make use of broadening data types and volumes.

“Not everyone needs 100,000 cores,” Schneider said. Most of the researchers they work with via XSEDE and the systems that form its backbone are simply looking for the most efficient way to get their science on the table. He noted that for now the focus is on these new hardware and software tools to support the new needs, but there is nothing preventing them from switching course in two years and funding another system to trump Blue Waters or Stampede. It’s all about what the community tells them is needed, he stressed.

To arrive at the priorities included in their goals for data, software, campus bridging, security, and education within the larger landscape of computational and data-driven science and engineering, the NSF gathers input from their own internal experts and from six task force committees dedicated to specific areas. Last February, the NSF released their vision for the next generation of advanced computing infrastructure for science and engineering, with the goal of ensuring that research communities have access to the computational resources they need to move forward.

This set of principles guides their funding course for the current cycle, and while exascale projects are nowhere in sight, some unique technologies are finally getting a chance to shine. As for exascale in general, Qualters says that for the NSF it’s not a matter of if, but a question of how and when. She emphasized that there is a big difference between what her agency sees as exascale and what the benchmarks show, but reiterated that funding decisions won’t be a question of choosing exascale over “big data” science; they will be based on what the research community needs at the time and what is practical for real-world applications.
