XSEDE Panel Highlights Diversity of NSF Computing Resources

By Trish Barker, Assistant Director for Public Affairs at NCSA

July 31, 2015

A plenary panel at the XSEDE15 conference, which took place this week in St. Louis, Mo., highlighted the broad spectrum of computing resources provided by the National Science Foundation, including several new and testbed projects and an effort to help more people use cyberinfrastructure to advance their research.

“I don’t think there has been a time previously when NSF funded the diversity of systems that are available today,” said panelist Craig Stewart, the associate dean of research technologies at Indiana University.

Irene Qualters, leader of the Division of Advanced Cyberinfrastructure within NSF’s Computer & Information Science & Engineering Directorate, kicked off the panel with an overview of how “the conduct and the practice of research are changing,” and how this is driving changes in cyberinfrastructure. In particular, she called out the rapid growth of data from diverse sources, including instruments, sensors, and simulation; the increasing complexity of research problems, which requires multidisciplinary teams and multiscale modeling; wider global investment in research, which provides more opportunities for collaboration; the growing need for a technically skilled workforce; and the need for greater societal responsibility and engagement.

NSF has responded to these and other drivers by fielding a diverse array of resources, each of which was spotlighted by one of the panelists:

  • Comet, a computing resource at the San Diego Supercomputer Center (SDSC) focused on the small and medium jobs that represent the “long tail of science.” Comet entered production in May 2015.
  • Jetstream, a cloud system with hardware at Indiana University and the Texas Advanced Computing Center (TACC) that is slated to go into production in early 2016.
  • Wrangler, a data-intensive system that includes hardware at TACC and Indiana University.
  • Bridges, a data-centric system at the Pittsburgh Supercomputing Center (PSC), also slated to go into production in early 2016.
  • Chameleon and CloudLab, testbeds for research on cloud computing.

“I think all of the systems we’re talking about this morning did some interesting and deep analysis of usage patterns” to determine what researchers needed, said Stewart.

For example, SDSC Director Mike Norman said that data from 2012 showed that 99 percent of jobs run on XSEDE-allocated resources used fewer than 2,000 cores, and that 30 percent used just a single core. Based on that information, SDSC decided to focus Comet on those small to medium jobs, and even to under-allocate the resource so people can get quicker access. SDSC aims to serve 10,000 users per year on Comet, a target Norman thinks will be easily achieved, in part by embracing Science Gateways.
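The kind of usage-pattern analysis Norman described can be sketched in a few lines. The script below is a minimal illustration, assuming a hypothetical CSV export of job accounting records with a "cores" column per job; it is not SDSC's actual analysis pipeline.

```python
import csv

# Illustrative sketch only: "xsede_jobs_2012.csv" and its "cores" column
# are hypothetical stand-ins for a real accounting export.
def job_size_profile(path, threshold=2000):
    total = small = single = 0
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            cores = int(row["cores"])
            total += 1
            if cores < threshold:
                small += 1
            if cores == 1:
                single += 1
    return (small / total, single / total) if total else (0.0, 0.0)

if __name__ == "__main__":
    under_2k, single_core = job_size_profile("xsede_jobs_2012.csv")
    print(f"{under_2k:.0%} of jobs used fewer than 2,000 cores")
    print(f"{single_core:.0%} used a single core")
```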

Jetstream also targets the long tail of science, Stewart explained. This cloud system is designed to provide interactive and on-demand computing capabilities via a suite of virtual machines. Users can customize, save, and share VMs—something that Stewart pointed out will make it easier to repeat and reproduce research. And like Comet, Jetstream embraces Science Gateways, working with the iPlant and Galaxy gateways.
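Jetstream is built on OpenStack, and the customize-save-share workflow Stewart described maps onto standard cloud snapshot operations. A minimal sketch of that pattern using the openstacksdk Python client might look like the following; the cloud name, image, flavor, and network IDs are all placeholders, and this illustrates the general idea rather than Jetstream's own user interface.

```python
import openstack

# Credentials come from clouds.yaml or the environment; the cloud
# name "jetstream" is a placeholder for this sketch.
conn = openstack.connect(cloud="jetstream")

# Launch a VM from a base image (all IDs below are hypothetical).
server = conn.compute.create_server(
    name="my-analysis-vm",
    image_id="BASE_IMAGE_ID",
    flavor_id="FLAVOR_ID",
    networks=[{"uuid": "NETWORK_ID"}],
)
server = conn.compute.wait_for_server(server)

# Snapshot the customized VM so it can be saved, relaunched, and shared.
image = conn.compute.create_server_image(server, name="my-analysis-vm-v1")
```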

A biologist by training, Stewart said that he recently tested the Jetstream interface to see if he could easily “do a little science.”

“It took me about 10 minutes to log in and do something on iPlant and about two hours to do the same thing using Amazon, so the interface really works,” he said.

Both Wrangler and Bridges focus on data needs. Niall Gaffney, director of Data Intensive Computing at TACC, pointed out that traditional high-performance computing (HPC) systems and ways of working are often mismatched with the needs of data-intensive research. “Databases are not jobs,” he said. “Scratch is not a storage solution. Hadoop is not always HPC file system-friendly.”

Wrangler is intended to handle big data, lots of small data, structured and unstructured data, and both sequential and random I/O. It must also support a large number of applications and interfaces, including Hadoop, Spark, R, GIS, and others.
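For a sense of the kind of data-parallel workload such a system is meant to host, here is a minimal, self-contained PySpark sketch; the input path is hypothetical, and nothing here is Wrangler-specific.

```python
from pyspark.sql import SparkSession

# Illustrative Spark job: count word frequencies across a text corpus.
# The HDFS path is a hypothetical example.
spark = SparkSession.builder.appName("wordcount-sketch").getOrCreate()

lines = spark.read.text("hdfs:///data/corpus/*.txt").rdd.map(lambda r: r[0])
counts = (
    lines.flatMap(lambda line: line.split())
    .map(lambda word: (word, 1))
    .reduceByKey(lambda a, b: a + b)
)

# Print the ten most frequent words.
for word, n in counts.takeOrdered(10, key=lambda wc: -wc[1]):
    print(word, n)

spark.stop()
```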

According to Gaffney, one of Wrangler’s most innovative features is its highly flexible 600 TB flash storage system, which delivers 1 TB/s of bandwidth. “You can connect all 600 TB to one node if that’s what you need,” he said.

As an example of how Wrangler is enabling new data-centric work, Gaffney noted that OrthoMCL, a genomic workflow that previously would not complete on any TACC resource, now runs in under four hours on Wrangler.

Construction of the data-centric Bridges system will begin in October, according to Nick Nystrom, director of Strategic Applications at PSC. Echoing other panelists, Nystrom agreed that Science Gateways are critical, particularly for communities that are not currently using HPC resources. “Many users don’t want to become programmers,” he said. “Gateways let them avoid a lot of complexity that people associate with traditional supercomputing.”

Bridges will include a pilot project with Temple University, focused on streamlining interoperation and helping people easily move from using campus resources to using nationally available resources such as those provided through XSEDE. “When Temple’s resources are at peak, some jobs can be migrated transparently to Bridges. And conversely, when Bridges is saturated, we can move jobs to Temple,” Nystrom explained.
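The panel did not detail the migration mechanism itself. Purely as an illustration, a simple threshold-based policy for routing jobs between a campus cluster and a national resource could look like the sketch below; all names, numbers, and thresholds are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Cluster:
    name: str
    running: int   # cores currently in use
    capacity: int  # total cores

    @property
    def load(self) -> float:
        return self.running / self.capacity

def route_job(job_cores: int, campus: Cluster, national: Cluster,
              peak: float = 0.9) -> Cluster:
    """Illustrative policy: keep jobs local until the campus system is
    near peak, then overflow to the national resource, and vice versa."""
    if campus.load < peak or national.load >= peak:
        return campus
    return national

# Example: the campus cluster is saturated, so the job overflows.
temple = Cluster("Temple", running=9500, capacity=10000)
bridges = Cluster("Bridges", running=4000, capacity=10000)
print(route_job(64, temple, bridges).name)  # -> Bridges
```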

In addition to these four compute systems available through XSEDE, the panel also highlighted two cloud computing testbeds, Chameleon and CloudLab, which give researchers the opportunity to build and test their own clouds. “There’s still a lot of work to be done in making clouds better and imagining what clouds will look like in the future,” said CloudLab’s Robert Ricci, a research assistant professor at the University of Utah.

The final panelist, Jim Bottum of Clemson University, emphasized the need to provide training and assistance so that more people from more disciplines can take advantage of these diverse computing resources.

“There is a training and education gap between resources and researchers,” he said. “There’s a high barrier to entry without human assistance, and the barriers become higher as we bring in new communities.”

Bottum leads the NSF-supported ACI-REF project, which has begun addressing this gap by enlisting facilitators who can act as “research concierges” for people who are looking for computing resources (or who may not even know what resources are available or how those resources could advance their research) and by offering training. The goal is to grow the user base, both in the number of people and in the number of disciplines using cyberinfrastructure.

In just its first year, ACI-REF’s “concierges” have held more than 800 consultations with individual researchers, and more than 1,000 people have attended ACI-REF-led training sessions.
