NERSC Accepts Edison Supercomputer

January 30, 2014

Jan. 30 — The National Energy Research Scientific Computing (NERSC) Center recently accepted “Edison,” a new flagship supercomputer designed for scientific productivity.

Named in honor of American inventor Thomas Alva Edison, the Cray XC30 will be dedicated in a ceremony held at the Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab) on Feb. 5, and scientists are already reporting results.

About 5,000 researchers working on 700 projects and running 600 different codes compute at NERSC, which is operated by Berkeley Lab. They produce an average of 1,700 peer-reviewed publications every year, making NERSC the most productive scientific computing center serving the Department of Energy’s Office of Science.

“We support a very broad range of science, from basic energy research to climate science, from biosciences to discovering new materials, exploring high energy physics and even uncovering the very origins of the universe,” said NERSC Director Sudip Dosanjh.

Edison can execute nearly 2.4 quadrillion floating-point operations per second (petaflop/s) at peak theoretical speeds. While theoretical speeds are impressive, “NERSC’s longstanding approach is to evaluate proposed systems by how well they meet the needs of our diverse community of researchers, so we focus on sustained performance on real applications,” said NERSC Division Deputy for Operations Jeff Broughton, who led the Edison procurement team.
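For readers curious how a peak theoretical figure like that is derived, here is a back-of-the-envelope sketch: peak speed is essentially the node count times cores per node times clock rate times floating-point operations per core per cycle. The numbers below are illustrative assumptions chosen to land near 2.4 petaflop/s, not Edison’s published configuration.

```python
# Back-of-the-envelope peak-FLOPS estimate.
# All figures below are illustrative assumptions, not Edison's
# published specifications.

nodes = 5200            # assumed number of compute nodes
cores_per_node = 24     # assumed cores per node (e.g., two 12-core CPUs)
clock_hz = 2.4e9        # assumed clock rate (2.4 GHz)
flops_per_cycle = 8     # assumed double-precision FLOPs per core per cycle

peak_flops = nodes * cores_per_node * clock_hz * flops_per_cycle
print(f"Theoretical peak: {peak_flops / 1e15:.2f} petaflop/s")
# With these assumed numbers, roughly 2.40 petaflop/s.
```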

“For us, what’s really important is the scientific productivity of our users,” Dosanjh said. That’s why Edison was configured to handle two kinds of computing equally well: data analysis, and simulation and modeling.

Data Analysis Joins Simulation and Modeling

Traditionally, scientific supercomputers are configured to simulate and model complex phenomena, such as nanomaterials converting electricity into photons of light, climate changing over decades or centuries, or interstellar gases forming into stars and galaxies. Simulations require a lot of processors running in unison, but not necessarily a lot of memory for each processor.

Data analysis, such as genome sequencing or molecular screening programs that search for promising new materials or drugs, often involves high-throughput computing: running large numbers of loosely coupled simulations simultaneously. Such “ensemble computing” requires more memory per node and has typically been relegated to separate computer clusters. As instruments and experiments deliver more and more data, however, scientists need more computing power to crunch it, so smaller clusters no longer suffice.
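The distinction is easy to see in miniature. The sketch below is a minimal, hypothetical illustration of the high-throughput pattern: many independent tasks run side by side with no communication between them, and the only coupling is a final pass over the collected results. The task name and the placeholder scoring function are invented for illustration.

```python
# Minimal sketch of "ensemble" (high-throughput) computing: many
# loosely coupled, independent tasks run concurrently and their
# results are gathered afterward. Names and logic are hypothetical.
from concurrent.futures import ProcessPoolExecutor

def score_candidate(candidate_id: int) -> float:
    """Stand-in for one independent job, e.g. screening one molecule."""
    return (candidate_id * 37 % 101) / 101.0  # placeholder computation

if __name__ == "__main__":
    candidates = range(1000)  # an ensemble of independent inputs
    with ProcessPoolExecutor() as pool:
        scores = list(pool.map(score_candidate, candidates))
    # No task communicates with any other; the only coupling is the
    # final reduction over all results.
    print(f"Best score: {max(scores):.3f}")
```

A tightly coupled simulation, by contrast, advances a single large problem in lockstep across many processors, which is why it stresses the interconnect more than per-node memory.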

“Facilities throughout the Department of Energy are being inundated with data that researchers don’t have the ability to understand, process or analyze sufficiently,” said Dosanjh. Historically, NERSC was an exporter of data as scientists ran large-scale simulations and then moved that data to other sites. But with the growth of experimental data coming from other sites, NERSC is now a net importer, taking in a petabyte of data in fields such as biosciences, climate and high-energy physics each month.

Both types of computing rely heavily on moving data, said Dosanjh. “So Edison has been optimized for that: It has a really high-speed interconnect, it has lots of memory bandwidth, lots of memory per node, and it has very high input/output speeds to the file system and disk system.”

“If you have a computing resource like Edison, one with the flexibility to run different classes of problems, then you can apply the full capacity of your system to the problem at hand, whether that be high-throughput genome sequencing or highly parallel climate simulations,” said Broughton.

Less Time Tweaking Codes, More Time Doing Science

Because Edison does not employ accelerators, such as graphics processing units (GPUs), scientists have been able to move their codes from NERSC’s previous flagship system (a Cray XE6 named for computer scientist Grace Hopper) to Edison with little or no change, another consideration meant to keep scientists doing science instead of rewriting code.

“We were able to open Edison to all our users shortly after installation for testing, and the system was immediately full,” said Broughton. By the time Edison was accepted and placed into production, scientists had logged millions of processor hours of research into areas as varied as carbon sequestration, nanomaterials, cosmology, and combustion.

And while researchers may not see or appreciate Edison’s advances in energy efficiency, those advances will affect their ability to do science. “In coming years, performance will be more limited by power than anything else, so energy efficiency is critical,” said Dosanjh.

Free Cooling

In preparation for its 2015 move into a custom-built data center (the Computational Research and Theory facility), Edison is the first supercomputer at NERSC to rely solely on outside air for cooling, a technique known as “free cooling.” Edison is cooled without mechanical chillers; instead, water is circulated through outdoor cooling towers and back into the system’s internal radiators, which cool the air rather than heat it. Fans located between each pair of cabinets in a row pull air in at one end, circulate it through a radiator and over the hot components, and push it on to the next set of cabinets before it exits at the row’s end. This side-to-side airflow, or transverse cooling, is more energy efficient than the typical front-to-back flow of most systems.

Edison will be dedicated on February 5 as part of the annual NERSC Users Group meeting being held February 3-6 at Berkeley Lab. “As we celebrate NERSC’s 40th anniversary, it’s quite fitting we start the year by dedicating Edison, a system that embodies our guiding principle over the last four decades: computing in the service of science,” said NERSC Director Dosanjh.

Deployment of Edison was made possible in part by funding from DOE’s Office of Science and the DARPA High Productivity Computing Systems program.

DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.

About NERSC and Berkeley Lab

The National Energy Research Scientific Computing Center (NERSC) is the primary high-performance computing facility for scientific research sponsored by the U.S. Department of Energy’s Office of Science. Located at Lawrence Berkeley National Laboratory, the NERSC Center serves more than 4,000 scientists at national laboratories and universities researching a wide range of problems in combustion, climate modeling, fusion energy, materials science, physics, chemistry, computational biology, and other disciplines. Berkeley Lab is a U.S. Department of Energy national laboratory located in Berkeley, California. It conducts unclassified scientific research and is managed by the University of California for the U.S. DOE Office of Science.


Source: NERSC
