The Network as a Scientific Instrument

By Nicole Hemsoth

June 10, 2013

In June 2012, Greg Bell was named the head of the U.S. Department of Energy’s Energy Sciences Network, better known as ESnet.

Funded by the DOE Office of Science, and managed and operated by the ESnet team at Lawrence Berkeley National Laboratory, ESnet provides reliable, high-performance networking capabilities to thousands of researchers tackling many of the world’s most pressing scientific and engineering problems: finding sources of clean energy, understanding climate change, developing advanced materials, and discovering the fundamental nature of our universe. ESnet interconnects scientists at more than 40 DOE sites with experimental and computing facilities in the U.S. and abroad, and with collaborators around the world. 

Invited to give the closing keynote address at the 2012 NORDUnet conference in Oslo, Norway, Bell delivered a presentation entitled “Network as Instrument: The View from Berkeley,” in which he argued that it’s time to start thinking about research networks as instruments for discovery, not just infrastructures for service delivery. The talk struck a chord with the audience, and Bell has since been invited to give versions of the presentation at conferences in the United States and Canada. Most recently, he gave the April 25 keynote address at the THINK Conference 2013 organized by ORION, the high-speed network linking 1.8 million researchers in Ontario, Canada.

A video of Bell giving a version of this presentation at a meeting on the genomics of energy and the environment, sponsored by the DOE Joint Genome Institute, can be found at the end of the article.

In this Q&A for HPCwire, Berkeley Lab Computing Sciences Communications Manager Jon Bashor talks with Bell about his vision, ESnet news and more.

Question: To start, can you give us a short description of ESnet?

Bell: We’re the Department of Energy’s high-performance networking facility, engineered and optimized for large-scale science. ESnet was created in 1986, making it one of the longest-operating research networks in the world.

ESnet interconnects the entire national lab system, including its supercomputer centers and dozens of large-scale user facilities. Thanks to ESnet, tens of thousands of scientists around the world can transfer data, access remote resources, and collaborate productively. 

ESnet is more than a network, though — it’s a collection of skilled and dedicated people, and a great place to work. Even though we’re located near Silicon Valley, we find it relatively easy to attract talent, because we do cutting-edge engineering in the service of scientific discovery. 

Q: In a sense, ESnet has always been at the forefront of handling Big Data, and now the rest of the community is catching up. How big is Big Data on ESnet?

Bell: Scientific data sets can be truly enormous, up to petabytes in size, and they’re growing rapidly. Sometimes we use the term “Extreme Data” to distinguish data at this scale from the Big Data you’ve read about in other contexts.

The advent of extreme-data science naturally has an impact on the amount of traffic ESnet carries. In fact, we’re growing about twice as fast as the commercial internet — our traffic doubles every 18 months. I don’t foresee this trend slowing down any time soon, because the underlying exponential drivers just keep cranking along.
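To make that doubling rate concrete, here is a minimal back-of-the-envelope sketch in Python. It is not ESnet's planning model; the 15 PB/month baseline for 2013 is a hypothetical figure chosen only to show how an 18-month doubling period compounds over a decade.

```python
# Illustrative sketch only -- not ESnet's planning model.
# Projects traffic volume forward under the 18-month doubling period Bell
# describes. The 2013 baseline of 15 PB/month is a hypothetical figure chosen
# purely to make the arithmetic concrete.

DOUBLING_PERIOD_YEARS = 1.5       # traffic doubles every 18 months
BASELINE_YEAR = 2013
BASELINE_PB_PER_MONTH = 15.0      # hypothetical starting volume

def projected_traffic(year):
    """Projected monthly traffic (PB) in a given year."""
    doublings = (year - BASELINE_YEAR) / DOUBLING_PERIOD_YEARS
    return BASELINE_PB_PER_MONTH * 2 ** doublings

for year in range(2013, 2024, 2):
    volume = projected_traffic(year)
    print(f"{year}: ~{volume:6.0f} PB/month ({volume / BASELINE_PB_PER_MONTH:.0f}x baseline)")
```

Under these assumptions, traffic grows roughly a hundredfold in a decade, which is the scale of growth an 18-month doubling period implies.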

Everyone reading this understands that high-performance computing changed the way large-scale science is conducted. It’s clear that data intensity will have an important impact as well. Modeling and simulation will continue to be critically important, but these tools will be supplemented by new techniques that can extract insight from complex data sets, exchanged and accessed over ultra-fast research networks.

Q: Although ESnet was created in 1986, its profile seems to have risen considerably in the past five years or so. What’s behind this?

Bell: More and more researchers are discovering that networks are critical to their science. Faster networks mean faster discovery. In addition, ESnet was lucky enough to receive significant stimulus funds a few years ago. That investment allowed us to build the world’s first 100 Gbps network at continental scale, in partnership with Internet2. We finished that project just in time: the previous-generation network was showing its age, and we were beginning to outgrow it. The new network gives us lots of headroom, and the ability to develop new architectures for maximizing scientific productivity.

We’ve also significantly ramped up our activity in the area of science engagement, partnership, and outreach. We understand that building the world’s fastest science network is not sufficient. We need to make it useful to scientists, and easy to use. That’s harder than it sounds, and we’re still developing models for helping scientists take full advantage of the “fast lanes” we’ve engineered for them. 

One final contributor is the success we’re having with applied research and innovation. This critical activity has been enhanced by our dedicated, national-scale 100 Gbps research testbed, which has supported dozens of researchers in the public and private sectors. We’re really trying to push the envelope on a range of topics — including software-defined networking, alternatives to TCP, and security models for 100 Gbps and faster networks. 

While we appreciate the recognition, it’s not really important unless it helps us advance our overall mission, which is to accelerate discovery for DOE’s Office of Science. And I do think it’s having that effect. Vendors are coming to us to ask about the unique challenges of supporting science, and our users are beginning to have much higher expectations of ESnet. These are both good developments. 

Q: You’ve also been busy. Your talk describing the network as an instrument of discovery has led to multiple invited presentations in North America and Europe — and most recently you gave a version of it as the April 25 keynote address at the THINK conference organized by ORION, the high-speed network in Ontario, Canada. What’s the gist of your presentation?

Bell: My overall goal is to inspire the audience to start thinking about networks differently. Modern research networks such as ESnet and Internet2 (and similar networks around the world) can do a lot more than most people imagine. I try to explain how certain collaborations have profited by incorporating advanced networks into their discovery processes. High-energy physics pioneered this model, and other fields are following. I make the argument that research networks such as ESnet have evolved into extensions of large-scale discovery instruments. For example, the discovery of the Higgs boson would not have been possible without a worldwide grid computing infrastructure, interconnected by high-speed research networks. Harvey Newman at Caltech pioneered this idea years ago, and the world has finally caught up.

In these presentations, I also give concrete advice about how people can improve networking in their own back yard. ESnet maintains a website devoted to this sort of simple, practical advice: fasterdata.es.net. If you want to start learning about how to use advanced networks more effectively, this is the place to start. It’s a very popular website, with more hits than www.es.net itself.

Q: Why do you think the message has resonated so well in the networking community?

Bell: It’s not surprising that networkers like to hear that their work is important! But there are a couple of deeper reasons as well. In recent years, networking had become a little dull. Thanks to the challenges of extreme data (and also to the advent of software-defined networking), it’s a really exciting place to be again. This new energy is very obvious at networking conferences, and in the academic research community. There are a lot of eyes on networking at the moment.

Q: Last question: What is ESnet focusing on for the coming year? For the next five years?

Bell: Over the next five years, our challenge will be to accommodate the remarkable growth curve in DOE science traffic while simultaneously making the network useful to many more researchers. It’s hard to believe, but even with our new 100 Gbps network and access to underlying optical capacity to carry multiple terabits per second, we will begin to feel a little cramped by 2018-20. At that point, we think we’ll need to light up a new nationwide optical fiber footprint. Whatever else we do, we’re always in a mode of acceleration and growth!
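The 2018-20 estimate is a direct consequence of the doubling period. As a rough check (with hypothetical utilization and ceiling figures, not ESnet's actual provisioning numbers), demand that doubles every 18 months takes about seven and a half years to grow 32-fold:

```python
# Illustrative sketch with hypothetical numbers -- not ESnet provisioning data.
# Under an 18-month doubling period, estimate when demand crosses a capacity ceiling.
import math

DOUBLING_PERIOD_YEARS = 1.5

def years_until_ceiling(current_demand, ceiling):
    """Years for demand to grow from current_demand to ceiling at the stated doubling rate."""
    return DOUBLING_PERIOD_YEARS * math.log2(ceiling / current_demand)

# Example: demand filling a 100 Gbps backbone today, measured against a
# hypothetical 3.2 Tbps optical ceiling.
print(2013 + years_until_ceiling(100, 3200))   # -> 2020.5
```

Under these hypothetical figures the crossover lands around 2020, consistent with the 2018-20 window Bell cites.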

In the coming year, we’ll focus on recruiting about eight new staff, most of them technical. When you consider that we now have about 40 employees, adding eight is significant. We take recruitment very seriously at ESnet. We look for people who are at the top of their game technically, but that’s not enough — they need to be flexible, great communicators, and exemplary colleagues. 
