CENIC Recognizes UCSC’s Hyades Supercomputer Cluster Connection to LBNL’s Computing Center

March 1, 2018

LA MIRADA, Calif. and BERKELEY, Calif., Mar. 1, 2018 — The project connecting the University of California Santa Cruz’s Hyades Supercomputer Cluster to the Lawrence Berkeley National Laboratory’s National Energy Research Scientific Computing Center is being awarded the CENIC 2018 Innovations in Networking Award for Research Applications.

Project leaders being recognized are members of the University of California Santa Cruz (UCSC) Astronomy and Astrophysics Department: Piero Madau, Joel Primack, J. Xavier Prochaska, Enrico Ramirez-Ruiz, and Stan Woosley; members of the UCSC Science DMZ team from UCSC Information Technology Services: Shawfeng Dong, George Peek, Joshua Sonstroem, Brad Smith, and Jim Warner; Peter Nugent at the Computational Research Division at Lawrence Berkeley National Laboratory; and John Graham with the Qualcomm Institute at the California Institute for Telecommunications and Information Technology (Calit2).

Astronomy and astrophysics are disciplines that require processing massive amounts of data. A single night’s survey of the sky with a state-of-the-art telescope can yield a tremendous amount of data, which is often analyzed in real time. Today, nearly all scientific research and data analysis involves remote collaboration. To work effectively and efficiently on multi-institutional projects, researchers depend heavily on high-speed access to large data sets and computing resources.

UCSC responded to this challenge by organizing a unique collaboration between scientists and technologists. Several units within UCSC’s Information Technology Services worked with UCSC researchers to win an NSF grant to establish a campus Science DMZ. A Science DMZ is an architecture developed by the US Department of Energy’s Energy Sciences Network (ESnet) to support faculty and research projects. The UCSC Science DMZ offers a 100 Gbps network connection between UCSC and participating institutions.

This effort dovetails with the Pacific Research Platform (PRP), now in development by researchers at UC San Diego and UC Berkeley in collaboration with CENIC. The PRP integrates Science DMZs on multiple campuses into a high-capacity regional “freeway system” that makes it possible to move large amounts of data between scientists’ labs and their collaborators’ sites, supercomputer centers, or data repositories without performance degradation.

Using PRP, UCSC connected its Hyades Supercomputer Cluster to the Lawrence Berkeley National Laboratory’s National Energy Research Scientific Computing Center (NERSC) over CalREN, CENIC’s 100 Gbps optical network, with 100 Gbps peering to ESnet providing connectivity to NERSC. Peter Nugent, an astronomer and cosmologist from the Computational Research Division of LBNL, was pivotal in this effort. This connection enables UCSC to carry out high-speed transfers of large data sets produced at NERSC, which supports the Dark Energy Spectroscopic Instrument (DESI) and the AGORA galaxy simulations, at speeds up to five times previous rates. These speeds have the potential to be increased to 20 times the previous rates in 2018.

“To accelerate the rate of scientific discovery, researchers must get the data they need, where they need it, and when they need it,” said UC San Diego computer science and engineering professor Larry Smarr, Principal Investigator of the PRP and director of Calit2. “This requires a high-performance data freeway system in which we use optical lightpaths to connect data generators and users of that data.”

Initially, when Hyades was connected at 10 Gbps to the campus production network, data transfer was slow and cumbersome. Then, in March 2017, PRP provided a FIONA box to facilitate data transfer between UCSC and NERSC. A FIONA box, or Flash I/O Network Appliance, is built from commodity parts. FIONA boxes are highly optimized for data-centric applications and act as “data super-capacitors,” increasing achievable transfer speeds to 40 Gbps or greater.
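To put those link speeds in perspective, a back-of-envelope calculation (a simplified sketch that assumes ideal, fully saturated links and ignores protocol overhead) shows how the jump from the original 10 Gbps campus connection to a 40 Gbps FIONA path changes the time needed to move a terabyte-scale data set:

```python
def transfer_time_hours(terabytes, gbps):
    """Idealized time (hours) to move `terabytes` of data over a `gbps` link.

    Assumes decimal terabytes (1 TB = 1e12 bytes) and a perfectly
    saturated link with no protocol overhead -- an upper bound on speed.
    """
    bits = terabytes * 1e12 * 8          # convert TB to bits
    seconds = bits / (gbps * 1e9)        # divide by link rate in bits/sec
    return seconds / 3600

# Moving a 10 TB data set: original 10 Gbps link vs. a 40 Gbps FIONA path
print(round(transfer_time_hours(10, 10), 2))   # about 2.22 hours
print(round(transfer_time_hours(10, 40), 2))   # about 0.56 hours (~33 min)
```

In practice, real transfer rates depend on disk I/O, tuning, and competing traffic, which is exactly the gap that purpose-built data transfer nodes like FIONA boxes are designed to close.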

Computational astrophysicists at UCSC now regularly use the supercomputing resources at NERSC and routinely transfer terabytes of data between Hyades and NERSC. For example, the enhanced speed has greatly facilitated work by cosmology researchers Piero Madau and Joel Primack, who have been using supercomputers to simulate and visualize the evolution of the universe and the formation of galaxies while comparing the predictions of these theories to the latest observational data. The new connection also supports astrophysicists Enrico Ramirez-Ruiz and Stan Woosley, who use supercomputers to simulate violent explosive events like supernovae and gamma-ray bursts.

Brad Smith, Interim Vice Chancellor of IT Services at UCSC and PI on the NSF grants used to fund the Science DMZ work, said: “The ITS division at UCSC is thrilled to be able to facilitate this important astrophysics research. With funding from NSF we were able to build a 100 Gbps connected Science DMZ, and through collaboration with CENIC and the PRP project we were able to connect our Science DMZ with important data sources around the country such as NERSC. It is through highly collaborative projects like this that UCSC continues to deliver on its mission to keep California on the cutting edge of scientific discoveries.”

“By setting up a campus Science DMZ, using the data-transfer node infrastructure at the NERSC facility, and using the PRP cyberinfrastructure running over CENIC’s 100 Gbps optical links, UCSC is now able to transfer data sets at faster and faster speeds. The scientific achievements in this award are enabled by high-functioning physical and human networks, both of which are essential and notable,” said Louis Fox, President and CEO of CENIC.

The CENIC Innovations in Networking Awards are presented each year at CENIC’s annual conference to highlight the exemplary innovations that leverage ultra-high bandwidth networking, particularly where those innovations have the potential to transform the ways in which instruction and research are conducted or where they further the deployment of broadband in underserved areas. The CENIC conference will be held March 5 – 7, 2018, in Monterey, California.

About CENIC

CENIC connects California to the world — advancing education and research statewide by providing the world-class network essential for innovation, collaboration, and economic growth. This nonprofit organization operates the California Research and Education Network (CalREN), a high-capacity network designed to meet the unique requirements of over 20 million users, including the vast majority of K-20 students together with educators, researchers and individuals at other vital public-serving institutions. CENIC’s Charter Associates are part of the world’s largest education system; they include the California K-12 system, California Community Colleges, the California State University system, California’s public libraries, the University of California system, Stanford, Caltech, the Naval Postgraduate School, and USC. CENIC also provides connectivity to leading-edge institutions and industry research organizations around the world, serving the public as a catalyst for a vibrant California.

About the UCSC Department of Astronomy and Astrophysics

UC Santa Cruz is a world-renowned leader in the fields of astronomy and astrophysics. Scientists have designed new pathways for observational discovery from Earth and from space. They designed the Fermi Gamma-ray Space Telescope, the Keck Telescopes on Mauna Kea, and the Automated Planet Finder at Lick Observatory — and figured out how to fix the flawed optics on the Hubble Space Telescope. They partner in NASA exploration, including the Kepler planet-finding telescope and Cassini mission to Saturn. UC Santa Cruz manages the UC Observatories, which operates the Lick Observatory on Mt. Hamilton, oversees UC’s partnership in the W.M. Keck Observatory in Hawaii, and is the central coordinator for UC’s participation in the international Thirty Meter Telescope project.

About the Lawrence Berkeley National Laboratory’s National Energy Research Scientific Computing Center (NERSC)

The National Energy Research Scientific Computing Center (NERSC) is the primary scientific computing facility for the Office of Science in the U.S. Department of Energy. As one of the largest facilities in the world devoted to providing computational resources and expertise for basic scientific research, NERSC is a world leader in accelerating scientific discovery through computation.


Source: CENIC
