ESnet Applying Global Networking Expertise to GRETA Spectrometer for Experiments at Michigan Facility

August 4, 2020

Aug. 4, 2020 — For decades, ESnet engineers have deployed the latest technologies and developed critical tools to build a high-speed network that crisscrosses the nation and spans the Atlantic Ocean. Now, a small team is doing the same for a specialized network that will transport and organize data across distances measured in feet rather than thousands of miles.

Nuclear physicists at Berkeley Lab are building the GRETA experiment, short for Gamma Ray Energy Tracking Array. The gamma ray detector will be installed at the Department of Energy’s Facility for Rare Isotope Beams (FRIB) located at Michigan State University in East Lansing.

The GRETA spectrometer will go online with first physics in 2024. When complete, it will house an array of 120 detectors that will produce up to 480,000 messages per second, totaling 4 gigabytes of data per second, and send them through a computing cluster for analysis. While the data will only traverse a network of about 50 meters, the system has been designed so the data could easily be sent to more distant high-performance computing systems.
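
Taken at face value, those headline figures already pin down the per-detector and per-message load. A quick back-of-the-envelope check in Python, assuming the messages are spread evenly across the 120 detectors:

    # Rough sizing from the figures quoted above (illustrative only; assumes
    # an even spread of messages across all 120 detector crystals).
    DETECTORS = 120
    MESSAGES_PER_SEC = 480_000
    BYTES_PER_SEC = 4 * 10**9  # 4 gigabytes per second, aggregate

    msgs_per_detector = MESSAGES_PER_SEC / DETECTORS      # ~4,000 messages/s each
    bytes_per_message = BYTES_PER_SEC / MESSAGES_PER_SEC  # ~8.3 kB per message
    gbits_per_sec = BYTES_PER_SEC * 8 / 1e9               # ~32 Gb/s on the wire

    print(f"{msgs_per_detector:,.0f} messages/s per detector")
    print(f"{bytes_per_message:,.0f} bytes per message")
    print(f"{gbits_per_sec:.0f} Gb/s aggregate")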

“We will be analyzing everything in real time, on the fly with no intermediate storage,” said Mario Cromaz, a Berkeley Lab physicist in charge of the computing component of GRETA. “We had an idea of how we wanted the computing to work, but it was also a networking problem and we didn’t have the technical wherewithal, so we approached ESnet. That’s the kind of expertise we could only find at ESnet.”

ESnet network engineer Eli Dart, who is the computing system architect for the project, said ESnet agreed to help so that networking could be integrated into the project early in a way that is scalable and extensible. Dart also sees it as potentially the start of something even bigger — a system that is a building block for the “Superfacility” concept to seamlessly stitch together experiments, networks and computing resources.

“It’s a strategic experiment on ESnet’s part — if we can get in early and help with the design, we can try to help the experiment do things that would otherwise be very difficult,” Dart said. “In a deep collaboration like this, we can learn what’s important in the context of the experiment, and that can help us improve our services to the scientific community.”

First of its kind

A rendering of GRETA, the Gamma-Ray Energy Tracking Array. Image courtesy of Berkeley Lab.

GRETA is a gamma ray spectrometer that will measure, with unprecedented resolution, the energy of gamma rays created by nuclear collisions inside a compact sphere of high-purity germanium crystals. It consists of 120 highly segmented, large-volume coaxial germanium crystals, combined in groups of four to form 30 Quad Detector Modules.

Cromaz said GRETA is the first of its kind in that it will track the positions of the scattering paths of the gamma rays using an algorithm specifically developed for the project. This capability will help scientists understand the structure of nuclei, which is not only important for understanding the synthesis of heavy elements in stellar environments, but also for applied-science topics in nuclear energy, nuclear forensics, and stockpile stewardship.

Since the excited nuclei emitting the gamma rays are moving very fast, at a large fraction of the speed of light, the gamma rays they emit are Doppler-shifted. To accurately measure a ray’s energy, Cromaz said, scientists need to know the angle at which it was emitted. The capability to do this is what makes GRETA unique.
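
The correction itself is the standard relativistic Doppler formula, applied once tracking has supplied the emission angle. A minimal sketch of that step (a hypothetical helper, not GRETA’s analysis code), where beta is the emitter’s velocity as a fraction of the speed of light:

    import math

    def doppler_correct(e_lab_keV: float, beta: float, theta_lab_rad: float) -> float:
        """Standard relativistic Doppler correction for a gamma ray emitted in flight.

        e_lab_keV     -- energy measured in the lab frame
        beta          -- emitter velocity as a fraction of the speed of light
        theta_lab_rad -- angle between the beam direction and the ray's first
                         interaction point, as reconstructed by tracking
        Returns the energy in the emitter's rest frame.
        """
        gamma = 1.0 / math.sqrt(1.0 - beta * beta)
        return e_lab_keV * gamma * (1.0 - beta * math.cos(theta_lab_rad))

    # Example: a 1,000 keV lab-frame gamma ray from a nucleus at beta = 0.3,
    # detected 60 degrees from the beam axis.
    print(doppler_correct(1000.0, 0.3, math.radians(60)))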

The project team has just finished the design phase, and the next formal project review will be in early August. To get this far, the team built a one-quarter version of the experiment, called GRETINA, with prototypes to test the concepts; it is currently performing experiments at Michigan State University. With a favorable August review, the GRETA team anticipates asking the DOE for approval to commence construction by the end of the fiscal year.

Bringing order to the data

According to Cromaz, the detectors, built with field-programmable gate arrays, will spray out packets of data, which is relatively simple to do. The hard part is creating a buffer to catch the data and feed it over the network to the thousands of computation threads running in the cluster for analysis.

“There are actually two phases to the analysis in the gamma ray tracking array,” Cromaz said. “The first phase is locating where the interaction points of the gamma ray with the detector material occurred and the second phase is looking at all interaction points globally in the detector and subdividing/ordering them into likely gamma ray tracks.”

The first computing stage derives the number and location of interaction points. This phase depends only on the digitized signals from a given detector crystal (there are 120 crystals that tile the sphere).

“In GRETA, it’s advantageous to arrange things this way as converting the raw digitized waveforms to interaction points — essentially a set of x, y, z coordinates and energies — reduces the data volume by an order of magnitude,” Cromaz said. “This reduces the load on the second phase, the global event builder, and allows us to implement it on a single node, which simplifies the overall design.”
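
To make the scale of that reduction concrete, an interaction point needs little more than a position, an energy, a crystal identifier, and a timestamp. A minimal sketch of such a record (the field names and layout here are illustrative assumptions, not GRETA’s actual data format):

    import struct
    from dataclasses import dataclass

    @dataclass
    class InteractionPoint:
        crystal_id: int   # which of the 120 crystals produced the hit
        timestamp: int    # global clock ticks
        x: float          # position within the crystal, in mm
        y: float
        z: float
        energy: float     # deposited energy, in keV

        def pack(self) -> bytes:
            # One fixed-size, 28-byte record per interaction point.
            return struct.pack("<IQ4f", self.crystal_id, self.timestamp,
                               self.x, self.y, self.z, self.energy)

Packed this way, each interaction point occupies a few tens of bytes, compared with the kilobytes of raw waveform samples from which it was derived, which is consistent with the order-of-magnitude reduction Cromaz describes.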

Eric Pouyoul, who leads ESnet’s testbed efforts, designed the forward buffer to quickly collect the data, which will then be pulled into analysis jobs by the computing cluster. The forward buffer must receive the high-speed packet streams from 120 detectors with zero packet loss, and then feed the data to the cluster asynchronously.
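
Conceptually, the forward buffer follows a catch-first, forward-later pattern: drain the wire into deep memory as fast as possible, then let the analysis pull from that memory at its own pace. A minimal, single-socket sketch of the idea (illustrative only; the actual forward buffer must sustain 120 streams at 4 gigabytes per second with zero loss, which this toy version cannot):

    import queue
    import socket
    import threading

    PACKET_BUFFER = queue.Queue(maxsize=1_000_000)  # deep in-memory buffer

    def receiver(host: str = "0.0.0.0", port: int = 9000) -> None:
        # Drain the socket as fast as possible; do no analysis here.
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 64 * 1024 * 1024)
        sock.bind((host, port))
        while True:
            data, _addr = sock.recvfrom(65535)
            PACKET_BUFFER.put(data)          # catch first, analyze later

    def analyze(packet: bytes) -> None:
        pass                                 # placeholder for the real analysis

    def worker() -> None:
        # Workers pull from the buffer asynchronously, at their own pace.
        while True:
            analyze(PACKET_BUFFER.get())

    if __name__ == "__main__":
        threading.Thread(target=receiver, daemon=True).start()
        for _ in range(8):
            threading.Thread(target=worker, daemon=True).start()
        threading.Event().wait()             # keep the process alive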

Pouyoul said the project was challenging on a number of levels, from the physics involved to the nature of the data to the demands of real-time processing. The first step was to write the computing code and algorithms for handling the data. Although he has written high performance code in the past, this project required him to use other skills he’s developed over the years. Once he had the software, he needed to make sure it could handle the outpouring of data.

“The simulation of the crystals was relatively easy,” he said. “But the simulation of the physics, the nuclear behavior at the heart of GRETA, I never did anything like this before.”

Since not all of the crystals would detect every interaction, Pouyoul used a statistical model to recreate what would happen inside the detector. He also had to make the code efficient so it could run on the actual hardware GRETA will use. “I was able to build the model of the physics inside GRETA,” he said, “but don’t expect me to really understand it.”

“The first phase is the most computationally intensive part of the process, as the maximum data generation rate is 480,000 calculations per second and each calculation requires about five milliseconds per CPU core, hence the requirement for a cluster,” Cromaz said.
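
The arithmetic behind that cluster requirement is easy to check from those two numbers alone: 480,000 calculations per second at roughly 5 milliseconds of CPU time each works out to about 2,400 fully busy cores, before any headroom. A quick sketch:

    # Rough core-count estimate for the first (signal-decomposition) phase,
    # using only the figures quoted above.
    events_per_sec = 480_000
    cpu_seconds_per_event = 0.005  # ~5 ms of CPU time per calculation

    cores_needed = events_per_sec * cpu_seconds_per_event
    print(cores_needed)  # 2400.0 -> roughly 2,400 cores at full load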

From there, the data will pass through a second system designed by Pouyoul, called the “global event builder.” The software looks at the timestamps on all of the incoming data and reassembles it into a single stream ordered by timestamp. The algorithm also determines which event each piece of data belongs to and groups the pieces accordingly. This data will then be stored for additional analysis based on timestamps and events.
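
In outline, an event builder of this kind merges many time-sorted streams into one stream ordered by timestamp, then groups records that fall within a coincidence window into events. A hypothetical sketch of that pattern (the names and window value are assumptions, not the GRETA implementation):

    import heapq
    from typing import Iterable, Iterator, List, Tuple

    Record = Tuple[int, bytes]  # (timestamp, interaction-point data)

    def build_events(streams: Iterable[Iterable[Record]],
                     window: int = 100) -> Iterator[List[Record]]:
        # Merge the already time-sorted per-crystal streams into one stream,
        # then cut it into events wherever the time gap exceeds the window.
        merged = heapq.merge(*streams)
        event: List[Record] = []
        for ts, payload in merged:
            if event and ts - event[0][0] > window:
                yield event               # close out the previous event
                event = []
            event.append((ts, payload))
        if event:
            yield event

    # Toy example with three per-crystal streams:
    s1 = [(10, b"a"), (500, b"d")]
    s2 = [(12, b"b"), (505, b"e")]
    s3 = [(15, b"c")]
    for ev in build_events([s1, s2, s3]):
        print([t for t, _ in ev])          # [10, 12, 15] then [500, 505]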

“This has to happen in real time,” said Pouyoul, who called the project the most exciting work he has done in his 11 years at the lab. “Moving the data from the events through the system to storage cannot take more than 10 seconds.”

While the GRETA project has been gratifying for ESnet, it will also provide more experience toward realizing the “Superfacility” concept developed by Berkeley Lab’s Computing Sciences organization. The Superfacility framework comprises the seamless integration of experimental and observational instruments with computational and data facilities using high-speed networking. While the concept is straightforward, achieving it requires resolving any number of smaller issues, which vary by facility.

“Because it was designed to be ultimately connected to the wider network, GRETA will be Superfacility-ready,” Dart said.

“The fun part of all this is that we would like to see GRETA be a proving ground for this type of environment and then see it be widely adopted,” Dart said. “In fact, we’ve already received inquiries from other sites. If we can help others take advantage of what we’ve learned, then everybody wins.”

About ESnet

The Energy Sciences Network (ESnet) is a high-performance, unclassified network built to support scientific research. Funded by the U.S. Department of Energy’s Office of Science (SC) and managed by Lawrence Berkeley National Laboratory, ESnet provides services to more than 50 DOE research sites, including the entire National Laboratory system, its supercomputing facilities, and its major scientific instruments. ESnet also connects to 140 research and commercial networks, permitting DOE-funded scientists to productively collaborate with partners around the world.


Source: Jon Bashor, ESnet
