Durham University’s Institute for Computational Cosmology (ICC) is home to the COSMA series of supercomputers (short for “cosmology machine”). COSMA—now in its eighth iteration, COSMA8—has been working to answer fundamental questions about the universe since 2001. Over the intervening decades, though, computational cosmology has evolved—and now, Durham University is working with Rockport Networks to test the suitability of switchless networks for ICC workloads.
Networking and the universe
“We basically focus on huge simulations of the universe, starting with the Big Bang and propagating the universe through time, evolving it over time and so on,” explained Alastair Basden, head of the COSMA HPC Service, in an interview with HPCwire. “And then what we do is compare the output of those simulations with what we see with telescopes. Of course, with telescopes, the further away you look, you’re effectively looking back through time—and so we’re able to look at the different stages of our simulations, compare those at different times and really build up a lot of statistics about how well our models match the statistics of the universe we actually see.”
“Now, of course, if we’re simulating the entire universe, it takes months of compute time run on tens of thousands of compute cores,” he continued. “We’ve also … got a high memory requirement, and of course, gravity—unfortunately for us—is a long-range force, so that means that [if] you’ve got nodes simulating one part of the universe, they are actually affected by other nodes (even ones that are quite far away) that are simulating other bits of the universe. So there’s a lot of information-sharing between those, which is why networking is so important for us.”
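The scale of that information-sharing follows directly from how gravity is computed. As a purely illustrative sketch (plain Python and NumPy, not the ICC's production simulation code), the direct-summation example below shows that every particle's acceleration depends on every other particle; once particles are split across compute nodes, each node therefore needs data, in some summarized form, from all of the others.

```python
import numpy as np

# Toy direct-summation gravity: every particle feels every other particle.
# In a distributed simulation the particles are split across compute nodes,
# so each node still needs (at least summarized) information about the
# particles held by every other node -- hence the heavy network traffic.

G = 1.0  # gravitational constant in code units

def accelerations(pos, mass, softening=1e-2):
    """Return the gravitational acceleration on each particle."""
    # Pairwise separation vectors: diff[i, j] = pos[j] - pos[i]
    diff = pos[None, :, :] - pos[:, None, :]
    dist2 = (diff ** 2).sum(axis=-1) + softening ** 2
    inv_r3 = dist2 ** -1.5
    np.fill_diagonal(inv_r3, 0.0)  # no self-interaction
    return G * (diff * (mass[None, :, None] * inv_r3[:, :, None])).sum(axis=1)

rng = np.random.default_rng(42)
pos = rng.uniform(0.0, 1.0, size=(1000, 3))  # 1,000 particles in a unit box
mass = np.ones(1000)

acc = accelerations(pos, mass)
print(acc.shape)  # (1000, 3): each acceleration depends on all other particles
```

Production cosmology codes replace this O(N²) sum with tree and mesh approximations, but the long-range coupling, and with it the node-to-node traffic, remains.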
Testing Rockport at the ICC
Enter Rockport Networks. Founded in 2012, Rockport offers switchless networks, wherein the nodes connect directly to other nodes rather than to switches. Rockport touts this solution as more scalable and lower-latency, resulting in faster workload completion times and reduced energy costs. “The network [for HPC systems] really hasn’t changed in 20, maybe even 30 years—same architectures being used,” said Rockport Networks CTO Matt Williams. “That’s causing a lot of challenges as the compute and storage are being bottlenecked by that pretty old architecture.”
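To make the switchless idea concrete, the toy sketch below wires 16 nodes into a small torus, each linked directly to four neighbors, and uses breadth-first search to count the hops between them. This is purely illustrative; the article does not detail Rockport's actual fabric topology or routing.

```python
from collections import deque
from itertools import product

# Illustrative direct-connect layout: nodes on a 4x4 torus, each wired
# straight to four neighbours with no central switch. (A toy topology --
# not a description of Rockport's actual fabric.)

SIDE = 4
nodes = list(product(range(SIDE), repeat=2))

def neighbours(node):
    x, y = node
    return [((x + 1) % SIDE, y), ((x - 1) % SIDE, y),
            (x, (y + 1) % SIDE), (x, (y - 1) % SIDE)]

def hops(src):
    """Breadth-first search: minimum hop count from src to every node."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        cur = queue.popleft()
        for nxt in neighbours(cur):
            if nxt not in dist:
                dist[nxt] = dist[cur] + 1
                queue.append(nxt)
    return dist

all_hops = [d for src in nodes for d in hops(src).values() if d > 0]
print(f"nodes: {len(nodes)}, max hops: {max(all_hops)}, "
      f"mean hops: {sum(all_hops) / len(all_hops):.2f}")
```

In a layout like this, traffic is relayed by the nodes' own network hardware rather than funneled through central switches, which is the kind of property Rockport points to when it argues its fabric scales better and degrades less under congestion.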
For more than a year, the ICC has been testing Rockport hardware on DINE (the Durham Intelligent NIC Environment), a 24-node system that Basden described as “very much experimental.” The ICC installed Rockport network cards in 16 of those nodes and has been benchmarking the results.
“We’ve not been benchmarking things like HPL or HPCG, the standard HPC benchmarking codes, but rather we’ve been looking at the domain-specific scientific workloads,” Basden said. “We run this code, and then we artificially add congestion, so this is to simulate noisy neighbors—other codes that will be running at the same time doing other stuff, like pulling data out of storage or writing snapshots.”
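The article does not spell out how that congestion is injected, but the sketch below shows one common approach: start background traffic streams between some node pairs (here with iperf3 over hypothetical hostnames and a placeholder workload command) and time the same workload with and without the noise. It is only an illustration of the idea, not the ICC's actual test harness.

```python
import subprocess
import time

# One way to emulate "noisy neighbours": saturate links between some node
# pairs with iperf3 traffic while timing the real workload on the others.
# Hostnames and the workload command are placeholders.

CONGESTOR_PAIRS = [("node13", "node14"), ("node15", "node16")]  # hypothetical
WORKLOAD = ["mpirun", "-np", "96", "./simulation"]              # hypothetical

def start_congestors(duration_s):
    """Launch one iperf3 client per noisy-neighbour pair (servers assumed running)."""
    procs = []
    for client, server in CONGESTOR_PAIRS:
        cmd = ["ssh", client, "iperf3", "-c", server, "-t", str(duration_s), "-P", "8"]
        procs.append(subprocess.Popen(cmd, stdout=subprocess.DEVNULL))
    return procs

def timed_run(command):
    start = time.perf_counter()
    subprocess.run(command, check=True)
    return time.perf_counter() - start

baseline = timed_run(WORKLOAD)                  # quiet network
congestors = start_congestors(duration_s=3600)  # noisy neighbours start
congested = timed_run(WORKLOAD)
for p in congestors:
    p.terminate()

print(f"baseline: {baseline:.1f}s, congested: {congested:.1f}s, "
      f"slowdown: {congested / baseline:.2f}x")
```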

The results have been encouraging, as seen above. The team tested three scenarios: one with eight nodes running the workload and four pairs of noisy-neighbor nodes causing congestion; one with 12 nodes running the workload and two pairs of noisy-neighbor nodes; and one with all 16 running the workload. Rockport had a strong showing across the board, particularly in scenarios where the noisy-neighbor nodes came into play.
“As we increase the number of pairs of nodes there, we see that on the InfiniBand network, the performance worsens, so the runtime takes longer,” Basden continued. “Whereas, on the Rockport, it handles that congestion very well and basically isn’t really affected.”
Scaling up with COSMA7
In the wake of that promising initial testing, the ICC has received funding from two UK programs (DiRAC and the exascale-focused ExCALIBUR initiative) to scale up testing of Rockport solutions using COSMA7, the second-newest system in the ICC lineup. COSMA7 has 452 compute nodes, each equipped with 512 GB of memory and dual Intel Xeon Gold 5120 CPUs.

“What we plan to do is we plan to take part of COSMA7 … to split that in half,” Basden said. “Currently, it’s all InfiniBand. We’re going to remove the InfiniBand cards from half the nodes and replace them with Rockport Networks cards instead. And what this allows us to do is—still on quite a sizable system, two quite sizable systems—do direct performance comparisons … and really be able to study performance under congestion. So we’re able to do both artificial congestion, but also real-world congestion, since these will be production systems running live on lots of users’ different codes and looking at how performance … is affected by those.”
The retrofitting of COSMA7 began this week and is expected to take a couple of days for installation and another week or so for testing. Once the network is ready, the ICC will run two separate queues: one for the InfiniBand partition and one for the Rockport partition. While the researchers plan to run controlled benchmarks on all or part of the system, they are also keen to see how both networks hold up under real-world congestion and a variety of workloads.
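A like-for-like comparison across the two queues could be as simple as submitting the same job script to each and comparing runtimes. The sketch below assumes a Slurm scheduler and uses hypothetical partition and script names; it is not the ICC's actual benchmarking setup.

```python
import subprocess
import time

# Sketch: submit the same benchmark job to each queue and compare wall time,
# assuming a Slurm scheduler. Partition names and the job script are hypothetical.

PARTITIONS = ["cosma7-infiniband", "cosma7-rockport"]
JOB_SCRIPT = "benchmark.sbatch"  # same scientific workload for both runs

for partition in PARTITIONS:
    start = time.perf_counter()
    # --wait blocks until the job finishes, so this includes queueing time;
    # the job's own timing output would give the cleaner number.
    subprocess.run(["sbatch", "--wait", "--partition", partition, JOB_SCRIPT],
                   check=True)
    elapsed = time.perf_counter() - start
    print(f"{partition}: {elapsed:.1f} s wall time")
```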
Basden, for his part, is eager to see whether Rockport is a technology to follow for future supercomputers. “We’ve already seen [improvements] in the 30 percent mark,” he said of the Rockport installation on DINE, “and we can only expect that to get better.”
Rockport is equally enthusiastic about the collaboration. “Alastair and his team are going to give us really good insights into what’s working well in the technology and where improvements can be made,” Williams said, adding: “I love simulating the universe—it’s such a cool application of HPC.”
Related Reading: TACC and Rockport Networks
Similar testing of Rockport hardware has been ongoing at the Texas Advanced Computing Center (TACC) for some time. Last year, TACC (which has become a “Rockport Center of Excellence”) installed Rockport across 396 nodes of the 8,008-node Frontera system, which ranked 13th on the fall 2021 Top500 list. Frontera also uses InfiniBand as its primary interconnect. Late last year, TACC reported “promising initial results in terms of congestion and latency control.”