Data Vortex Users Contemplate the Future of Supercomputing

By Tiffany Trader

October 19, 2017

Last month (Sept. 11-12), HPC networking company Data Vortex held its inaugural users group meeting at Pacific Northwest National Laboratory (PNNL), bringing together about 30 participants from industry, government and academia to share their experiences with Data Vortex machines and to have a larger conversation about transformational computer science and what future computers will look like.

Coke Reed and John Johnson with PEPSY at PNNL

The meeting opened with Data Vortex Founder and Chairman Dr. Coke Reed describing the “Spirit of Data Vortex,” the self-routing, congestion-free computing network that he invented. Reed’s talk was followed by a series of tutorials and sessions on programming, software, and architectural decisions for the Data Vortex. A lively panel discussion got everyone thinking about the limits of current computing and the exciting potential of revolutionary approaches. Day two included presentations from the user community on the real science being conducted on Data Vortex computers. Beowulf cluster inventor Thomas Sterling gave the closing keynote, tracing the history of computer science from antiquity to the present.

“This is a new technology but it’s mostly from my perspective an opportunity to start rethinking from the ground up and move a little bit from the evolutionary to the revolutionary aspect,” shared user meeting host PNNL research scientist Roberto Gioiosa in an interview with HPCwire. “It’s an opportunity to start doing something different and working on how you design your algorithm, run your programs. The idea that it’s okay to do something revolutionary is an important driver and it makes people start thinking differently.”

Roberto Gioiosa with JOLT at PNNL

“You had that technical exchange that you’d typically see in a user group,” added John Johnson, PNNL’s deputy director for the computing division. “But since we’re looking at a transformational technology, it provided the opportunity for folks to step back and look at computing at a broader level. There was a lot of discussion about how we’re reaching the end of Moore’s law and what’s beyond Moore’s computing – the kind of technologies we are trying to focus on, the transformational computer science. The discussion actually was in some sense, do we need to rethink the entire computing paradigm? When you have new technologies that do things in a very very different way and are very successful in doing that, does that give you the opportunity to start rethinking not just the network, but rethinking the processor, rethinking the memory, rethinking input and output and also rethinking how those are integrated as well?”

The heart of the Data Vortex supercomputer is the Data Vortex interconnection network, designed for both traditional HPC and emerging irregular and data analytics workloads. Consisting of a congestion-free, high-radix network switch and a Vortex Interconnection Controller (VIC) installed on commodity compute nodes, the Data Vortex network enables the transfer of fine-grained network packets at a high injection rate.

The approach stands in contrast to existing crossbar-based networks. Reed explained, “The crossbar switch is set with software and as the switches grow in size and clock-rate, that’s what forces packets to be so long. We have a self-routing network. There is no software management system of the network and that’s how we’re able to have packets with 64-bit headers and 64-bit payloads. Our next-gen machine will have different networks to carry different sized packets. It’s kind of complicated really but it’s really beautiful. We believe we will be a very attractive network choice for exascale.”
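For illustration, a fine-grained packet with a 64-bit header and a 64-bit payload might be modeled in C as below. This is a minimal sketch assuming a plausible field split; Data Vortex has not published its actual header layout, so the field meanings here are hypothetical.

```c
#include <stdint.h>

/* Hypothetical fine-grained packet: one 64-bit header plus one
   64-bit payload, matching the sizes Reed describes. The header
   contents are illustrative only; the real Data Vortex header
   format is not public. */
typedef struct {
    uint64_t header;   /* assumed: destination node/VIC plus routing bits */
    uint64_t payload;  /* a single 64-bit word of application data */
} dv_packet_t;
```

The appeal of packets this small is that a single remote word can travel as its own self-routed unit, rather than being amortized into the long messages that software-managed crossbar networks require.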

Data Vortex is targeting problems that require massive data movement, short packet movement or non-deterministic data movement — examples include sparse linear algebra, big data analytics, branching algorithms and fast Fourier transforms.
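To see why such workloads stress conventional networks, consider one row of a distributed sparse matrix-vector multiply: every nonzero can demand a single 64-bit vector element from a different node. The sketch below is hypothetical; dv_get_word() and the block distribution are assumed for illustration and are not a published Data Vortex API.

```c
#include <stddef.h>

/* Hypothetical single-word remote read over a fine-grained network.
   dv_get_word() is an assumed illustration, not a real API call. */
extern double dv_get_word(int owner_node, size_t remote_index);

/* Owner of a block-distributed vector element (illustrative). */
static int owner_of(size_t col, size_t block) { return (int)(col / block); }

/* One row of a distributed sparse matrix-vector multiply (CSR form).
   Each nonzero touches an essentially arbitrary vector element, often
   on a remote node, so the traffic is many independent 64-bit reads:
   exactly the short, non-deterministic packets described above. */
double spmv_row(const double *vals, const size_t *cols,
                size_t row_start, size_t row_end, size_t block)
{
    double sum = 0.0;
    for (size_t k = row_start; k < row_end; k++) {
        double x = dv_get_word(owner_of(cols[k], block), cols[k] % block);
        sum += vals[k] * x;  /* one fine-grained remote read per nonzero */
    }
    return sum;
}
```

On a network tuned for large messages, each of these single-word reads would either pay full message overhead or force the programmer to batch requests; a high-injection-rate, fine-grained network lets the natural access pattern stand.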

The inspiration for the Data Vortex Network came to Dr. Reed in 1976. That was the year that he and Polish mathematician Dr. Krystyna Kuperberg solved Problem 110 posed by Dr. Stanislaw Ulam in the Scottish Book. The idea of Data Vortex as a data carrying, dynamical system was born and now there are more than 30 patents on the technology.

Data Vortex debuted its demonstration system, KARMA, at SC13 in Denver. A year later, the Data Vortex team publicly launched DV206 during the Supercomputing 2014 conference in New Orleans. Not long after, PNNL purchased its first Data Vortex system and named it PEPSY in honor of Coke Reed and as a nod to Python scientific libraries. In 2016, CENATE — PNNL’s proving ground for measuring, analyzing and testing new architectures — took delivery of another Data Vortex machine, which they named JOLT. In August 2017, CENATE received its second machine (PNNL’s third), MOUNTAIN DAO.

MOUNTAIN DAO comprises sixteen compute nodes (two Supermicro F627R3-FTPT+ FatTwin chassis with four servers each), each node containing two Data Vortex interface cards (VICs), and two Data Vortex switch boxes (sixteen Data Vortex two-level networks on three switch boards, configured as four groups of four).

MOUNTAIN DAO is the first multi-level Data Vortex system. Up until this generation, Data Vortex systems were all one-level machines, capable of scaling up to 64 nodes. Two-level systems extend the potential node count to 2,048, and the planned three-level systems will scale up to 65,536 nodes (each additional switch level multiplies the maximum node count by 32), pushing the company closer to its exascale goals.

With all ports utilized on the two-level MOUNTAIN DAO, applications see negligible performance difference between the one-level and two-level networks.

PNNL scientists Gioiosa and Johnson are eager to be exploring the capabilities of their newest Data Vortex system.

“If you think about traditional supercomputers, the applications have specific characteristics and the machines have evolved to match those characteristics. Scientific simulation workloads tend to be fairly regular; they send fairly large messages, so the networks we’ve been using so far are very good at doing that. But we are facing a new set of workloads coming up — big data, data analytics, machine learning, machine intelligence — and these applications do not look very much like traditional scientific computing, so it’s not surprising that the hardware we’ve been using so far is not performing very well,” said Gioiosa.

“Data Vortex provides an opportunity to run both sets of workloads, both traditional scientific applications and data analytics applications, in an efficient way, so we were very interested to see how that actually works in practice,” Gioiosa continued. “As we received the first and second systems, we started porting workloads and applications. We have done a lot of different implementations of the same algorithm to see what the best way is to implement things on these systems, and we learned while doing this, by making mistakes and talking to the vendor. The more we understood about the system, the more we changed our programs, and they became more efficient. We implemented these algorithms in ways that we couldn’t on traditional supercomputers.”

Johnson explained that having multiple systems lets them focus on multiple aspects of computer science. “On the one hand, you want to take a system and understand how to write algorithms for that system that take advantage of the existing hardware and existing structure of the system. But the other type of research that we like to do is to get in there and sort of rewire it and do different things, and put in the sensors and probes and all different things, which can help you bring different technologies together but would get in the way of porting algorithms directly to the existing architecture. So it helps to have different machines that have different purposes. It goes back to one of the philosophies we have: looking at the computer as a very specialized scientific instrument. As such, we want it to be able to perform optimally on the greatest scientific challenges in energy, environment and national security, but we also want to make sure that we are helping to design and construct and tune that system so that it can do that.”

The PNNL researchers emphasized that even though these are exploratory systems they are already running production codes.

“We can run very large applications,” said Gioiosa. “These applications are on the order of hundreds of thousands of lines of code. These are production applications, not test apps that we are just running to extract the FLOPS.”

At the forum, researchers shared how they were using Data Vortex for cutting-edge applications such as quantum computer simulation and density functional theory, a core method in computational chemistry. “These are big science codes, the kind you would expect to see running on leadership-class systems, and we heard from users who ported either the full application or parts of the application to Data Vortex,” said Johnson.

“This system is usable,” said Gioiosa. “You can run your application, you can do real science. We saw a simulation of quantum computers, and people in the audience who are actually using a quantum computer said this is great, because in quantum computing we cannot see the inside of the computer, we only see the outside. It’s advancing understanding of how quantum algorithms work, how quantum machines are progressing and what we need to do to make them mainstream. I call it science, but this means production for us; we don’t produce cars, but we produce tests and problems and come up with solutions and increase discovery and knowledge, so that is our production.”

Having held a successful first user forum, the organizers are looking ahead to future gatherings. “There are events that naturally bring us together, like Supercomputing and other big conferences, but we are keen to have this forum once every six months or every year depending on how fast we progress,” said Gioiosa. “We expect it will grow as more people who attend will go back to their institution and say, oh this was great, next time you should come too.”

What’s Next for Data Vortex

The next major step on the Data Vortex roadmap is to move away from the commodity server approach they have employed in all their machines so far to something more “custom.”

“What we had in this generation is a method of connecting commodity processors,” said Dr. Reed. “We did Intel processors connected over an x86 (PCIe) bus. Everything is fine grained in this computer except the Intel processor and the x86 bus and so the next generation we’re taking the PCIe bus out of the critical path. Our exploratory units [with commodity components] have done well but now we’re going full custom. It’s pretty exciting. We’re using exotic memories and other things.”

Data Vortex expects to come out with an interim approach using FPGA-based compute nodes by this time next year. Xilinx technology is being given serious consideration, but specific details of the implementation are still under wraps. (We expect more will be revealed at SC17.) Current generation Data Vortex switches and VICs are built with Altera Stratix V FPGAs and future network chip sets will be built with Altera Stratix 10 FPGAs.

Data Vortex has up to this point focused primarily on big science and Department of Defense-style problems, but the company is now looking to expand its user space to anywhere there is a communication bottleneck. Hyperscale and embedded systems hold potential as new markets.

In addition to building its own machines, Data Vortex is inviting other people to use its interconnect in their computers or devices. In fact, the company’s primary business model is not to become a deliverer of systems. “We’ve got the core communication piece so we’re in a position now where we’re looking at compatible technologies and larger entities to incorporate this differentiating piece to their current but more importantly next-generation designs,” Data Vortex President Carolyn Coke Reed Devany explained. “What we’re all about is fine-grained data movement and that doesn’t necessarily have to be in a big system, that can be fine-grained data movement in lots of places.”
