Data Vortex Users Contemplate the Future of Supercomputing

By Tiffany Trader

October 19, 2017

Last month (Sept. 11-12), HPC networking company Data Vortex held its inaugural users group meeting at Pacific Northwest National Laboratory (PNNL), bringing together about 30 participants from industry, government and academia to share their experiences with Data Vortex machines and to hold a larger conversation about transformational computer science and what future computers will look like.

Coke Reed and John Johnson with PEPSY at PNNL

The meeting opened with Data Vortex Founder and Chairman Dr. Coke Reed describing the “Spirit of Data Vortex,” the self-routing, congestion-free computing network that he invented. Reed’s talk was followed by a series of tutorials and sessions on programming, software, and architectural decisions for the Data Vortex. A lively panel discussion got everyone thinking about the limits of current computing and the exciting potential of revolutionary approaches. Day two included presentations from the user community on the real science being conducted on Data Vortex computers. Beowulf cluster inventor Thomas Sterling gave the closing keynote, tracing the history of computer science from antiquity to the present day.

“This is a new technology but it’s mostly from my perspective an opportunity to start rethinking from the ground up and move a little bit from the evolutionary to the revolutionary aspect,” shared user meeting host PNNL research scientist Roberto Gioiosa in an interview with HPCwire. “It’s an opportunity to start doing something different and working on how you design your algorithm, run your programs. The idea that it’s okay to do something revolutionary is an important driver and it makes people start thinking differently.”

Roberto Gioiosa with JOLT at PNNL

“You had that technical exchange that you’d typically see in a user group,” added John Johnson, PNNL’s deputy director for the computing division. “But since we’re looking at a transformational technology, it provided the opportunity for folks to step back and look at computing at a broader level. There was a lot of discussion about how we’re reaching the end of Moore’s law and what’s beyond Moore’s computing – the kind of technologies we are trying to focus on, the transformational computer science. The discussion actually was in some sense, do we need to rethink the entire computing paradigm? When you have new technologies that do things in a very very different way and are very successful in doing that, does that give you the opportunity to start rethinking not just the network, but rethinking the processor, rethinking the memory, rethinking input and output and also rethinking how those are integrated as well?”

The heart of the Data Vortex supercomputer is the Data Vortex interconnection network, designed for both traditional HPC and emerging irregular and data analytics workloads. Consisting of a congestion-free, high-radix network switch and a Vortex Interconnection Controller (VIC) installed on commodity compute nodes, the Data Vortex network enables the transfer of fine-grained network packets at a high injection rate.

The approach stands in contrast to existing crossbar-based networks. Reed explained, “The crossbar switch is set with software and as the switches grow in size and clock-rate, that’s what forces packets to be so long. We have a self-routing network. There is no software management system of the network and that’s how we’re able to have packets with 64-bit headers and 64-bit payloads. Our next-gen machine will have different networks to carry different sized packets. It’s kind of complicated really but it’s really beautiful. We believe we will be a very attractive network choice for exascale.”
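
To put those numbers in perspective, here is a minimal sketch (the struct and field names are illustrative, not Data Vortex’s actual packet format) of just how small such a fine-grained packet is: a 64-bit self-routing header plus a 64-bit payload comes to 16 bytes, far shorter than the long packets that software-managed crossbar networks favor.

```c
/* Illustrative sketch only -- not Data Vortex's real packet definition.
 * A fine-grained packet with a 64-bit self-routing header and a 64-bit
 * payload occupies just 16 bytes. */
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint64_t header;   /* routing/target information, 64 bits */
    uint64_t payload;  /* a single 64-bit datum */
} dv_packet_t;         /* hypothetical name, for illustration */

int main(void) {
    printf("fine-grained packet size: %zu bytes\n", sizeof(dv_packet_t));
    return 0;
}
```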

Data Vortex is targeting all problems that require either massive data movement, short packet movement or non-deterministic data movement — examples include sparse linear algebra, big data analytics, branching algorithms and fast Fourier transforms.
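
As a concrete illustration of why such workloads stress conventional networks, the toy sketch below (plain C, not Data Vortex code) walks a sparse matrix in CSR format distributed across two nodes and counts the off-node element reads a sparse matrix-vector multiply would trigger. Each one is a tiny, data-dependent message of roughly eight bytes, exactly the kind of fine-grained, non-deterministic traffic described above.

```c
/* Illustrative sketch: why sparse kernels generate fine-grained,
 * data-dependent traffic. For y = A*x with A in CSR format and x
 * block-distributed across nodes, every nonzero whose column lives
 * off-node implies one small remote read (~8 bytes). */
#include <stdio.h>

int main(void) {
    /* A toy 4x4 sparse matrix in CSR form (values omitted; only the
     * access pattern matters here). */
    int row_ptr[] = {0, 2, 3, 5, 6};
    int col_idx[] = {0, 3, 1, 0, 2, 3};
    int nodes = 2, rows_per_node = 2;

    long remote_reads = 0;
    for (int node = 0; node < nodes; node++) {
        for (int i = node * rows_per_node; i < (node + 1) * rows_per_node; i++) {
            for (int j = row_ptr[i]; j < row_ptr[i + 1]; j++) {
                int owner = col_idx[j] / rows_per_node;  /* node holding x[col] */
                if (owner != node)
                    remote_reads++;   /* one small, unpredictable remote read */
            }
        }
    }
    printf("off-node element reads: %ld (each ~8 bytes)\n", remote_reads);
    return 0;
}
```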

The inspiration for the Data Vortex Network came to Dr. Reed in 1976. That was the year that he and Polish mathematician Dr. Krystyna Kuperberg solved Problem 110 posed by Dr. Stanislaw Ulam in the Scottish Book. The idea of Data Vortex as a data carrying, dynamical system was born and now there are more than 30 patents on the technology.

Data Vortex debuted its demonstration system, KARMA, at SC13 in Denver. A year later, the Data Vortex team publicly launched DV206 during the Supercomputing 2014 conference in New Orleans. Not long after, PNNL purchased its first Data Vortex system and named it PEPSY in honor of Coke Reed and as a nod to Python scientific libraries. In 2016, CENATE — PNNL’s proving ground for measuring, analyzing and testing new architectures — took delivery of another Data Vortex machine, which they named JOLT. In August 2017, CENATE received its second machine (PNNL’s third), MOUNTAIN DAO.

MOUNTAIN DAO comprises sixteen compute nodes (two Supermicro F627R3-FTPT+ FatTwin chassis with four servers each), each containing two Data Vortex interface cards (VICs), and two Data Vortex switch boxes (sixteen Data Vortex two-level networks, on three switch boards, configured as four groups of four).

MOUNTAIN DAO is the first multi-level Data Vortex system. Until this generation, Data Vortex systems were all one-level machines, capable of scaling up to 64 nodes. Two-level systems extend the potential node count to 2,048, and the three-level systems the company is planning will scale up to 65,536 nodes, pushing Data Vortex closer to its exascale goals.
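
A quick back-of-the-envelope check of that progression (the growth factor of 32 per added level is inferred from the figures above, not taken from Data Vortex documentation):

```c
/* Scaling pattern implied by the article's figures: one level tops out at
 * 64 nodes, two levels at 2,048, three at 65,536 -- i.e. each added switch
 * level appears to multiply the maximum node count by 32. The factor of 32
 * is an inference from those numbers, not a vendor specification. */
#include <stdio.h>

int main(void) {
    long max_nodes = 64;                     /* one-level system */
    for (int levels = 1; levels <= 3; levels++) {
        printf("%d-level system: up to %ld nodes\n", levels, max_nodes);
        max_nodes *= 32;                     /* assumed per-level growth factor */
    }
    return 0;
}
```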

With all ports utilized on the two-level MOUNTAIN DAO, applications show negligible performance difference between one-level and two-level communication.

PNNL scientists Gioiosa and Johnson are eager to be exploring the capabilities of their newest Data Vortex system.

“If you think about traditional supercomputers, the applications have specific characteristics and the machines have evolved to match those characteristics. Scientific simulation workloads tend to be fairly regular; they send fairly large messages, so the networks we’ve been using so far are very good at doing that. But we are facing a new set of workloads coming up — big data, data analytics, machine learning, machine intelligence — and these applications do not look very much like traditional scientific computing, so it’s not surprising that the hardware we’ve been using so far is not performing very well,” said Gioiosa.

“Data Vortex provides an opportunity to run both sets of workloads, both traditional scientific applications and data analytics applications, in an efficient way, so we were very interested to see how that was actually working in practice,” Gioiosa continued. “So as we received the first and second system, we started porting workloads, porting applications. We have done a lot of different implementations of the same algorithm to see what is the best way to implement things on these systems, and we learned while doing this, making mistakes and talking to the vendor. The more we understood about the system, the more we changed our programs, and they became more efficient. We implemented these algorithms in ways that we couldn’t on traditional supercomputers.”

Johnson explained that having multiple systems lets them focus on multiple aspects of computer science. “On the one hand, you want to take a system and understand how to write algorithms for that system that take advantage of the existing hardware and existing structure of the system. But the other type of research we like to do is to get in there and sort of rewire it and do different things, and put in the sensors and probes and all different things, which can help you bring different technologies together but would get in the way of porting algorithms directly to the existing architecture. So having different machines serves different purposes. It goes back to one of the philosophies we have, looking at the computer as a very specialized scientific instrument, and as such we want it to be able to perform optimally on the greatest scientific challenges in energy, environment and national security, but we also want to make sure that we are helping to design and construct and tune that system so that it can do that.”

The PNNL researchers emphasized that even though these are exploratory systems they are already running production codes.

“We can run very large applications,” said Gioiosa. “These applications are on the order of hundreds of thousands of lines of code. These are production applications, not test apps that we are just running to extract the FLOPS.”

At the forum, researchers shared how they were using Data Vortex for cutting-edge applications, including quantum computer simulation and density functional theory, a core method in computational chemistry. “These are big science codes, the kind you would expect to see running on leadership-class systems, and we heard from users who ported either the full application or parts of the application to Data Vortex,” said Johnson.

“This system is usable,” said Gioiosa. “You can run your application, you can do real science. We saw a simulation of quantum computers and people in the audience who are actually using a quantum computer said this is great because in quantum computing we cannot see the inside of the computer, we only see outside. It’s advancing understanding of how quantum algorithms work and how quantum machines are progressing and what we need to do to make them mainstream. I call it science, but this means production for us; we don’t produce carts but we produce tests and problems and come up with solutions and increase discovery and knowledge so that is our production.”

Having held a successful first user forum, the organizers are looking ahead to future gatherings. “There are events that naturally bring us together, like Supercomputing and other big conferences, but we are keen to have this forum once every six months or every year depending on how fast we progress,” said Gioiosa. “We expect it will grow as more people who attend will go back to their institution and say, oh this was great, next time you should come too.”

What’s Next for Data Vortex

The next major step on the Data Vortex roadmap is to move away from the commodity server approach they have employed in all their machines so far to something more “custom.”

“What we had in this generation is a method of connecting commodity processors,” said Dr. Reed. “We did Intel processors connected over an x86 (PCIe) bus. Everything is fine grained in this computer except the Intel processor and the x86 bus and so the next generation we’re taking the PCIe bus out of the critical path. Our exploratory units [with commodity components] have done well but now we’re going full custom. It’s pretty exciting. We’re using exotic memories and other things.”

Data Vortex expects to come out with an interim approach using FPGA-based compute nodes by this time next year. Xilinx technology is being given serious consideration, but specific details of the implementation are still under wraps. (We expect more will be revealed at SC17.) Current generation Data Vortex switches and VICs are built with Altera Stratix V FPGAs and future network chip sets will be built with Altera Stratix 10 FPGAs.

Data Vortex has up to this point primarily focused on big science and Department of Defense style problems, but now they are looking at expanding the user space to explore anywhere there’s a communication bottleneck. Hyperscale and embedded systems hold potential as new market vistas.

In addition to building its own machines, Data Vortex is inviting other people to use its interconnect in their computers or devices. In fact, the company’s primary business model is not to become a deliverer of systems. “We’ve got the core communication piece so we’re in a position now where we’re looking at compatible technologies and larger entities to incorporate this differentiating piece to their current but more importantly next-generation designs,” Data Vortex President Carolyn Coke Reed Devany explained. “What we’re all about is fine-grained data movement and that doesn’t necessarily have to be in a big system, that can be fine-grained data movement in lots of places.”
