Convey Bends to Inflection Point

By Nicole Hemsoth

October 7, 2011

It’s difficult to ignore the momentum of conversations about data-intensive computing and its bleed-over into high performance computing, especially as we look toward November’s SC11 event, which puts the emphasis squarely on big data-driven problems.

A number of traditional HPC players, including SGI, Cray and others, are making a push to associate some of their key systems with the specific needs of data-intensive computing. Following a conversation this week with Convey Computer’s Director of Marketing, Bob Masson, and Kevin Wadleigh, the company’s resident math libraries wizard, it was clear that Convey plans to be all over the big data map, and that this trend will continue to demand new architectures that trade the zing of FLOPS for more efficient compute and memory architectures.

The duo from Convey talked at length about the inflection point that is happening in HPC. According to Masson, this shift in emphasis to data-intensive computing won’t ever replace the need for numerically-intensive computing, but opens a new realm within HPC—one that is timed perfectly with the steady influx of data from an unprecedented number of sources.

A recent whitepaper (the one that prompted our chat with Convey) noted that HPC is “no longer just numerically intensive, it’s now data-intensive—with more and different demands on HPC system architectures.” Convey claims that the “whole new HPC” gathered under the banner of data-intensive computing has a number of distinguishing characteristics: data sizes in the multi-petabyte range and beyond, a high ratio of memory accesses to computation, highly parallelizable read access and computation, and highly dynamic data that often must be processed in real time.

Wadleigh put this move in historical context, pointing to the rapid changes of the 1980s as the industry cycled through a number of architectures meant to maximize floating point performance. The industry eventually picked its champion, but the process took many years, arguably decades, before the most efficient and best-performing architecture emerged.

He says the same process is happening again, hence the idea of an “inflection point” in high performance computing. The power of the FLOP will not be diminished, but as graph algorithms are increasingly deployed to tackle big data problems, building systems that run them efficiently will force major changes in how we think about architecture. Of course, if you ask Bob or Kevin, that evolution is rooted in some of the unique FPGA coprocessor and memory subsystem designs their company offers via its Hybrid Core Architecture.

These are all traits collected under the “big data” or “data-intensive” computing category, but one feature in particular, the prevalence of graph algorithms, is of special importance. Problems involving large sets of structured and unstructured data elements are becoming more common in research and enterprise, a fact that warranted a new set of benchmarks to measure graph algorithm performance.

The Graph 500, the preeminent benchmark for data-intensive computing, measures system performance on graph problems using a standardized measure of how quickly a system can traverse a large graph. While the benchmark could fill a short book on its own, suffice to say the Graph 500 website has plenty of detail about the algorithm, along with details about the top-performing systems announced at ISC this past summer. Convey feels confident about its position on the list (after not placing in the top ten on the last list in June) and notes that this year’s Graph 500 champions, to be announced at SC11, will either have spent a boatload of money on sheer cores and memory or will have come up with more efficient approaches to the efficiency and performance challenges of these problems.
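For readers unfamiliar with the benchmark, its core kernel is a breadth-first search (BFS) from a chosen root vertex, with results reported in traversed edges per second (TEPS). The sketch below is a simple serial BFS over a graph stored in compressed sparse row form; it is purely illustrative, not the Graph 500 reference code and not anything of Convey’s, but it shows the kind of traversal being timed.

```cpp
#include <cstddef>
#include <queue>
#include <vector>

// Minimal BFS over a graph in compressed sparse row (CSR) form. The Graph 500
// kernel is conceptually similar: traverse the graph from a root vertex, with
// performance reported in traversed edges per second (TEPS).
std::vector<int> bfs(const std::vector<std::size_t>& row_ptr,  // size n + 1
                     const std::vector<int>& adjacency,        // size m
                     int root) {
    const std::size_t n = row_ptr.size() - 1;
    std::vector<int> parent(n, -1);   // -1 means "not yet visited"
    std::queue<int> frontier;
    parent[root] = root;
    frontier.push(root);
    while (!frontier.empty()) {
        int v = frontier.front();
        frontier.pop();
        // Each neighbor lookup lands at an effectively random index in
        // 'parent' -- the scattered access pattern discussed below.
        for (std::size_t i = row_ptr[v]; i < row_ptr[v + 1]; ++i) {
            int w = adjacency[i];
            if (parent[w] == -1) {
                parent[w] = v;
                frontier.push(w);
            }
        }
    }
    return parent;
}
```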

Convey claims that when it comes to architectures needed to support this type of computing, standard x86 “pizza box” systems fall well short: they lack inherent parallelism, their memory architecture is poorly matched to the access patterns involved, and they lack synchronization primitives.

They say that systems like their Hybrid Core Architecture line solve some of these problems, bringing a range of features that users with data-intensive workloads have been asking for. Chief among the “most desirable” architectural traits of this newer class of systems is a de-emphasis of FLOPS in favor of maximizing memory subsystem performance. Accordingly, they stress their FPGA coprocessor approach, noting that such systems can be reconfigured on the fly to match an application’s compute requirements.

Convey’s high-bandwidth memory subsystem is key to improving the performance and efficiency of graph problems. Designing a memory system that returns only the data actually requested, and optimizing for aggregate bandwidth, are further pieces of the solution. Even with these features, however, the system needs to sustain thousands of concurrent outstanding memory requests, so providing top-tier multithreading capability is critical.

The question is, if HPC as a floating point-driven industry isn’t serving the architectural needs of the data-intensive computing user, what needs to change? According to Wadleigh, who has spent his career working on math libraries, the differentiator between Convey and commodity systems is the two-pronged approach of pairing large memory with a unique memory subsystem. Such a subsystem is ideal for the kinds of scatter-gather operations that graph problems demand.

Wadleigh said that “most memory systems today have their best performance when accessing memory sequentially, because memory systems bring in a cache line worth of data with eight 64-bit words. Now, as long as you’re using that, it’s great—but if you look at a lot of these graph problems, half of the accesses are to random data scattered around memory, which is very bad when you’re thinking about this for traditional architectures.” He claims that standard x86 systems, at least for these problems, have poor cache locality and poor virtual memory page table locality, and that these access patterns are also bad for distributed memory parallelism.
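To make the contrast concrete, here is a small sketch (our illustration, not Convey’s code) of the two access patterns Wadleigh describes, assuming 64-byte cache lines that hold eight 64-bit words: a sequential sum that uses every word of each fetched line, and an indexed gather, typical of graph workloads, that touches a different line on nearly every access and uses only one of its eight words.

```cpp
#include <cstddef>
#include <numeric>
#include <vector>

// Sequential sum: every word of every fetched cache line is used.
long long sum_sequential(const std::vector<long long>& data) {
    return std::accumulate(data.begin(), data.end(), 0LL);
}

// Indexed gather, as in graph workloads: each access typically lands on a
// different cache line and uses only one of its eight 64-bit words.
long long sum_gather(const std::vector<long long>& data,
                     const std::vector<std::size_t>& index) {
    long long total = 0;
    for (std::size_t i : index) {
        total += data[i];  // effectively a random address per iteration
    }
    return total;
}
```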

One other element Convey stresses is that data-intensive computing systems, at least in its own line, need hardware-based synchronization primitives. With so much parallelism involved, synchronization of reads and writes to memory has to be efficient. The company states that “when the synchronization mechanism is ‘further away’ from the operation, more time is spent waiting for the synchronization with a corresponding reduction in efficiency of parallelization.” In plain English, maintaining this synchronization at the hardware level, within the memory subsystem itself, can yield better performance.
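As a rough illustration (again ours, not Convey’s), the snippet below uses an atomic read-modify-write to keep concurrent updates to a shared counter array correct. On commodity hardware this synchronization is resolved by the cores and cache hierarchy; Convey’s argument is that an equivalent primitive implemented inside the memory subsystem sits closer to the data being synchronized, cutting the waiting described in the quote.

```cpp
#include <atomic>
#include <cstddef>
#include <vector>

// Several threads can each run this loop over a disjoint slice of 'edge_src';
// the atomic increment keeps concurrent updates to a shared counter correct.
void count_out_degrees(const std::vector<int>& edge_src,
                       std::vector<std::atomic<long long>>& degree) {
    for (int v : edge_src) {
        degree[static_cast<std::size_t>(v)].fetch_add(1, std::memory_order_relaxed);
    }
}
```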

With data-intensive computing at the heart of SC11, and companies with rich histories in HPC, Convey included, jockeying for position across both the Top500 and the Graph 500, it’s not hard to see why the Convey team regards this as an inflection point in high performance computing, or why it believes its Hybrid Core Architecture is positioned to take advantage of it.
