Reconfigurable Computing Research Pushes Forward

By Nicole Hemsoth

November 20, 2009

Despite all the recent hoopla about GPGPUs and eight-core CPUs, proponents of reconfigurable computing continue to sing the praises of FPGA-based HPC. The main advantage of reconfigurable computing, or RC for short, is that programmers can change the circuitry of the chip on the fly. Thus, in theory, the hardware can be matched to the software, rather than the other way around. While there are a handful of commercial offerings from companies such as Convey Computer, XtremeData, GiDEL, Mitrionics, and Impulse Accelerated Technologies, RC is still an area of active research.

In the U.S., the NSF Center for High-Performance Reconfigurable Computing (CHREC, pronounced “shreck”) acts as the research hub for RC, bringing together more than 30 organizations in this field. CHREC is run by Dr. Alan George, who gave an address at the SC09 Workshop on High-Performance Reconfigurable Computing Technology and Applications (HPRCTA’09) on November 15. We got the opportunity to ask Dr. George about the work going on at the Center and what he thinks RC technology can offer to high performance computing users.

HPCwire: FPGA-based reconfigurable computing has captured some loyal followers in the HPC community. What are the advantages of FPGAs for high-performance computing compared to fixed-logic architectures such as CPUs, GPUs, and the Cell processor?

Alan George: HPC is approaching a crossroads in terms of enabling technologies and their inherent strengths and weaknesses. Goals and challenges in three principal areas are vitally important yet increasingly in conflict: performance, productivity, and sustainability. For example, HPC machines lauded in the upper tier of the TOP500 list as most powerful in the world are remarkably high in performance yet also remarkably massive in size, energy, heat, and cost, all featuring programmable, fixed-logic devices such as CPUs, GPUs, and Cell. Meanwhile, throughout society, energy cost, source, and availability are a growing concern. As the life-cycle costs of energy and cooling rise to approach and exceed those of software and hardware in total cost of ownership, these technologies may become unsustainable.

By contrast, numerous research studies show that computing with reconfigurable-logic devices — FPGAs, et al. — is fundamentally superior in terms of speed and energy, due to the many advantages of adaptive, customizable hardware parallelism. Common sense confirms this comparison. Programmable fixed-logic devices, no matter their form, follow a “one size fits all” or “Jack of all trades” philosophy, with a predefined structure of parallelism that attempts to support all applications or some major subset. In contrast, the structure of parallelism in reconfigurable-logic devices can be customized, that is, reconfigured, for each application or task on the fly, making them versatile yet optimized specifically for each problem at hand. With this perspective, fixed-logic computing and accelerators are following a more evolutionary path, whereas RC is relatively new and revolutionary.

It should be noted that RC, as a new paradigm of computing, is broader than FPGA acceleration for HPC. FPGA devices are the leading commercial technology available today that is capable of RC, albeit not originally designed for RC, and thus FPGAs are the focal point for virtually all experimental research and commercial deployments, with a growing list of success stories. However, looking ahead more broadly, reconfigurable logic may be featured in future devices with a variety of structures, granularities, functionalities, etc., perhaps very similar to today’s FPGAs or perhaps quite different.

HPCwire: What role, or roles, do you see for RC technology in high performance computing and high performance embedded computing? Will RC be a niche solution in specific application areas or do you see this technology being used in general-purpose platforms that will be widely deployed?

George: Naturally, as a relatively new paradigm of computing, RC has started with emphasis in a few targeted areas, for example, aerospace and bioinformatics, where missions and users require dramatic improvements only possible with a revolutionary approach. As principal challenges — performance, productivity, and sustainability — become more pronounced, and as R&D in RC progresses, we believe the RC paradigm will mature and expand in its role and influence, eventually becoming dominant in a broad range of applications, from satellites to servers to supercomputers. We are already witnessing this trend in several sectors of high-performance embedded computing. For example, in advanced computing on space missions, high performance and versatility are critical under tight limits on energy, size, and weight. NASA, DOD, and other space-related agencies worldwide are increasingly featuring RC technologies in their platforms, as is the aerospace community in general. The driving issues in this community — again performance, productivity, and especially sustainability — are becoming increasingly important in HPC.

HPCwire: In the past couple of years, non-RC accelerators like the Cell processor and now, especially, general-purpose GPUs have been making big news in the HPC world, with major deployments planned. What has held back reconfigurable computing technology in this application space?

George: There are several reasons why Cell and GPU accelerators are more popular in HPC at present. Perhaps most obviously, they are viewed as inexpensive, due to leveraging of the gaming market. Vendors have invested heavily, in both marketing and R&D, to broaden the appeal of these devices for the HPC community. Moreover, in terms of fundamental computing principles, they are an evolutionary development in device architecture, and as such represent less risk. However, we believe that the inherent weaknesses of any fixed-logic device technology … in terms of broad applicability at speed and energy efficiency, will eventually become limiting factors.

By contrast, reconfigurable computing is a relatively new and immature paradigm of computing. Like any new paradigm, it faces R&D challenges that must be solved before it can become more broadly applicable and eventually ubiquitous. With fixed-logic computing, the user and application have no control over the underlying hardware parallelism; they simply attempt to exploit as much of it as the manufacturer has chosen to provide. With reconfigurable-logic computing, the user and application define the hardware parallelism, featuring parallelism as wide and deep as appropriate, with selectable precision, optimized data paths, etc., up to the limits of total device capacity. This tremendous advantage in parallel computing potency comes with the challenge of complexity. Thus, as is natural for any new paradigm and set of technologies, design productivity is an important challenge at present for RC in general and FPGA devices in particular, so that HPC users, and others, can take full advantage without having to be trained as electrical engineers.
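To make that contrast concrete, here is a minimal C sketch, not taken from CHREC or the interview, of the kind of kernel that maps naturally onto reconfigurable logic: the data width (12-bit samples) and the degree of parallelism (the LANES constant) are chosen by the designer to fit the problem, which is exactly the knob a fixed-logic device does not expose. The kernel, the types, and the unroll factor are illustrative assumptions only.

```c
/* Illustrative sketch only: designer-chosen precision and parallelism,
 * the properties attributed to RC above. The kernel, the 12-bit width,
 * and the LANES factor are hypothetical examples, not a CHREC tool flow. */
#include <stdint.h>
#include <stddef.h>

#define LANES 8  /* designer picks how many multiply-accumulate units exist */

/* 12-bit samples carried in 16-bit containers; on an FPGA the datapath
 * could be built exactly 12 bits wide rather than a fixed 32 or 64 bits. */
typedef int16_t sample12_t;

int32_t dot12(const sample12_t *a, const sample12_t *b, size_t n)
{
    int32_t acc[LANES] = {0};
    size_t i = 0;

    /* The inner loop body corresponds to one replicated multiply-accumulate
     * unit; hand-written HDL or an HLS tool could instantiate LANES copies
     * side by side instead of time-sharing a single fixed ALU. */
    for (; i + LANES <= n; i += LANES)
        for (size_t l = 0; l < LANES; ++l)
            acc[l] += (int32_t)a[i + l] * (int32_t)b[i + l];

    int32_t sum = 0;
    for (size_t l = 0; l < LANES; ++l)
        sum += acc[l];
    for (; i < n; ++i)               /* leftover elements, if any */
        sum += (int32_t)a[i] * (int32_t)b[i];
    return sum;
}
```

On a CPU the LANES constant changes little, since the hardware parallelism is fixed at fabrication; on reconfigurable logic it directly sets how much silicon is devoted to this one problem.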

It should be noted that this life cycle is commonplace in the history of technology. An established technology is dominant for many years, growing over a long period through evolutionary advances, until one day it is partially or wholly supplanted by a new, revolutionary technology, but only after that new technology has navigated a long and winding road of research and development. Productivity is often a key challenge for a new IT technology, as users must learn how to effectively harness and exploit the inherent advantages of the new approach.

HPCwire: What do you see on the horizon that could propel reconfigurable computing into a more mainstream role?

George: There are two major factors on the horizon that we believe will dramatically change the landscape. One is the trend in performance, productivity, and sustainability, driven by growing concerns about speed versus energy consumption with conventional technologies, which increasingly favors RC. The conventional model of computing with fixed-logic multicore devices is limited in performance per unit of energy compared to reconfigurable-logic devices. However, RC is viewed by many as lagging in effective concepts and tools that would let domain scientists and other users develop applications and harness this potency without special skills. Thus, the second factor is taming this new paradigm of computing through innovations in its technologies, so that it becomes amenable to a broader range of users. In this regard, many vendors and research groups are conducting R&D and developing new concepts, tools, and products to address this challenge. In the future, RC will become more important for a growing set of missions, applications, and users and, concomitantly, it will become more amenable to them, so that productivity is maximized alongside performance and sustainability.

HPCwire: The new Novo-G reconfigurable computing system at the NSF Center for High-Performance Reconfigurable Computing (CHREC) has been up and running for just a few months. Can you tell us about the machine and what you hope to accomplish with it?

George: Novo-G became operational in July of this year and is believed to be the most powerful RC machine ever fielded for research. Its size, cooling, and power consumption are modest by HPC standards, but they belie its computational power. For example, in our first application experiment, working with domain scientists in computational biology, sustained performance with 96 FPGAs matched that of the largest machines on the NSF TeraGrid, yet was provided by a machine that is hundreds of times lower in cost, power, cooling, size, etc.

Housed in three racks, Novo-G consists of 24 standard Linux servers, plus a head node, connected by DDR InfiniBand and GigE. Each server features a tightly coupled set of four FPGA accelerators on a ProcStar-III PCIe board from GiDEL, supported by a conventional multicore CPU, motherboard, disk, etc. Each FPGA is a Stratix-III E260 device from Altera with 254K logic elements, 768 18×18 multipliers, and more than 4GB of DDR2 memory directly attached via three banks. Altogether, Novo-G features 96 of these FPGAs, with an upgrade underway that by January will double its RC capacity to 192 FPGAs via two coupled RC boards per server.
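As a quick back-of-the-envelope check of the aggregate capacity, the short C program below simply multiplies out the per-device figures quoted above. The per-FPGA numbers are from the interview; the tally itself is ours, and the memory total treats "more than 4GB" as a 4GB lower bound.

```c
/* Back-of-the-envelope tally of Novo-G's aggregate FPGA resources,
 * using only the per-device figures quoted in the interview. */
#include <stdio.h>

int main(void)
{
    const int  fpgas_now      = 96;   /* 24 servers x 4 FPGAs              */
    const int  fpgas_upgraded = 192;  /* two boards per server after January */
    const long logic_elements_per_fpga = 254000; /* Stratix-III E260       */
    const int  multipliers_per_fpga    = 768;    /* 18x18 multipliers      */
    const double dram_gb_per_fpga      = 4.0;    /* "more than 4GB" lower bound */

    printf("Current:  %ld logic elements, %d multipliers, >= %.0f GB DDR2\n",
           fpgas_now * logic_elements_per_fpga,
           fpgas_now * multipliers_per_fpga,
           fpgas_now * dram_gb_per_fpga);
    printf("Upgraded: %ld logic elements, %d multipliers, >= %.0f GB DDR2\n",
           fpgas_upgraded * logic_elements_per_fpga,
           fpgas_upgraded * multipliers_per_fpga,
           fpgas_upgraded * dram_gb_per_fpga);
    return 0;
}
```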

The purpose of Novo-G is to support a variety of research projects in CHREC related to RC performance, productivity, and sustainability. Founded in 2007, CHREC is a national research center under the auspices of the I/UCRC program of the National Science Foundation and consists of more than 30 academic, industry, and government partners working collaboratively on research in this field. In addition, Novo-G has inspired several new collaborations with other research groups, for example Boston University and the Air Force Research Laboratory, as well as tool vendors such as Impulse Accelerated Technologies and Mitrionics.

HPCwire: Can you talk about a few of the projects at CHREC that look especially promising?

George: Ongoing research projects at the four university sites of CHREC — the University of Florida, Brigham Young University led by Dr. Brent Nelson, George Washington University led by Dr. Tarek El-Ghazawi, and Virginia Tech led by Dr. Peter Athanas — fall into four categories: productivity, architecture, partial reconfiguration, and fault tolerance. In the area of productivity, several projects are underway, crafting novel concepts for the design of RC applications and systems, including new methods and tools for design formulation and prediction, hardware virtualization, module and core reuse, design verification and optimization, and programming with high-level languages. With respect to architecture, researchers are working to characterize and optimize new and emerging devices — both fixed and reconfigurable logic — and systems, as well as methods to promote autonomous hardware reconfiguration. Both of these project areas, productivity and architecture, relate well to HPC.

Meanwhile, one of the unique features of some RC devices is the ability to reconfigure portions of the chip's hardware while other portions remain unchanged and thus operational; this powerful feature involves many research and design challenges being studied and addressed by several teams. Last but not least, as process densities increase and devices become more susceptible to faults, environments become harsher, and resources become more prone to soft or hard errors, research challenges arise in fault tolerance. In this area, CHREC researchers are developing device- and system-level RC concepts and architectures to support scenarios that require high performance, versatility, and reliability with low power, cooling, and size, be it for outer space or the HPC computer room.
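Returning for a moment to partial reconfiguration, the concept is easiest to picture from the host's side. The sketch below is purely conceptual C: every function and type in it is a hypothetical placeholder rather than a real vendor or GiDEL API, and it only illustrates the control flow described above, in which one region's configuration is rewritten while the other regions keep running.

```c
/* Conceptual host-side control flow for partial reconfiguration.
 * All functions and types here are hypothetical placeholders used to
 * illustrate the idea; they are not a real FPGA vendor API. */
#include <stdbool.h>

typedef struct fpga_region fpga_region;                /* one reconfigurable slot   */
bool  region_is_idle(fpga_region *r);                  /* hypothetical status query */
void  region_load_partial_bitstream(fpga_region *r,    /* hypothetical reprogram    */
                                    const char *path);
void  region_start(fpga_region *r);                    /* hypothetical restart      */

/* Swap the accelerator in one region while the others keep running. */
void swap_accelerator(fpga_region *target, const char *new_bitstream)
{
    /* Wait for in-flight work in the target region to drain; other
     * regions on the same device are untouched and keep computing. */
    while (!region_is_idle(target))
        ;  /* spin or sleep; omitted for brevity */

    /* Only the target region's configuration memory is rewritten. */
    region_load_partial_bitstream(target, new_bitstream);

    /* The freshly loaded module resumes alongside the unchanged regions. */
    region_start(target);
}
```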
