Reconfigurable Computing Research Pushes Forward

By Nicole Hemsoth

November 20, 2009

Despite all the recent hoopla about GPGPUs and eight-core CPUs, proponents of reconfigurable computing continue to sing the praises of FPGA-based HPC. The main advantage of reconfigurable computing, or RC for short, is that programmers are able to change the circuitry of the chip on the fly. Thus, in theory, the hardware can be matched to the software, rather than the other way around. While there are a handful of commercial offerings from companies such as Convey Computer, XtremeData, GiDEL, Mitrionics, and Impulse Accelerated Technologies, RC is still an area of active research.

In the U.S., the NSF Center for High-Performance Reconfigurable Computing (CHREC, pronounced “shreck”) acts as the research hub for RC, bringing together more than 30 organizations in this field. CHREC is run by Dr. Alan George, who gave an address at the SC09 Workshop on High-Performance Reconfigurable Computing Technology and Applications (HPRCTA’09) on November 15. We got the opportunity to ask Dr. George about the work going on at the Center and what he thinks RC technology can offer to high performance computing users.

HPCwire: FPGA-based reconfigurable computing has captured some loyal followers in the HPC community. What are the advantages of FPGAs for high-performance computing compared to fixed-logic architectures such as CPUs, GPUs, and the Cell processor?

Alan George: HPC is approaching a crossroads in terms of enabling technologies and their inherent strengths and weaknesses. Goals and challenges in three principal areas are vitally important yet increasingly in conflict: performance, productivity, and sustainability. For example, HPC machines lauded in the upper tier of the TOP500 list as most powerful in the world are remarkably high in performance yet also remarkably massive in size, energy, heat, and cost, all featuring programmable, fixed-logic devices, for example, CPU, GPU, Cell. Meanwhile, throughout society, energy cost, source, and availability are a growing concern. As the life-cycle costs of energy and cooling rise to approach and exceed those of software and hardware in total cost of ownership, these technologies may become unsustainable.

By contrast, numerous research studies show that computing with reconfigurable-logic devices — FPGAs, et al. — is fundamentally superior in terms of speed and energy, due to the many advantages of adaptive, customizable hardware parallelism. Common sense confirms this comparison. Programmable fixed-logic devices, no matter their form, feature a “one size fits all” or “Jack of all trades” philosophy, with a predefined structure of parallelism that attempts to support all applications or some major subset. In contrast, the structure of parallelism in reconfigurable-logic devices can be customized, that is, reconfigured, for each application or task on the fly, being versatile yet optimized specifically for each problem at hand. From this perspective, fixed-logic computing and accelerators are following a more evolutionary path, whereas RC is relatively new and revolutionary.

It should be noted that RC, as a new paradigm of computing, is broader than FPGA acceleration for HPC. FPGA devices are the leading commercial technology available today that is capable of RC, albeit not originally designed for RC, and thus FPGAs are the focal point for virtually all experimental research and commercial deployments, with a growing list of success stories. However, looking ahead more broadly, reconfigurable logic may be featured in future devices with a variety of structures, granularities, functionalities, etc., perhaps very similar to today’s FPGAs or perhaps quite different.

HPCwire: What role, or roles, do you see for RC technology in high performance computing and high performance embedded computing? Will RC be a niche solution in specific application areas or do you see this technology being used in general-purpose platforms that will be widely deployed?

George: Naturally, as a relatively new paradigm of computing, RC has started with emphasis in a few targeted areas, for example, aerospace and bioinformatics, where missions and users require dramatic improvement only possible by a revolutionary approach. As principal challenges — performance, productivity, and sustainability — become more pronounced, and as R&D in RC progresses, we believe that the RC paradigm will mature and expand in its role and influence to eventually become dominant in a broad range of applications, from satellites to servers to supercomputers. We are already witnessing this trend in several sectors of high-performance embedded computing. For example, in advanced computing on space missions, high performance and versatility are critical with limited energy, size, and weight. NASA, DOD, and other space-related agencies worldwide are increasingly featuring RC technologies in their platforms, as is the aerospace community in general. The driving issues in this community — again performance, productivity, and especially sustainability — are becoming increasingly important in HPC.

HPCwire: In the past couple of years, non-RC accelerators like the Cell processor and now, especially, general-purpose GPUs have been making big news in the HPC world, with major deployments planned. What has held back reconfigurable computing technology in this application space?

George: There are several reasons why Cell and GPU accelerators are more popular in HPC at present. Perhaps most obviously, they are viewed as inexpensive, due to leveraging of the gaming market. Vendors have invested heavily, in both marketing and R&D, to broaden the appeal of these devices for the HPC community. Moreover, in terms of fundamental computing principles, they are an evolutionary development in device architecture, and as such represent less risk. However, we believe that the inherent weaknesses of any fixed-logic device technology … in terms of broad applicability at speed and energy efficiency, will eventually become limiting factors.

By contrast, reconfigurable computing is a relatively new and immature paradigm of computing. Like any new paradigm, there are R&D challenges that must be solved before it can become more broadly applicable and eventually ubiquitous. With fixed-logic computing, the user and application have no control over the underlying hardware parallelism; they simply attempt to exploit as much of it as the manufacturer has chosen to provide. With reconfigurable-logic computing, the user and application define the hardware parallelism, featuring wide and deep parallelism as appropriate, with selectable precision, optimized data paths, etc., up to the limits of total device capacity. This tremendous advantage in parallel computing potency comes with the challenge of complexity. Thus, as is natural for any new paradigm and set of technologies, design productivity is an important challenge at present for RC in general and FPGA devices in particular, so that HPC users, and others, can take full advantage without having to be trained as electrical engineers.
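
To make the idea of user-defined hardware parallelism concrete, here is a minimal sketch in plain C, assuming nothing about any particular FPGA vendor or high-level-synthesis tool: a dot-product kernel written the way an HLS flow might map it onto reconfigurable logic, where the operand width (18 bits here, matching the multiplier blocks in many FPGAs) and the unroll factor (how “wide” the parallelism is) are chosen by the designer rather than fixed by the silicon. The function and constant names are hypothetical.

```c
/*
 * Illustrative sketch only (plain C, no specific HLS tool assumed):
 * the data width and the unroll factor are design parameters the
 * user selects, unlike on a fixed-logic CPU or GPU.
 */
#include <stdint.h>
#include <stdio.h>

#define WIDTH_BITS 18                    /* selectable precision            */
#define UNROLL     8                     /* selectable hardware parallelism */
#define MASK       ((1 << WIDTH_BITS) - 1)

/* Hypothetical kernel: n is assumed to be a multiple of UNROLL. */
static int64_t dot18(const int32_t *a, const int32_t *b, int n)
{
    int64_t acc[UNROLL] = {0};           /* UNROLL independent accumulators */
    for (int i = 0; i < n; i += UNROLL)
        for (int j = 0; j < UNROLL; j++) /* would unroll into parallel HW   */
            acc[j] += (int64_t)(a[i + j] & MASK) * (b[i + j] & MASK);

    int64_t sum = 0;                     /* reduction of the accumulators   */
    for (int j = 0; j < UNROLL; j++)
        sum += acc[j];
    return sum;
}

int main(void)
{
    int32_t a[16], b[16];
    for (int i = 0; i < 16; i++) { a[i] = i; b[i] = 2 * i; }
    printf("dot18 = %lld\n", (long long)dot18(a, b, 16));
    return 0;
}
```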

It should be noted that this life cycle is commonplace in the history of technology. An established technology is dominant for many years; it experiences growth over a long period of time from evolutionary advances, and one day it is partially or wholly supplanted by a new, revolutionary technology, but only after that new technology has navigated a long and winding road of research and development. Productivity is often a key challenge for a new IT technology: learning how to effectively harness and exploit the inherent advantages of the new approach.

HPCwire: What do you see on the horizon that could propel reconfigurable computing into a more mainstream role?

George: There are two major factors on the horizon that we believe will dramatically change the landscape. One factor is the trend in performance, productivity, and sustainability, driven by growing concerns about speed versus energy consumption with conventional technologies, which increasingly favors RC. The conventional model of computing with fixed-logic multicore devices is limited in performance per unit of energy as compared to reconfigurable-logic devices. However, RC is viewed by many as lagging in effective concepts and tools that would let domain scientists and other users harness this potency without special skills. Thus, the second factor is taming this new paradigm of computing and innovating in its technologies, so that it is amenable to a broader range of users. In this regard, many vendors and research groups are conducting R&D and developing new concepts, tools, and products to address this challenge. In the future, RC will become more important for a growing set of missions, applications, and users and, concomitantly, it will become more amenable to them, so that productivity is maximized alongside performance and sustainability.

HPCwire: The new Novo-G reconfigurable computing system at the NSF Center for High-Performance Reconfigurable Computing (CHREC) has been up and running for just a few months. Can you tell us about the machine and what you hope to accomplish with it?

George: Novo-G became operational in July of this year and is believed to be the most powerful RC machine ever fielded for research. Its size, cooling, and power consumption are modest by HPC standards, but those figures belie its computational strength. For example, in our first application experiment, working with domain scientists in computational biology, 96 FPGAs sustained performance matching that of the largest machines on the NSF TeraGrid, yet from a machine that is hundreds of times lower in cost, power, cooling, size, etc.

Housed in three racks, Novo-G consists of 24 standard Linux servers, plus a head node, connected by DDR InfiniBand and GigE. Each server features a tightly-coupled set of four FPGA accelerators on a ProcStar-III PCIe board from GiDEL supported by a conventional multicore CPU, motherboard, disk, etc. Each FPGA is a Stratix-III E260 device from Altera with 254K logic elements, 768 18×18 multipliers, and more than 4GB of DDR2 memory directly attached via three banks. Altogether, Novo-G features 96 of these FPGAs, with an upgrade underway that by January will double its RC capacity to 192 FPGAs via two coupled RC boards per server.
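
For readers who want the aggregate numbers, the short C program below is a back-of-the-envelope tally of the system's FPGA resources, using only the figures quoted above (the variable names are ours).

```c
/* Aggregate Novo-G resources from the per-device figures quoted above. */
#include <stdio.h>

int main(void)
{
    const int servers           = 24;
    const int fpgas_per_server  = 4;     /* one ProcStar-III board per server */
    const int fpgas             = servers * fpgas_per_server;        /* 96   */

    const int logic_elems_k     = 254;   /* per FPGA, in thousands            */
    const int multipliers_18x18 = 768;   /* per FPGA                          */
    const int ddr2_gb           = 4;     /* "more than 4GB" per FPGA, so min  */

    printf("FPGAs total:           %d\n",    fpgas);
    printf("Logic elements:        %dK\n",   fpgas * logic_elems_k);
    printf("18x18 multipliers:     %d\n",    fpgas * multipliers_18x18);
    printf("Attached DDR2 (min):   %d GB\n", fpgas * ddr2_gb);
    printf("After 2-board upgrade: %d FPGAs\n", fpgas * 2);           /* 192  */
    return 0;
}
```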

The purpose of Novo-G is to support a variety of research projects in CHREC related to RC performance, productivity, and sustainability. Founded in 2007, CHREC is a national research center under the auspices of the I/UCRC program of the National Science Foundation and consists of more than 30 academic, industry, and government partners working collaboratively on research in this field. In addition, Novo-G has inspired several new collaborations with other research groups, for example, Boston University and the Air Force Research Laboratory, as well as tool vendors such as Impulse Accelerated Technologies and Mitrionics.

HPCwire: Can you talk about a few of the projects at CHREC that look especially promising?

George: On-going research projects at the four university sites of CHREC — the University of Florida, Brigham Young University led by Dr. Brent Nelson, George Washington University led by Dr. Tarek El-Ghazawi, and Virginia Tech led by Dr. Peter Athanas — fall into four categories: productivity, architecture, partial reconfiguration, and fault tolerance. In the area of productivity, several projects are underway, crafting novel concepts for design of RC applications and systems, including new methods and tools for design formulation and prediction, hardware virtualization, module and core reuse, design verification and optimization, and programming with high-level languages. With respect to architecture, researchers are working to characterize and optimize new and emerging devices — both fixed and reconfigurable logic — and systems, as well as methods to promote autonomous hardware reconfiguration. Both of these project areas of productivity and architecture relate well to HPC.

Meanwhile, one of the unique features of some RC devices is the ability to reconfigure portions of the chip’s hardware while other portions remain unchanged and thus operational. This powerful feature, partial reconfiguration, involves many research and design challenges being studied and addressed by several teams. Last but not least, as process densities increase and devices become more susceptible to faults, environments become harsher, and resources become more prone to soft or hard errors, research challenges arise in fault tolerance. In this area, CHREC researchers are developing device- and system-level RC concepts and architectures to support scenarios that require high performance, versatility, and reliability with low power, cooling, and size, be it for outer space or the HPC computer room.
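
The interview does not name a specific fault-tolerance mechanism, but one classic approach for soft errors in reconfigurable logic, offered here purely as an illustration rather than as CHREC's method, is triple modular redundancy (TMR): instantiate the same logic three times and have a voter take the majority of the outputs. A minimal bitwise majority voter, sketched in C:

```c
/* Illustration only: bitwise majority vote over three replicated results,
 * the core of a triple-modular-redundancy (TMR) scheme. */
#include <stdint.h>
#include <stdio.h>

/* Majority of three replicas, computed bit by bit. */
static uint32_t tmr_vote(uint32_t a, uint32_t b, uint32_t c)
{
    return (a & b) | (a & c) | (b & c);
}

int main(void)
{
    uint32_t good    = 0xCAFEBABE;
    uint32_t flipped = good ^ (1u << 7);   /* one replica suffers a bit flip */

    /* The voter masks the single upset: the output matches the two
     * unaffected replicas. */
    printf("voted = 0x%08X (expected 0x%08X)\n",
           tmr_vote(good, flipped, good), good);
    return 0;
}
```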
