Reconfigurable Computing Research Pushes Forward

By Nicole Hemsoth

November 20, 2009

Despite all the recent hoopla about GPGPUs and eight-core CPUs, proponents of reconfigurable computing continue to sing the praises of FPGA-based HPC. The main advantage of reconfigurable computing, or RC for short, is that programmers are able to change the circuitry of the chip on the fly. Thus, in theory, the hardware can be matched to the software, rather than the other way around. While there are a handful of commercial offerings from companies such as Convey Computer, XtremeData, GiDEL, Mitrionics, and Impulse Accelerated Technologies, RC is still an area of active research.

In the U.S., the NSF Center for High-Performance Reconfigurable Computing (CHREC, pronounced “shreck”) acts as the research hub for RC, bringing together more than 30 organizations in this field. CHREC is run by Dr. Alan George, who gave an address at the SC09 Workshop on High-Performance Reconfigurable Computing Technology and Applications (HPRCTA’09) on November 15. We got the opportunity to ask Dr. George about the work going on at the Center and what he thinks RC technology can offer to high performance computing users.

HPCwire: FPGA-based reconfigurable computing has captured some loyal followers in the HPC community. What are the advantages of FPGAs for high-performance computing compared to fixed-logic architectures such as CPUs, GPUs, and the Cell processor?

Alan George: HPC is approaching a crossroads in terms of enabling technologies and their inherent strengths and weaknesses. Goals and challenges in three principal areas are vitally important yet increasingly in conflict: performance, productivity, and sustainability. For example, HPC machines lauded in the upper tier of the TOP500 list as the most powerful in the world are remarkably high in performance yet also remarkably massive in size, energy, heat, and cost, and all feature programmable, fixed-logic devices such as CPUs, GPUs, and Cell. Meanwhile, throughout society, energy cost, source, and availability are a growing concern. As life-cycle costs of energy and cooling rise to approach and exceed those of software and hardware in total cost of ownership, these technologies may become unsustainable.

By contrast, numerous research studies show that computing with reconfigurable-logic devices — FPGAs, et al. — is fundamentally superior in terms of speed and energy, due to the many advantages of adaptive, customizable hardware parallelism. Common sense confirms this comparison. Programmable fixed-logic devices, no matter their form, follow a “one size fits all” or “Jack of all trades” philosophy, with a predefined structure of parallelism that attempts to support all applications or some major subset of them. In contrast, the structure of parallelism in reconfigurable-logic devices can be customized, that is, reconfigured, for each application or task on the fly, being versatile yet optimized specifically for each problem at hand. With this perspective, fixed-logic computing and accelerators are following a more evolutionary path, whereas RC is relatively new and revolutionary.

It should be noted that RC, as a new paradigm of computing, is broader than FPGA acceleration for HPC. FPGA devices are the leading commercial technology available today that is capable of RC, albeit not originally designed for RC, and thus FPGAs are the focal point for virtually all experimental research and commercial deployments, with a growing list of success stories. However, looking ahead more broadly, reconfigurable logic may be featured in future devices with a variety of structures, granularities, functionalities, etc., perhaps very similar to today’s FPGAs or perhaps quite different.

HPCwire: What role, or roles, do you see for RC technology in high performance computing and high performance embedded computing? Will RC be a niche solution in specific application areas or do you see this technology being used in general-purpose platforms that will be widely deployed?

George: Naturally, as a relatively new paradigm of computing, RC has started with emphasis in a few targeted areas, for example, aerospace and bioinformatics, where missions and users require dramatic improvement only possible by a revolutionary approach. As principal challenges — performance, productivity, and sustainability — become more pronounced, and as R&D in RC progresses, we believe that the RC paradigm will mature and expand in its role and influence to eventually become dominant in a broad range of applications, from satellites to servers to supercomputers. We are already witnessing this trend in several sectors of high-performance embedded computing. For example, in advanced computing on space missions, high performance and versatility are critical with limited energy, size, and weight. NASA, DOD, and other space-related agencies worldwide are increasingly featuring RC technologies in their platforms, as is the aerospace community in general. The driving issues in this community — again performance, productivity, and especially sustainability — are becoming increasingly important in HPC.

HPCwire: In the past couple of years, non-RC accelerators like the Cell processor and now, especially, general-purpose GPUs have been making big news in the HPC world, with major deployments planned. What has held back reconfigurable computing technology in this application space?

George: There are several reasons why Cell and GPU accelerators are more popular in HPC at present. Perhaps most obviously, they are viewed as inexpensive, due to leveraging of the gaming market. Vendors have invested heavily in both marketing and R&D to broaden the appeal of these devices for the HPC community. Moreover, in terms of fundamental computing principles, they are an evolutionary development in device architecture, and as such represent less risk. However, we believe that the inherent weaknesses of any fixed-logic device technology, in terms of broad applicability at speed and energy efficiency, will eventually become limiting factors.

By contrast, reconfigurable computing is a relatively new and immature paradigm of computing. Like any new paradigm, there are R&D challenges that must be solved before it can become more broadly applicable and eventually ubiquitous. With fixed-logic computing, the user and application have no control over the underlying hardware parallelism; they simply attempt to exploit as much of it as the manufacturer has chosen to provide. With reconfigurable-logic computing, the user and application define the hardware parallelism, making it as wide and deep as appropriate, with selectable precision, optimized data paths, etc., up to the limits of total device capacity. This tremendous advantage in parallel computing potency comes with the challenge of complexity. Thus, as is natural for any new paradigm and set of technologies, design productivity is an important challenge at present for RC in general and FPGA devices in particular, so that HPC users, and others, can take full advantage without having to be trained as electrical engineers.
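To make the idea of user-defined parallelism and selectable precision concrete, here is a minimal sketch (ours, not CHREC's) of a fixed-point dot-product kernel in plain C, the kind of source an FPGA-targeted C-to-gates flow such as Impulse C or Mitrion-C might start from. The 18-bit operand width and 10-bit fraction are illustrative assumptions chosen to match common FPGA hardware multipliers; the point is that the designer, not the device vendor, decides how wide, how deep, and how precise the datapath is.

    /* Illustrative sketch only: a dot-product kernel written the way a
     * C-to-gates (HLS) flow might see it. On a CPU the loop runs serially;
     * on an FPGA the same loop can be unrolled wide (many multipliers in
     * parallel) and pipelined deep (one result per clock), with operand
     * widths trimmed to exactly the precision the application needs. The
     * 18-bit width is an assumption, not a requirement. */
    #include <stdint.h>
    #include <stdio.h>

    #define N 1024
    #define FRAC_BITS 10            /* application-selected fixed-point precision */

    /* Mask a value to a signed 18-bit range, mimicking a narrow datapath. */
    static int32_t to_s18(int32_t x)
    {
        x &= 0x3FFFF;                       /* keep 18 bits        */
        if (x & 0x20000) x -= 0x40000;      /* sign-extend bit 17  */
        return x;
    }

    /* Fixed-point dot product: the candidate for a custom FPGA datapath. */
    static int64_t dot_s18(const int32_t *a, const int32_t *b, int n)
    {
        int64_t acc = 0;
        for (int i = 0; i < n; i++) {
            /* In hardware: one 18x18 multiplier per unrolled iteration,
             * all feeding a pipelined adder tree. */
            acc += (int64_t)to_s18(a[i]) * to_s18(b[i]);
        }
        return acc >> FRAC_BITS;
    }

    int main(void)
    {
        int32_t a[N], b[N];
        for (int i = 0; i < N; i++) { a[i] = i % 100; b[i] = (i * 3) % 100; }
        printf("dot = %lld\n", (long long)dot_s18(a, b, N));
        return 0;
    }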

It should be noted that this life cycle is commonplace in the history of technology. An established technology is dominant for many years; it experiences growth over a long period of time from evolutionary advances, and one day it is partially or wholly supplanted by a new, revolutionary technology, but only after that new technology has navigated a long and winding road of research and development. Productivity is often a key challenge for a new IT technology: users and vendors must learn how to effectively harness and exploit the inherent advantages of the new approach.

HPCwire: What do you see on the horizon that could propel reconfigurable computing into a more mainstream role?

George: There are two major factors on the horizon that we believe will dramatically change the landscape. One factor is the trend in performance, productivity, and sustainability, born of growing concerns about speed versus energy consumption with conventional technologies, which increasingly favors RC. The conventional model of computing with fixed-logic multicore devices is limited in performance per unit of energy as compared to reconfigurable-logic devices. However, RC is viewed by many as lagging in effective concepts and tools that would let domain scientists and other users harness this potency without special skills. Thus, the second factor is taming this new paradigm of computing and driving innovations in its technologies, so that it is amenable to a broader range of users. In this regard, many vendors and research groups are conducting R&D and developing new concepts, tools, and products to address this challenge. In the future, RC will become more important for a growing set of missions, applications, and users and, concomitantly, it will become more amenable to them, so that productivity is maximized alongside performance and sustainability.

HPCwire: The new Novo-G reconfigurable computing system at the NSF Center for High-Performance Reconfigurable Computing (CHREC) has been up and running for just a few months. Can you tell us about the machine and what you hope to accomplish with it?

George: Novo-G became operational in July of this year and is believed to be the most powerful RC machine ever fielded for research. Its size, cooling, and power consumption are modest by HPC standards, but those figures belie its computational strength. For example, in our first application experiment, working with domain scientists in computational biology, the performance sustained with 96 FPGAs matched that of the largest machines on the NSF TeraGrid, yet was delivered by a machine that is hundreds of times lower in cost, power, cooling, size, etc.

Housed in three racks, Novo-G consists of 24 standard Linux servers, plus a head node, connected by DDR InfiniBand and GigE. Each server features a tightly coupled set of four FPGA accelerators on a ProcStar-III PCIe board from GiDEL, supported by a conventional multicore CPU, motherboard, disk, etc. Each FPGA is a Stratix-III E260 device from Altera with 254K logic elements, 768 18×18 multipliers, and more than 4GB of DDR2 memory directly attached via three banks. Altogether, Novo-G features 96 of these FPGAs, with an upgrade underway that by January will double its RC capacity to 192 FPGAs via two coupled RC boards per server.
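For a rough sense of the aggregate capacity those figures imply, the short C tally below simply multiplies out the numbers quoted above. It is a back-of-the-envelope sketch, not an official specification, and it treats the “more than 4GB” of DDR2 per FPGA as exactly 4GB.

    /* Back-of-the-envelope tally of Novo-G's aggregate reconfigurable
     * resources, using only the figures quoted in the interview. */
    #include <stdio.h>

    int main(void)
    {
        const int servers          = 24;      /* plus a head node                */
        const int fpgas_per_server = 4;       /* one ProcStar-III board each     */
        const int le_per_fpga      = 254000;  /* Stratix-III E260 logic elements */
        const int mult_per_fpga    = 768;     /* 18x18 multipliers               */
        const int gb_per_fpga      = 4;       /* "more than 4GB" treated as 4GB  */

        int fpgas = servers * fpgas_per_server;               /* 96 today */
        printf("FPGAs:             %d (192 after the planned upgrade)\n", fpgas);
        printf("Logic elements:    %.1f million\n", fpgas * le_per_fpga / 1e6);
        printf("18x18 multipliers: %d\n", fpgas * mult_per_fpga);
        printf("FPGA-attached RAM: %d GB or more\n", fpgas * gb_per_fpga);
        return 0;
    }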

The purpose of Novo-G is to support a variety of research projects in CHREC related to RC performance, productivity and sustainability. Founded in 2007, CHREC is a national research center under the auspices of the I/UCRC program of the National Science Foundation and consists of more than 30 academic, industry and government partners working collaboratively on research in this field. In addition, several new collaborations have been inspired by Novo-G, with other research groups, for example, Boston University and the Air Force Research Laboratory, as well as tools vendors such as Impulse Accelerated Technologies and Mitrionics.

HPCwire: Can you talk about a few of the projects at CHREC that look especially promising?

George: Ongoing research projects at the four university sites of CHREC — the University of Florida, Brigham Young University led by Dr. Brent Nelson, George Washington University led by Dr. Tarek El-Ghazawi, and Virginia Tech led by Dr. Peter Athanas — fall into four categories: productivity, architecture, partial reconfiguration, and fault tolerance. In the area of productivity, several projects are underway, crafting novel concepts for the design of RC applications and systems, including new methods and tools for design formulation and prediction, hardware virtualization, module and core reuse, design verification and optimization, and programming with high-level languages. With respect to architecture, researchers are working to characterize and optimize new and emerging devices — both fixed and reconfigurable logic — and systems, as well as methods to promote autonomous hardware reconfiguration. Both of these project areas, productivity and architecture, relate well to HPC.

Meanwhile, one of the unique features of some RC devices is their ability to reconfigure portions of the chip's hardware while other portions remain unchanged and thus operational; this powerful feature involves many research and design challenges being studied and addressed by several teams. Last but not least, as process densities increase and devices become more susceptible to faults, environments become harsher, and resources become more prone to soft or hard errors, research challenges arise in fault tolerance. In this area, CHREC researchers are developing device- and system-level RC concepts and architectures to support scenarios that require high performance, versatility, and reliability with low power, cooling, and size, be it for outer space or the HPC computer room.
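For a flavor of what device-level fault tolerance can look like, the sketch below shows triple modular redundancy (TMR), a standard technique for masking soft errors in FPGA logic: three copies of a computation run on independent resources and a bitwise majority voter hides a single faulty replica. It is offered only as a generic illustration, not as CHREC's specific approach, and the computation and fault injection here are hypothetical.

    /* Minimal sketch of triple modular redundancy (TMR), a common
     * fault-tolerance technique for logic exposed to soft errors. Shown
     * only to illustrate the kind of redundancy such research targets. */
    #include <stdint.h>
    #include <stdio.h>

    /* Bitwise majority vote of three replica outputs. */
    static uint32_t tmr_vote(uint32_t a, uint32_t b, uint32_t c)
    {
        return (a & b) | (a & c) | (b & c);
    }

    /* The replicated computation (placeholder: any pure function works). */
    static uint32_t compute(uint32_t x) { return x * x + 1; }

    int main(void)
    {
        uint32_t x  = 7;
        uint32_t r0 = compute(x);
        uint32_t r1 = compute(x) ^ 0x4;   /* inject a single-bit upset in one replica */
        uint32_t r2 = compute(x);
        uint32_t v  = tmr_vote(r0, r1, r2);
        printf("voted result = %u (fault masked: %s)\n",
               (unsigned)v, v == r0 ? "yes" : "no");
        return 0;
    }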
