Why Field Programmable Gate Arrays (FPGAs) are the Versatile Accelerator

By Bill Mannel

October 8, 2018

The invention and development of Central Processing Units (CPUs) have certainly played pivotal roles in the trajectory of human history. It is fair to say that Intel’s development of the CPU has led to the democratization of computing and enabled countless innovations, large and small.

As with all things, further specialization is possible. Acceleration of certain workloads can be achieved by further specializing the processing units themselves. Graphics Processing Units (GPUs), for example, were originally created to accelerate graphics workloads. GPUs are now being used for other tasks, such as bitcoin mining.

For clarity, let’s compare CPUs and GPUs: a CPU is a general-purpose processor, designed to run the broad range of operations an entire system needs, such as I/O and virtual memory management. GPUs, by contrast, are designed for repetitive tasks that can be massively parallelized, as the short sketch below illustrates.
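To make that distinction concrete, here is a minimal C sketch of the kind of embarrassingly parallel operation GPUs excel at. Every loop iteration is independent, so on a GPU each element could be handled by its own thread, while a CPU core works through the loop largely sequentially.

```c
#include <stddef.h>

/* Element-wise vector addition: a classic data-parallel workload.
 * Each iteration is independent of the others, so on a GPU every
 * element could be computed by its own thread in parallel; a CPU
 * core runs the loop (largely) one element at a time. */
void vector_add(const float *a, const float *b, float *out, size_t n)
{
    for (size_t i = 0; i < n; ++i)
        out[i] = a[i] + b[i];   /* no cross-iteration dependency */
}
```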

Focusing on Field Programmable Gate Arrays (FPGAs)

While GPUs are good at what they do, their strengths are biased toward very particular types of processes. Another, more versatile type of accelerator, the Field Programmable Gate Array (FPGA), has seen further development by Intel and offers customizable, gate-array-based, multi-functional acceleration. In fact, an FPGA is designed to be configured by a customer or designer after manufacturing; hence the name “field-programmable.” An FPGA offers high I/O bandwidth plus fine-grained, flexible, custom parallelism, allowing it to be programmed for many different types of workloads, including Big Data analytics, financial services and deep learning. If a GPU is something like a hammer, an FPGA is like Doctor Who’s sonic screwdriver, an adaptable tool that can be used to solve many different types of problems.

HPE has teamed up with Intel to offer FPGA solutions based on HPE ProLiant DL Gen10 servers, including the HPE ProLiant DL360 and DL380 server platforms with Intel® Arria® 10 GX FPGAs. The HPE ProLiant DL360 is a dense 1U dual-processor server with exceptional flexibility and expandability, while the HPE ProLiant DL380 is a 2U dual-processor server with world-class performance and versatility for multiple workloads. HPE servers also offer a unique Silicon Root of Trust to protect against firmware-based cybersecurity threats. The combination of HPE servers with Intel FPGAs provides flexible, industrial-strength compute solutions that can be tuned for specific workloads.

One of the traditional difficulties with FPGAs has been the specialized programming they require, which in many cases has put the technology out of reach of data scientists and application developers. Intel has developed the Acceleration Stack for Intel Xeon CPU with FPGAs to provide a common developer interface for both application and accelerator function developers; the stack includes drivers, Application Programming Interfaces (APIs) and an FPGA Interface Manager. Together with acceleration libraries and development tools, Intel’s Acceleration Stack enables developers to focus on the unique value-add of their solutions.

Intel has also open-sourced the Open Programmable Acceleration Engine (OPAE) technology, a software programming layer that provides a consistent API across Intel FPGA platforms. It is designed for minimal software overhead and latency, while providing an abstraction for hardware-specific FPGA resource details. OPAE is the default software stack for the Intel® Xeon® processor with both integrated and discrete FPGA devices.
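As a rough illustration of what that abstraction looks like from the application side, the sketch below uses OPAE’s C enumeration calls to discover and open an accelerator resource. It is a minimal sketch based on the publicly documented OPAE API; exact names and error handling vary across SDK versions, so treat it as illustrative rather than definitive.

```c
#include <stdio.h>
#include <stdint.h>
#include <opae/fpga.h>   /* OPAE C API (open-source Intel FPGA stack) */

int main(void)
{
    fpga_properties filter = NULL;
    fpga_token      token;
    fpga_handle     handle;
    uint32_t        matches = 0;

    /* Build a filter that matches accelerator (AFU) resources. */
    if (fpgaGetProperties(NULL, &filter) != FPGA_OK)
        return 1;
    fpgaPropertiesSetObjectType(filter, FPGA_ACCELERATOR);

    /* Enumerate devices matching the filter. */
    if (fpgaEnumerate(&filter, 1, &token, 1, &matches) != FPGA_OK
        || matches == 0) {
        fprintf(stderr, "no FPGA accelerator found\n");
        fpgaDestroyProperties(&filter);
        return 1;
    }

    /* Open the accelerator; a real application would now share
     * buffers with, and drive, its accelerator function unit. */
    if (fpgaOpen(token, &handle, 0) == FPGA_OK) {
        printf("FPGA accelerator opened\n");
        fpgaClose(handle);
    }

    fpgaDestroyToken(&token);
    fpgaDestroyProperties(&filter);
    return 0;
}
```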

How simplifying FPGA programming plays directly to their strengths

An FPGA can be reprogrammed and updated with new algorithms for different workloads. This flexibility allows a single FPGA to accelerate many different workloads efficiently, and to support future applications without any change to the hardware. For instance, an FPGA could handle one workload during the morning shift and a different workload during the evening shift (see the sketch below). Programmability also allows FPGAs to keep pace with evolving standards, such as networking protocols: when a standard is finalized, an update can bring the device into compliance, again without respinning the hardware.
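Here is a hypothetical sketch of that shift-based pattern, with the host selecting a different FPGA image by time of day. The bitstream file names and the reprogram_fpga() helper are invented stand-ins for whatever reconfiguration entry point the vendor stack provides (OPAE, for instance, exposes reconfiguration through its C API).

```c
#include <stdio.h>
#include <time.h>

/* Hypothetical helper: a real implementation would stream the
 * bitstream image to the device via the vendor reconfiguration
 * API. This stub just reports what it would load. */
static int reprogram_fpga(const char *bitstream_path)
{
    printf("(stub) loading bitstream %s\n", bitstream_path);
    return 0;
}

int main(void)
{
    time_t now = time(NULL);
    struct tm *t = localtime(&now);

    /* Same card, different personality: analytics by day,
     * a different accelerator function (for example) by night. */
    const char *image = (t->tm_hour >= 8 && t->tm_hour < 20)
                          ? "analytics_afu.gbs"
                          : "nightly_batch_afu.gbs";

    if (reprogram_fpga(image) != 0) {
        fprintf(stderr, "reconfiguration failed: %s\n", image);
        return 1;
    }
    printf("FPGA now running %s\n", image);
    return 0;
}
```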

An FPGA can also switch between multiple programs in real time to adapt to changing workloads. One example is the Bigstream Acceleration solution, which accelerates Spark performance using Bigstream software in conjunction with an Intel FPGA. Bigstream reconfigures the FPGA to best fit the dataflow being processed, yielding up to 8x performance acceleration for end-to-end applications, with the potential for higher acceleration in future releases.* This adaptability makes FPGAs largely future-proof, and it enhances the ROI of the servers that use them by extending their lifecycle.

How FPGA performance gains increase productivity and boost ROI

Data demands on IT are continually increasing, and relational databases, including Microsoft SQL Server and PostgreSQL, continue to be the backbone of enterprise-class data analytics. Swarm64 offers an innovative add-on to PostgreSQL, the S64 Data Accelerator for PostgreSQL (S64DA), which delivers up to 4x data warehouse acceleration with no changes to the BI application. The S64DA solution is designed to significantly increase data processing and analytics performance for demanding workloads, using Intel FPGAs to overcome the latency and bandwidth limitations of storage accessed over a network, whether locally or from the cloud. Intel FPGAs can connect directly to networks, removing the need for data to pass through the processor and reducing overall system latency. Leveraging the highly parallel nature of FPGAs with optimized, workload-specific programming provides productivity gains for high-value workloads.

How partners and solutions are leveraging FPGAs

Financial industry

Another example of how Intel FPGAs increase productivity comes from Levyx and its Financial Risk Analytics Acceleration solution. By optimizing the performance of the underlying storage, Levyx offloads compute-intensive functions directly onto FPGAs, accelerating large-scale operations that were previously time- and resource-intensive, such as backtesting of stock and options trading algorithms at financial institutions. Backtesting is a highly parallel, data- and compute-intensive simulation workload over large, multi-terabyte datasets. It is used to evaluate thousands of trading models, identifying those that have been historically profitable in order to determine the trading practices most likely to maximize current and future profitability.

To stay ahead of the competition, these models must continually evolve and be rapidly evaluated for algorithmic trading success. Their efficacy can have a significant impact on trading revenues at capital markets firms, including money-center banks, large hedge funds and trading exchanges. Levyx effectively allows critical backtesting functions to be performed 851% faster than competing solutions.** For low-latency, compute-intensive workloads over massive datasets like these, the performance, flexibility and programmability of Intel FPGAs have a direct impact on the productivity and revenue of Levyx customers.
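To see why backtesting parallelizes so naturally, consider the toy sketch below (not Levyx’s implementation): many candidate models are scored against the same read-only price history, and because each model is independent, the work fans out cleanly across threads, and by extension across the parallel processing elements of an FPGA. The strategy, thresholds and prices are invented for illustration.

```c
#include <pthread.h>
#include <stdio.h>

#define NUM_MODELS 4
#define NUM_TICKS  8

/* Shared, read-only price history (in production: terabytes). */
static const double prices[NUM_TICKS] = {
    100.0, 101.5, 99.8, 102.2, 103.0, 101.1, 104.6, 105.2
};

typedef struct {
    double threshold;   /* toy model parameter */
    double pnl;         /* simulated profit and loss */
} model_t;

/* Each thread backtests one model against the full history.
 * Toy strategy: "buy" when the price rises by more than the
 * model's threshold, realize the gain or loss on the next tick. */
static void *backtest(void *arg)
{
    model_t *m = arg;
    m->pnl = 0.0;
    for (int i = 1; i + 1 < NUM_TICKS; ++i)
        if (prices[i] - prices[i - 1] > m->threshold)
            m->pnl += prices[i + 1] - prices[i];
    return NULL;
}

int main(void)
{
    model_t models[NUM_MODELS] = {
        {0.5, 0}, {1.0, 0}, {1.5, 0}, {2.0, 0}
    };
    pthread_t tid[NUM_MODELS];

    /* Models are independent: perfect fan-out parallelism. */
    for (int i = 0; i < NUM_MODELS; ++i)
        pthread_create(&tid[i], NULL, backtest, &models[i]);
    for (int i = 0; i < NUM_MODELS; ++i)
        pthread_join(tid[i], NULL);

    for (int i = 0; i < NUM_MODELS; ++i)
        printf("model %d (threshold %.1f): pnl %.2f\n",
               i, models[i].threshold, models[i].pnl);
    return 0;
}
```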

Power savings

Because FPGAs can be optimized for specific workloads, the resulting efficiency leads to lower power consumption. This allows FPGAs to be added to existing infrastructure to increase performance while minimizing the additional space and power required. Lower power consumption also reduces heat within the data center, so further savings come from minimizing the overall power needed for a given performance level. Multiplied across an entire data center, with attendant reductions in power and cooling costs, these savings help FPGAs minimize TCO by reducing OPEX.

AI and deep learning

In the rapidly developing field of AI and deep learning, FPGAs are being recognized as a solution for inferencing, the runtime application of a trained deep learning model. In the training cycle, a neural network model is “taught” to recognize a pattern, such as images of cats. Inferencing occurs when the trained network is shown a new image and signals whether or not it contains a cat. In other words, training develops the model, while inferencing applies it at runtime.
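In code terms, the split looks something like the toy single-neuron classifier below: the weights are the frozen output of a prior training run, and inference is simply a forward pass that applies them to a new input. The weights and features here are invented for illustration and do not correspond to any particular framework.

```c
#include <math.h>
#include <stdio.h>

#define NUM_FEATURES 3

/* Weights produced by a (hypothetical) prior training run: the
 * "model". Inference reuses them; it never changes them. */
static const double weights[NUM_FEATURES] = { 0.8, -0.4, 1.2 };
static const double bias = -0.3;

/* Inference: one fixed forward pass over a new input. This small,
 * repetitive computation is the kind of operation an FPGA can
 * implement directly in hardware. */
static int is_cat(const double features[NUM_FEATURES])
{
    double z = bias;
    for (int i = 0; i < NUM_FEATURES; ++i)
        z += weights[i] * features[i];
    double p = 1.0 / (1.0 + exp(-z));   /* sigmoid activation */
    return p > 0.5;                     /* 1 = "cat", 0 = "not cat" */
}

int main(void)
{
    const double image_features[NUM_FEATURES] = { 0.9, 0.2, 0.7 };
    printf("cat? %s\n", is_cat(image_features) ? "yes" : "no");
    return 0;
}
```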

Inferencing requires low-latency performance, efficiency and flexibility. FPGAs offer a highly parallel architecture coupled with high-bandwidth memory, providing the low-latency performance required for real-time inferencing. FPGAs effectively implement software algorithms in hardware for optimized performance, while also providing the energy efficiency to minimize deployment power requirements. Inferencing is typically a narrowly defined task, such as facial recognition or language translation, and such tasks map well to the strengths of FPGAs.

Accelerating business-critical workloads with FPGA solutions

The collaboration between HPE and Intel provides industrial-strength FPGA solutions that accelerate business-critical workloads. The supporting software ecosystem is developing quickly enough to continuously add value for customers across an ever-expanding range of use cases. The performance, adaptability and power efficiency of FPGAs increase productivity and drive innovation, delivering rapid ROI and minimized TCO.

Learn more about FPGA solutions

For further information, please visit the Intel FPGA Acceleration Hub.

See HPE FPGA solutions at HPE-Cast Japan, the HPE HPC and AI Forum held on September 7. (Note: The HPE-Cast web page is in Japanese.)
