The insatiable appetite for higher throughput and lower latency – particularly for edge analytics and AI, network functions, and a range of datacenter acceleration needs – has compelled IT managers and chipmakers to venture increasingly beyond CPUs and GPUs. The ability of FPGAs, with their “inherent parallelism” (see below), to handle specialized workloads in AI- and HPDA-related implementations has drawn greater investment from IT decision makers and vendors, who see increasing justification for the challenge of FPGA programming. Of course, adoption of unfamiliar technologies is always painful and slow, particularly for those without a built-out ecosystem of frameworks and APIs to simplify their use.
Why are FPGAs bursting out of their communications, industrial and military niches and into the datacenter? Partly because of the limits of CPUs, which have their roots on the desktop and were, said Steve Conway, senior research VP at Hyperion Research, never really intended for advanced computing. In time-critical HPDA and AI workload environments, the CPU is out of its league.
“They’re very economical, but kind of a loose fit for a lot of (high performance computing),” said Conway, “loose enough that they’ve left room for other kinds of processors to kind of fill in the gaps where x86 doesn’t really excel.”
The good news is that new technologies are coming online that ease FPGA integration with other parts of the datacenter technology stack, including the CPU workhorse (see below).
Another part of the answer is the FPGA’s aforementioned parallelism. “Parallel processing” usually means breaking a workload into pieces and assigning a large number of CPU nodes to run the job in parallel. But the CPU itself processes sequentially, so algorithmic problems are broken into a series of operations and executed one after another.
That’s inherently slower than an FPGA, which can do parallel processing within the chip itself: an algorithm can be laid out in hardware so that multiple operations happen simultaneously, with much of the computation executing at once.
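The advantage of that spatial layout can be seen in a back-of-the-envelope cycle count. The sketch below is purely illustrative – plain Python arithmetic, not FPGA code – and assumes an idealized pipeline in which every stage takes one cycle and one result emerges per cycle once the pipeline is full:

```python
def sequential_cycles(n_items: int, n_stages: int) -> int:
    # A sequential processor runs each stage of each item one after another.
    return n_items * n_stages

def pipelined_cycles(n_items: int, n_stages: int) -> int:
    # An FPGA lays the stages out in hardware; after the pipeline fills
    # (n_stages cycles), one finished item emerges every cycle.
    return n_stages + (n_items - 1)

print(sequential_cycles(1000, 10))  # 10000 cycles
print(pipelined_cycles(1000, 10))   # 1009 cycles
```

Real speedups depend on clock rates, memory bandwidth and how well the algorithm maps to hardware, but the model captures why streaming workloads are such a natural fit.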
With the deluge of data flooding in from devices and sensors (it is often said that half the world’s data was generated in the last two years, and that only about 2 percent of it has ever been analyzed), FPGAs are particularly well suited to real- and near-real-time streaming data analytics. Here’s how Prabhat K. Gupta, founder and CEO of Megh Computing, a provider of real-time analytics acceleration for retail, finance and communications companies, put it recently in an article (“Why CTOs Should Reconsider FPGAs”) published in sister publication Datanami:
As enterprises transition from reliance on traditional business intelligence to advanced analytics via machine learning and deep learning, the demands on computing infrastructure increase exponentially. These high volumes of streaming data require new levels of performance, including lower latency and higher throughput. In this competitive environment, CTOs must step up their infrastructure and use the most efficient tools and programs available in order to differentiate their enterprises. However, many CTOs are overlooking a key technology needed to do so: field-programmable gate array (FPGA)-based accelerators.
While more organizations embrace “real-time analytics solutions based on a software platform using Kafka, Flink or similar frameworks for streaming the data, and Spark Streaming framework as a distributed framework for processing the data,” Gupta said, they’re encountering scaling problems as they “expand the number of nodes in use to deal with the influx of data… These open source or proprietary software-only solutions cannot keep pace with increasing computational demands and low latencies required to support real-time analytics use cases.”
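The pattern Gupta describes – a framework that windows and aggregates events as they arrive – can be sketched in miniature. The toy below is plain Python, purely illustrative (real deployments use Kafka, Flink or Spark Streaming); the event keys are invented:

```python
from collections import Counter, deque

def windowed_counts(events, window=3):
    """Toy stand-in for a streaming framework: for each incoming event,
    emit the counts of event keys seen in the last `window` events."""
    buf = deque(maxlen=window)  # old events fall off automatically
    for key in events:
        buf.append(key)
        yield Counter(buf)

stream = ["click", "view", "click", "buy"]
results = list(windowed_counts(stream, window=3))
print(results[-1])  # counts over the final window: view, click, buy
```

The scaling problem arises when a software-only loop like this must keep up with millions of events per second across many nodes – exactly the point at which Gupta argues an FPGA pipeline pays off.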
The problem with FPGAs is programming. Patrick Moorhead, president and principal analyst, Moor Insights & Strategy, told us that in a left-to-right list of processor architectures, by degree of programming difficulty, the order would be: CPUs, GPUs, FPGAs and ASICs. FPGA programming is difficult in part because the chips are reprogrammable – but this also is a strength for workloads, such as 5G, in which standards have not yet settled. “You can program an FPGA to do anything; they’re perfect for environments that change.”
Leading the FPGA adoption charge are Intel and Xilinx – not only by advancing FPGA performance but by developing, on their own and within consortia, the support technologies that ease FPGA integration and programmability. Intel jumped into the FPGA arena in 2015 with its acquisition of Altera for $17 billion, and proceeded to build on Altera technology as it developed its FPGA-related offerings. Now, with the launch last week of its Agilex product line, the company has introduced its first Intel-built FPGA processor. It combines FPGA fabric built on Intel’s 10nm process with its heterogeneous 3D SiP technology, which enables integration of analog, memory, custom computing, custom I/O and Intel eASIC device tiles into a single package with the FPGA fabric. The idea, Intel said, is to deliver “a custom logic continuum with reusable IPs through a migration path from FPGA to structured ASIC.”
Agilex incorporates Intel’s second-generation HyperFlex architecture, which the company said delivers up to 40 percent higher throughput than Intel Stratix 10 FPGAs and up to 40 teraflops of digital signal processing (DSP) performance. It supports the higher bandwidth of PCIe Gen 5 and transceiver data rates of up to 112 Gbps, according to Intel. For memory, Agilex supports DDR5, HBM and Intel Optane DC persistent memory.
Beyond more processing power, Intel invested in Agilex’s ability to play well with others.
“Integration across the Intel portfolio is our focus,” said Patrick Dorsey, VP/GM product marketing, Intel Programmable Solutions Group, at the company’s launch event last week in San Francisco, noting that, because Agilex is the first FPGA developed completely by Intel, it’s integrated with Intel hardware and software development capabilities, providing “a common developer experience not only for FPGAs but for the rest of the Intel portfolio.”
The developer capabilities include One API, which Intel calls “a software-friendly heterogeneous programming environment designed to simplify the programming of diverse computing engines….” One API includes a “unified portfolio” of developer tools for mapping software to the hardware that can best accelerate the code. A public project release is expected to be available in 2019, Intel said.
“What I like about what Intel’s doing,” said Moorhead, “is they’re making One API scale across CPUs, GPUs, fixed-function ASICs and FPGAs, so the programmer…would write the program based on the workload and…One API senses the most efficient processor in the system for the workload, (it) sends the work to the right processor. I refer to it as the ‘magic API,’ it’s the holy grail of accelerators because the thing that’s holding back these accelerators is they require a niche programmer.”
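The “magic API” Moorhead describes boils down to a dispatch pattern: match each workload to the most efficient device present, falling back to the CPU. The sketch below is a hypothetical illustration in Python – the workload categories, device names and preference table are all invented, and this is not One API code:

```python
# Hypothetical preference table: workload type -> devices, best first.
# A real runtime would profile hardware rather than hard-code this.
PREFERENCE = {
    "dense-linear-algebra": ["gpu", "cpu"],
    "streaming-bitwise":    ["fpga", "cpu"],
    "fixed-function":       ["asic", "fpga", "cpu"],
}

def dispatch(workload_type: str, available: set) -> str:
    """Pick the most efficient available device for a workload,
    defaulting to the CPU when nothing better is present."""
    for device in PREFERENCE.get(workload_type, ["cpu"]):
        if device in available:
            return device
    return "cpu"

print(dispatch("streaming-bitwise", {"cpu", "fpga"}))  # fpga
print(dispatch("dense-linear-algebra", {"cpu"}))       # cpu
```

The hard part, of course, is not the routing but generating efficient code for each target – which is what makes a unified toolchain the “holy grail.”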
For interconnectivity, Agilex supports the recently announced Intel-led Compute Express Link (CXL), a cache- and memory-coherent interconnect built on the PCI Express infrastructure designed to deliver high-speed communications between the CPU and multiple accelerators, including FPGAs.