Three Chinese infrastructure vendors are embracing FPGA technology as a way of accelerating datacenter workloads.
FPGA specialist Xilinx Inc. announced during a developer forum in Beijing this week that Alibaba Cloud, Huawei and server vendor Inspur are rolling out datacenter platforms based on the chip maker’s FPGA-as-a-service model. Among the datacenter workloads being targeted is AI inference, the partners said Tuesday (Oct. 16).
Separately, Xilinx announced a partnership with Amazon Web Services to begin previewing FPGA instances in its Chinese regional hub in Beijing.
Xilinx also used the developer conference to showcase its Versal “adaptive compute acceleration platform” for AI inference along with a datacenter and AI accelerator card dubbed Alveo. The card targets commodity servers running machine learning and data analytics workloads in the cloud and in on-premises datacenters.
Huawei said it is integrating the Alveo accelerator card into its current datacenter offerings with the goal of developing an ecosystem for harmonizing on- and offline platforms.
Inspur, the largest server vendor in China, the fastest-growing server market, said it is qualifying a pair of Alveo accelerator cards for its general purpose and AI servers as well as its GX4 supercomputer “extension box.”
The Xilinx push into datacenters illustrates how it and other FPGA makers such as Intel’s Altera unit are moving to compete with CPU and GPU makers for accelerating machine learning and other data-intensive workloads.
For example, Xilinx partner Alibaba announced a partnership with Intel last year to incorporate Xeon-based servers and software development tools built around Intel’s Arria 10 GX FPGAs into its Aliyun cloud service. The pilot program aims to accelerate cloud-based application performance and provide Aliyun customers with an alternative cloud platform for running business applications along with demanding data and scientific workloads.
Meanwhile, Xilinx continues to extend its partnership with public cloud giant AWS, where the number of regions offering FPGA instances has expanded from a single U.S. region last year to eight international regions. Along with Beijing, F1 instances are currently being previewed in Frankfurt, Germany; London; and Sydney, Australia. F1 refers to a class of cloud instances based on Xilinx FPGAs used to accelerate AI inference, data analytics, video and image processing and other datacenter workloads.
AWS said it is offering 36 FPGA-accelerated applications and libraries in its F1 regions. Among the tools are OpenCL, C and C++ for software developers building custom accelerators, and Verilog and VHDL flows for hardware developers building FPGA accelerators, the partners said.
This article originally appeared in sister publication EnterpriseTech.