AI processors are reinvigorating the global semiconductor industry, prompting at least one market tracker to predict a three-fold increase in AI chip applications over the next five years.
In a survey of AI adoption released this week, IHS Markit predicts AI applications will explode to $128.9 billion by 2025, up from about $42.8 billion in 2019. The AI processor market will expand at a comparable rate, hitting $68.5 billion by the mid-2020s, IHS said.
The booming AI chip market is being fueled by emerging processor architectures spanning GPUs, FPGAs and ASICs tailored for deep learning and vector processing. Meanwhile, leading sectors such as automotive, computing and healthcare are driving a wave of new AI applications.
“AI is already propelling massive demand growth for microchips,” said Luca De Ambroggi, senior research director for AI at IHS Markit. “However, the technology also is changing the shape of the chip market, redefining traditional processor architectures and memory interfaces to suit new performance demands.”
Among those new demands is the growing amount of high-bandwidth volatile memory needed to run deep learning algorithms. Increasing memory bandwidth to handle AI models is driving processor power consumption to “unsustainable levels,” the analyst warned.
In response, De Ambroggi said, new processor architectures are emerging that seek to reduce data movement by placing memory closer to processor cores. That framework accelerates parallel processing “with dedicated memory cells for each processing core,” IHS noted.
Another approach, processing-in-memory, shifts compute tasks directly into the memory array. The idea is to process data where it resides, cutting both power consumption and latency by avoiding round trips to external memory.
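The data-movement argument can be made concrete with a back-of-envelope energy model. The per-operation energy figures below are rough, commonly cited estimates for older process nodes, not numbers from the IHS report, and the workload sizes are hypothetical; the sketch only illustrates why keeping operands near the compute cores pays off.

```python
# Back-of-envelope comparison: energy of a deep-learning layer when operands
# come from off-chip DRAM versus small on-chip memory near the cores.
# All energy-per-operation figures are rough illustrative assumptions.

PJ_DRAM_READ_32B = 640.0   # assumed: off-chip DRAM access, 32-bit word (pJ)
PJ_SRAM_READ_32B = 5.0     # assumed: small on-chip SRAM access (pJ)
PJ_MAC_32B       = 4.0     # assumed: 32-bit multiply-accumulate (pJ)

def layer_energy_pj(macs: int, words_moved: int, pj_per_access: float) -> float:
    """Total energy in picojoules: arithmetic plus memory traffic."""
    return macs * PJ_MAC_32B + words_moved * pj_per_access

# Hypothetical layer: 1M multiply-accumulates touching 2M operand words.
macs, words = 1_000_000, 2_000_000

off_chip = layer_energy_pj(macs, words, PJ_DRAM_READ_32B)
near_mem = layer_energy_pj(macs, words, PJ_SRAM_READ_32B)

print(f"off-chip DRAM: {off_chip / 1e6:.0f} uJ")   # memory traffic dominates
print(f"near-memory:   {near_mem / 1e6:.0f} uJ")
print(f"advantage:     {off_chip / near_mem:.0f}x")
```

Under these assumed figures, memory traffic dwarfs the arithmetic itself, which is the imbalance that both near-memory and in-memory designs target.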
At least one chip startup has emerged over the past year with an architecture that embeds low-power MRAM on an ASIC. Gyrfalcon Technology Inc. said last year that its “production-ready” ASIC uses that on-chip memory for AI processing, reducing data movement in edge devices while accelerating the execution of AI models.
The AI processing-in-memory framework “optimizes the speed of processing, achieving high [theoretical operations per second] performance, while also saving tremendous amounts of power by avoiding management of data in discrete memory components,” the startup said.