Microsoft is jumping on the artificial intelligence bandwagon with the formation of a new research group that will seek to make the technology more accessible via its Azure cloud while helping to deliver new capabilities across applications, services and infrastructure.
The infrastructure portion of the effort focuses on combining processing engines like GPUs and FPGAs with improved network connectivity to boost the performance of AI workloads running on Microsoft’s Azure cloud.
Microsoft said last week Harry Shum, a 20-year company veteran who worked on the Bing search and Cortana intelligent personal assistant projects, would head the AI initiative. More than 5,000 computer scientists and engineers work for Microsoft’s AI and Research Group.
Microsoft’s AI initiative seeks to “democratize” AI technology through a focus on agents, applications, services and infrastructure. Infrastructure goals include building what the company asserts would be the world’s most powerful “AI supercomputer” integrated with its Azure Cloud that would help extend access to more users.
The software giant and cloud competitor said the AI supercomputer would combine FPGA and GPU silicon with the Azure cloud as Microsoft researchers look to a “post-Moore’s Law” era, addressing the issue of Moore’s Law running out of steam.
The emerging cloud platform would use an “FPGA fabric” tied to GPU processing to speed applications like machine translation and Bing search queries. “Azure is using this technology for accelerated networking and for a virtual machine that can drive 25 gigabits per second throughput at a 10X reduction in latency,” the company claimed. “Every time you do a Bing query, you’re touching this fabric to get better results.”
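To put the quoted figures in perspective, a back-of-envelope calculation shows what 25 Gb/s of per-VM throughput means for moving data. This is an illustrative sketch only: the 10 Gb/s baseline rate is an assumption for comparison, not a number from Microsoft, and the calculation ignores protocol overhead.

```python
# Illustrative arithmetic for the quoted Azure networking numbers.
# Assumption: a 10 Gb/s conventional VM NIC as the comparison baseline.

def transfer_seconds(payload_bytes: int, gbps: float) -> float:
    """Time to move a payload at a given line rate (ignores protocol overhead)."""
    bits = payload_bytes * 8
    return bits / (gbps * 1e9)

payload = 1024**3  # 1 GiB

t_base = transfer_seconds(payload, 10)  # assumed baseline rate
t_fpga = transfer_seconds(payload, 25)  # rate quoted by Microsoft

print(f"1 GiB at 10 Gb/s: {t_base:.3f} s")
print(f"1 GiB at 25 Gb/s: {t_fpga:.3f} s")
```

At these rates, moving a gigabyte-scale model shard drops from roughly 0.86 seconds to about 0.34 seconds, which compounds quickly across the many node-to-node transfers in a distributed AI workload.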
That approach would help drive emerging AI services that Microsoft said would require new approaches like combining an FPGA fabric with Azure, a cloud architecture that would allow the AI platform to “talk to the network directly.”
Meanwhile, AI researchers are adding GPUs to the cloud mix as a way to achieve scale and boost processing performance.
The company added that customers are currently running CPU-based virtual machines on the cloud architecture to scale production workloads. The addition of FPGAs is credited with improving network performance in the cloud, boosting throughput for many workloads.
The AI initiative led by Shum, executive vice president of the Microsoft AI and Research Group, reflects efforts to expand the 25-year-old Microsoft Research unit to develop disruptive technologies. In the case of infrastructure, the effort looks to combine emerging processing architectures centered on GPUs and FPGAs to create more use cases for the Azure Cloud, which remains a distant second in the public cloud race to market leader Amazon Web Services.
Along with the cloud infrastructure push, Microsoft’s AI effort also includes a group focusing on furthering Bing search and Cortana development along with robotics and what the company referred to as “ambient computing.” Meanwhile, AI services efforts will focus on cognitive capabilities ranging from vision and speech to machine analytics while making those services more widely available to developers.