Avoid AI Redos by Starting with the Right Infrastructure

By Addie Anderson, IBM Cognitive Infrastructure

June 18, 2019

Do you know if you have the right infrastructure for AI? Many organizations don’t have it. In a recent IDC survey, “77.1% of respondents say they ran into one or more limitations with their AI infrastructure on-premise and 90.3% ran into compute limitations in the cloud.” According to IDC, AI efforts risk failure due to limitations from choosing the wrong infrastructure. In fact, IDC “has seen businesses completely overhaul their infrastructure, sometimes twice in just a few years” due to such limitations.

Many organizations have moved beyond the ‘AI hype’ and understand the value of AI to unlock insights and drive better business decisions. But many of those same businesses don’t know where to start. And when they do start, their AI experiments hit infrastructure limitations and fail. So how can organizations adopt and scale AI without failure?

That can be a challenging question to answer; AI comes with many questions, most commonly about infrastructure requirements.

Perhaps you’re one of the few who have made it over the initial AI hurdle. You’ve successfully completed a few AI projects, have been able to demonstrate the use cases and business impact, and now your line of business has tasked you with scaling and operationalizing AI. That’s a lot of pressure on your shoulders. Where do you start?


[Read more about accelerating the power of AI.]


Responding to Limitations

Per IDC, the top three limitations encountered with on-premises AI server infrastructure are difficulty managing, difficulty scaling and performance bottlenecks. For cloud deployments, the top limitations are difficulty scaling, performance bottlenecks and insufficient storage.

IDC commonly sees organizations moving to greater processor performance and I/O bandwidth, along with the addition of accelerators.

This is because processor performance is key for AI. IDC says greater performance means faster and more accurate results. But with the end of Moore’s Law, increasing CPU performance is getting harder to achieve. IDC sees the fundamental reason for the limitations that businesses encounter as “core starvation”: a big gap exists between the actual and needed number of cores, and organizations are turning to accelerators to fill the gap. In fact, IDC expects the use of accelerators to boost AI workload performance and to become a permanent aspect of computing.


[Learn how to avoid squandering your most valuable AI resource.]


Do you have the purpose-built infrastructure with “strong processors, powerful co-processors, fast interconnects, large I/O bandwidth, and plenty of memory” that deep learning demands?

Focusing on infrastructure can help you avoid the pitfalls of AI. Purpose-built infrastructure, such as the IBM Power System AC922, delivers the CPU performance, GPU acceleration and I/O bandwidth you need to scale your AI initiatives.

To help you get your AI initiatives off to the right start, IBM has sponsored an IDC white paper, “Ready for Scaling Your AI Initiative? Don’t Get Core Starved”. Read the analyst white paper to learn how you can avoid AI redos by starting with the right infrastructure.


HPCwire