Even though it seems simple now, learning to ride a bike meant mastering a lot of skills. From balancing on two wheels and steering in a straight line, to going around corners and stopping before running over the dog, it took plenty of practice. If you were lucky enough to have training wheels, you could learn safely without falling and then ride confidently when they came off.
Organizations embarking on Artificial Intelligence (AI) projects also start with training wheels, then progress through three stages on their journey to enterprise-scale AI. In the first stage, individual data scientists experiment and "practice" on proof of concept (PoC) projects. Fairly quickly, these PoCs hit knowledge, data management and infrastructure performance obstacles that keep them from proceeding to the next stage: stabilization and production. In this second stage, multiple data scientists produce optimized and trained models quickly enough to deliver value to the organization. Moving to the third and final stage of AI adoption, where AI is integrated across multiple lines of business and requires enterprise-scale infrastructure, presents significant integration, security and support challenges.
Wells Fargo has successfully navigated this new world of AI, using deep learning models to comply with a critical financial validation process. Its data scientists build, enhance, and validate hundreds of models each day, and both speed and scalability are critical as they deal with greater amounts of data and more complicated models. As Richard Liu, Quantitative Analytics manager at Wells Fargo, said at IBM Think, "Academically, people talk about fancy algorithms. But in real life, how efficiently the models run in distributed environments is critical." Wells Fargo uses IBM's enterprise AI software platform for the speed and the resource scheduling and management functionality it provides. "IBM is a very good partner and we are very pleased with their solution," added Liu.
When a large Canadian financial institution wanted to build an AI Center of Competency for 35 data scientists to help identify fraud, minimize risk, and increase customer satisfaction, it turned to IBM. By deploying the IBM Systems AI Infrastructure Reference Architecture, the institution now provides distributed deep learning as a service, designed to give each data scientist an easy-to-deploy, unique environment across shared resources.
When training wheels are not enough
Unlike riding a bike, moving from AI practice (PoC) to stabilization and production is not just a matter of taking off the training wheels. It requires a whole new set of skills and infrastructure to propel the organization up the AI hills and navigate AI hazards. Unfortunately, few people have the knowledge and experience needed to ride on their own without training wheels.
To help fill this knowledge and skills gap, IBM has built PowerAI Enterprise – an easy-to-use, integrated set of tools to get AI open source frameworks up and running quickly and accelerate AI adoption across an organization. These tools utilize cognitive algorithms and automation to dramatically increase the productivity of data scientists throughout the AI workflow.
Ritu Jyoti, Vice President of Cloud IaaS, Enterprise Storage and Server research at IDC, noted, "IBM has one of the most comprehensive AI solution stacks that includes tools and software for all the critical personas of AI deployments, including the data scientists. Their solution helps reduce the complexity of AI deployments and helps organizations improve productivity and efficiency, lower acquisition and support costs, and accelerate adoption of AI."
PowerAI Enterprise is part of IBM’s tested, validated and optimized on-premises AI infrastructure reference architecture designed to help organizations jump-start AI and deep learning projects, and remove the obstacles to moving from experimentation to production and ultimately to enterprise-scale AI.
Get started quickly
PowerAI Enterprise shortens the time it takes to get up and running with an AI environment that supports the data scientist from data ingest and preparation, through training and optimization, and finally to testing and inference. Included are fully compiled, ready-to-use, IBM-optimized versions of popular open source deep learning frameworks (including TensorFlow and IBM Caffe), as well as a software framework designed to support distributed deep learning and scale to hundreds or even thousands of nodes. The whole solution comes with support from IBM, including for the open source frameworks.
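The workflow stages named above — data ingest and preparation, training and optimization, then testing and inference — can be illustrated in a framework-agnostic way. The sketch below is a hypothetical, minimal NumPy example (synthetic data, simple logistic regression); it is not PowerAI Enterprise code, only an outline of the stages a deep learning framework automates at much larger scale.

```python
import numpy as np

# --- Data ingest and preparation ---
# Synthetic stand-in data; a real pipeline would pull from
# enterprise data sources and clean/normalize it here.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)  # simple separable labels

# --- Training and optimization ---
# Logistic regression fit by plain gradient descent on log loss.
w, b, lr = np.zeros(2), 0.0, 0.1
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid predictions
    w -= lr * (X.T @ (p - y)) / len(y)      # gradient w.r.t. weights
    b -= lr * np.mean(p - y)                # gradient w.r.t. bias

# --- Testing and inference ---
preds = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(float)
accuracy = float(np.mean(preds == y))
print(f"training accuracy: {accuracy:.2f}")
```

In a production setting, frameworks such as TensorFlow replace the hand-written gradient step with automatic differentiation, and a distributed scheduler spreads the training loop across many nodes — which is the part the platform described above manages.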
With IBM PowerAI Enterprise and the IBM Systems AI Infrastructure Reference Architecture, data scientists can confidently take off the AI training wheels, with less focus on the infrastructure mechanics and more on the AI journey and destination.
Learn more about the IBM Systems AI Infrastructure Reference Architecture and IDC’s review of the architecture here.