As you continue to dive deeper into AI, you will discover it is more than just deep learning. AI spans a complex set of machine learning, deep learning, reinforcement learning, and analytics algorithms with varying compute, storage, memory, and communications needs. AI models are growing in complexity, and real-world deployments need not only training but inference.
Future-proof your data center with modernization investments that address the diverse requirements of a broad range of analytics workloads, including AI. A modern infrastructure built on industry-standard hardware will also help maximize utilization to meet your TCO objectives and eliminate the complexity introduced by new architectures.
With this in mind, HPE and Intel have partnered to broaden the portfolio, because we know this shift is occurring and the need for a variety of choices and solutions is imminent.
Accelerate AI applications
Just as you consider which superhero costume you want to wear on Halloween, you have to explore the right hardware for your deep learning training. You must consider how often you need to train, what type of data you have (structured, unstructured, images, voice, text, etc.), and how much time you can tolerate between each run.
For example, some accelerators work well for tasks like image recognition and were once the only option for accelerating deep learning training. However, for memory-intensive workloads (including massive amounts of unstructured data), sparse data, and infrequent (even annual) training exercises, CPUs perform well.
When supported on the HPE Apollo and HPE ProLiant families of servers, Intel® Xeon® Scalable processors bring substantial improvements in software optimizations and hardware instructions, so more complex, hybrid applications can be accelerated, including larger, memory-intensive models. Deep learning applications can run alongside other applications on the same analytics infrastructure for higher overall utilization. On-premises or in the cloud, AI can be done well on the architecture you already know. Upgrading to Intel® Xeon® Scalable processors in your data center lets you maximize utilization of existing, familiar infrastructure by running high-performance data center and AI applications side by side.
In addition to CPUs, HPE will be supporting Field Programmable Gate Arrays (FPGAs). Complementary to CPUs, FPGAs allow specific workloads to be accelerated (e.g., database acceleration, financial back-testing of trading algorithms, and Big Data processing). FPGAs offer significantly reduced power usage, increased speed, lower materials cost, a minimal implementation footprint, and the possibility of on-the-fly reconfiguration to run different algorithms that can be changed in real time. Be on the lookout for HPE's announcement at SC'18 of the availability of the next-generation Intel Arria FPGA and its support on select HPE ProLiant Gen10 servers.
Scale with a high speed interconnect
With trick-or-treating, the bigger the bag, the more candy you can carry. The same applies to a high-speed interconnect: it is critical to scale and push data to the servers, CPUs, and FPGAs that crunch data for deep learning algorithms. When AI systems grow and scale up, as they often do, the fabric that stitches the system together must grow seamlessly too, maintaining its speed, security, agility, versatility, and robustness throughout.
Intel® Omni-Path Architecture (Intel® OPA) is a high-speed interconnect originally developed for high-performance computing (HPC) clusters; its efficiency and speed improve scalability and increase density while reducing latency, cost, and power on the frontiers of AI as well. Moreover, clusters built with Intel OPA can occupy a versatile niche, running HPC workloads during the day and compute-intensive deep learning training workloads at night.
Because HPE Apollo Systems, the HPE SGI 8600, and HPE ProLiant servers interface seamlessly with the Intel OPA fabric, you now have an interconnect solution that spans entry-level clusters all the way through to supercomputers. With power-efficient and price-optimized solutions, HPE and Intel OPA can meet the needs of HPC and AI customers. Whether you are seeking an entry-level, air-cooled, rack-scale system or a high-end liquid-cooled system, HPE has the HPC and AI solution for you, with Intel OPA optimized across its portfolio.
Open software, libraries, and tools to speed deployment and optimize performance
Frameworks and libraries are of the utmost importance in moving AI forward. Application developers need software tools that are easy to use, speed up the workflow, and come with ecosystem support that helps them through the rough patches. Intel's open-source deep learning optimization library, MKL-DNN, is integrated with popular deep learning frameworks like TensorFlow* and MXNet* to consistently deliver more optimizations and performance as the software evolves along with the AI landscape. Optimizations across hardware and software have dramatically extended the capabilities of Intel® Xeon® Scalable platforms for deep learning, already resulting in more than 240x performance gains for training and nearly 280x for inference across many popular frameworks[i].
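Gains like these come largely from optimized low-level primitives such as GEMM (general matrix multiply), which dominates deep learning training and inference time. As an illustrative sketch (not HPE or Intel benchmark code), here is a minimal Python timing of a GEMM using NumPy; when NumPy is linked against an optimized BLAS such as Intel MKL, the same call runs far faster than an unoptimized implementation, which is the kind of speedup MKL-DNN brings to framework-level operators:

```python
import time
import numpy as np

def time_gemm(n=512, repeats=5):
    """Time an n x n single-precision matrix multiply, the core
    primitive behind most deep learning workloads."""
    a = np.random.rand(n, n).astype(np.float32)
    b = np.random.rand(n, n).astype(np.float32)
    np.dot(a, b)  # warm up so library initialization is not measured
    start = time.perf_counter()
    for _ in range(repeats):
        np.dot(a, b)
    elapsed = (time.perf_counter() - start) / repeats
    # A dense n x n multiply performs roughly 2 * n^3 floating-point ops.
    gflops = 2 * n**3 / elapsed / 1e9
    return elapsed, gflops

elapsed, gflops = time_gemm()
print(f"512x512 GEMM: {elapsed * 1e3:.2f} ms/run, {gflops:.1f} GFLOP/s")
```

Comparing the reported GFLOP/s between a stock build and an MKL-linked build of NumPy on the same Xeon server is a quick, informal way to see the effect of optimized math libraries before benchmarking a full framework.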
HPE has validated Intel's MKL-DNN-optimized open-source frameworks (TensorFlow, MXNet, and OpenVINO) on HPE Apollo and HPE ProLiant systems; they can be downloaded directly from the Intel sites listed below:
HPE and Intel: Better together
No one wants the surprise of a yucky apple in their trick-or-treat bag! That is why HPE and Intel have partnered to take the surprises out of your next AI project with the release of new Intel CPU-based AI Inference Bundles from HPE[ii]. These AI solutions are based on the HPE ProLiant DL360 Gen10 compute platform, Intel® Xeon® Scalable processors, and the Intel® OPA fabric, along with optional downloadable MKL-DNN-optimized open-source frameworks (TensorFlow, MXNet, and OpenVINO).
So whether you are just getting started or are looking to scale, HPE and Intel can help ensure your AI project is not bewitched. Let us take the fear out of your next AI project! We have something that will fit your needs.
For additional information, please reach out to your HPC/AI specialist or contact [email protected], and someone will be in touch with you.