FRANKFURT, Germany, June 19, 2019 — On June 17, at the 2019 International Supercomputing Conference (ISC19) in Frankfurt, Inspur announced its breakthrough AI and HPC appliance, based on a technology collaboration with Intel, to provide high-performance computing (HPC) and artificial intelligence (AI) users worldwide with flexible, efficient, and easy-to-use converged infrastructure.
Recently, AI has become a new class of HPC workload. Technologies from the HPC field, such as multi-machine parallelism, high-speed low-latency networks, and scheduling algorithms, can greatly reduce the burden of managing and using AI clusters. However, because AI and HPC differ in workload characteristics, programming models, and development practices, integrating and utilizing resources has become a common challenge for AI and HPC users. Inspur believes that only by optimizing computing performance, platform scalability, and system design can the problems raised by AI and HPC convergence be effectively solved.
The AI and HPC appliance released by Inspur integrates the latest high-performance computing technologies from Intel and optimized software. With its containerized software stack and flexible node design, it can efficiently support different workloads of AI and HPC on one computing platform, accelerating AI and HPC R&D and application innovation.
Inspur’s AI and HPC appliance employs the i48 multi-node computing platform as its compute nodes, supporting 16 of the latest 2nd Generation Intel® Xeon® Scalable processors and the Intel® Omni-Path architecture in 4U. With built-in support for the Intel® Advanced Vector Extensions 512 (Intel® AVX-512) and Intel® Deep Learning Boost instruction sets, it can provide hybrid deployment options that adapt to the computing, network, and storage requirements of different workloads. The 2nd Generation Intel Xeon Scalable processor features Intel Deep Learning Boost (Intel DL Boost) technology, which delivers up to 14% higher AI performance than the previous-generation Intel Xeon Scalable processor.
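Whether a given node actually exposes these instruction sets can be checked from the CPU flags the operating system reports. The sketch below is a minimal illustration and not part of Inspur's or Intel's tooling: it parses a Linux-style `/proc/cpuinfo` flags line for the AVX-512 foundation and DL Boost (`avx512_vnni`) features; the sample flags string is an assumption for demonstration.

```python
# Minimal sketch (not vendor tooling): detect AVX-512 and Intel DL Boost
# (VNNI) support from a Linux /proc/cpuinfo-style "flags" line.

def supported_features(flags_line):
    """Return which AI-relevant ISA extensions appear in a CPU flags string."""
    flags = set(flags_line.split())
    return {
        "avx512f": "avx512f" in flags,          # AVX-512 foundation
        "avx512_vnni": "avx512_vnni" in flags,  # DL Boost (Vector Neural Network Instructions)
    }

def read_cpu_flags(path="/proc/cpuinfo"):
    """Read the first 'flags' line from cpuinfo (Linux only)."""
    with open(path) as f:
        for line in f:
            if line.startswith("flags"):
                return line.split(":", 1)[1]
    return ""

if __name__ == "__main__":
    # Illustrative flags line such as a 2nd Gen Xeon Scalable CPU might report:
    sample = "fpu sse sse2 avx avx2 avx512f avx512_vnni"
    print(supported_features(sample))
```

On a live system, `supported_features(read_cpu_flags())` performs the same check against the actual hardware.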
On the software side, Inspur’s AI and HPC appliance integrates Inspur’s AI Station artificial intelligence development platform and the Teye application feature analysis tool. Together they provide a one-stop solution covering data processing, model development, model training, and resource scheduling; unify the management, scheduling, and monitoring of computing resources; improve computing efficiency; and help researchers and data scientists develop and train deep learning models. At the same time, Intel has optimized a number of deep learning frameworks: Intel® Optimization for TensorFlow enhances the ease of use and scalability of modern deep neural networks, Intel® Optimization for Caffe accelerates one of the most popular image recognition frameworks, and the Intel® Math Kernel Library provides built-in support for the MXNet deep learning framework.
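Much of the tuning that such Intel-optimized frameworks rely on is exposed through environment variables set before the framework is imported. The fragment below is a hedged sketch of commonly documented oneDNN/OpenMP knobs; the specific values are illustrative assumptions that depend on the workload and core count, not Inspur or Intel recommendations.

```python
import os

# Commonly documented tuning knobs for Intel-optimized TensorFlow builds;
# the values below are illustrative assumptions, not vendor recommendations.
os.environ["TF_ENABLE_ONEDNN_OPTS"] = "1"  # enable oneDNN-accelerated primitives
os.environ["OMP_NUM_THREADS"] = "16"       # OpenMP worker threads per process
os.environ["KMP_BLOCKTIME"] = "1"          # ms a thread spins before sleeping
os.environ["KMP_AFFINITY"] = "granularity=fine,compact,1,0"  # pin threads to cores

# TensorFlow (if installed) must be imported *after* these are set:
# import tensorflow as tf
```

Because these variables are read once at framework initialization, setting them afterward has no effect, which is why they appear before the (commented-out) import.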
“Currently, the integration of artificial intelligence and high-performance computing is redefining the IT infrastructure,” said Peter Peng, Vice President of Inspur Group. “The software-defined converged infrastructure with reconfigured hardware will become one of the most important computing paradigms in the future. With the innovative AI and HPC appliance, Inspur is delivering a flexible and efficient unified computing platform for high-performance computing and artificial intelligence to users worldwide, and allowing them to achieve flexible switching between scientific computing and artificial intelligence computing, which are two different but closely related workloads.”
“The convergence of traditional HPC and AI represents a massive paradigm change in the field of computing,” said Rajeeb Hazra, corporate vice president and general manager of the Enterprise and Government Group at Intel. “Working with innovators like Inspur, we will equip scientists and researchers with the tools they need to take on the world’s greatest computing challenges.”