HPE today announced the latest rev of its HPE Apollo 6500 platform, Gen10, along with a spate of new AI-oriented offerings designed to help customers optimize and scale up their AI and deep learning usage.
Like its Gen9 predecessor, HPE’s Apollo 6500 platform with the XL270d Gen10 server supports up to eight Pascal or Volta Nvidia GPUs (P40s, P100s and V100s, the latter added last fall), but the new server enables NVLink-optimized configurations and brings in Skylake, with options for Intel Xeon 6100- and 8100-series processors of up to 28 cores. For a fully loaded server, the mezzanine-type V100s alone will get you to 62 peak double-precision teraflops, and accounting for the top-bin Intel 8180 SKUs pushes the max theoretical output to 66 teraflops. HPE reports the revamped offering delivers a 3x speedup for deep learning model training over previous-gen (P100-equipped) gear, based on a self-reported benchmark employing Caffe and TensorFlow with the Inception v3, ResNet-50 and VGG-16 models.
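For readers who want to sanity-check those headline numbers, the tally below is a rough sketch using published peak figures: 7.8 double-precision teraflops per V100 (SXM2), and an assumed sustained AVX-512 clock for the Xeon 8180s (vendors quote varying base/turbo rates, so the CPU contribution here is an estimate, not an HPE-stated figure).

```python
# Back-of-the-envelope peak double-precision tally for a fully loaded
# Apollo 6500 Gen10. These are theoretical peaks, not measured throughput.

V100_DP_TFLOPS = 7.8       # Nvidia V100 SXM2 peak double precision (published spec)
GPUS = 8

XEON_CORES = 28            # Intel Xeon Platinum 8180, per socket
DP_FLOPS_PER_CYCLE = 32    # 2 AVX-512 FMA units x 8 doubles x 2 ops (mul+add)
AVX512_CLOCK_GHZ = 2.3     # assumed sustained AVX-512 clock (not vendor-confirmed)
SOCKETS = 2

gpu_tf = GPUS * V100_DP_TFLOPS
cpu_tf = SOCKETS * XEON_CORES * DP_FLOPS_PER_CYCLE * AVX512_CLOCK_GHZ / 1000

print(f"GPUs:  {gpu_tf:.1f} TF")             # 62.4 TF -> the "62 teraflops" figure
print(f"CPUs:  {cpu_tf:.1f} TF")
print(f"Total: {gpu_tf + cpu_tf:.1f} TF")    # lands in the ~66 TF ballpark
```

The GPU term alone reproduces the quoted 62 teraflops; the CPU term shows how the pair of top-bin Xeons plausibly closes the gap to 66.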
The Apollo 6500 Gen10 platform targets deep learning workloads and traditional HPC use cases involving complex simulation and modeling. HPE reports “an innovative systems design” that “allows for a high degree of flexibility with a range of configuration and topology options to match each workload.”
The incorporation of Nvidia’s NVLink 2.0 technology enables communication between GPUs up to 10x faster than the traditional PCIe Gen3 interconnect. Servers include four high-speed fabric adapters (Ethernet, Intel Omni-Path Architecture, InfiniBand EDR, and future InfiniBand HDR). HPE and its channel partners will begin shipping the new servers in May; WekaIO Matrix will be available in the same timeframe.
HPE’s AI-themed announcement is aimed at corporations struggling to implement ML/DL solutions. According to a recent Gartner report, only 4 percent of enterprises are adopting machine intelligence on a large scale. “Customers are increasingly realizing that the bottleneck is not in the model, not in the algorithms, not in the GPUs, the bottleneck is in what comes before it and what goes after it,” says Pankaj Goyal, vice president, Hybrid IT Strategy and AI, HPE. To help bridge the gap between the leaders and the rest of the enterprise market, HPE is launching an AI solution portfolio, which in addition to the Apollo 6500 includes:
- HPE Digital Prescriptive Maintenance, an industry solution that automates problem prevention and increases productivity of industrial equipment. The service is available in Europe today and set for worldwide availability this summer.
- HPE Artificial Intelligence Transformation Workshop, a one-day workshop to help customers explore and prioritize their AI use cases. Available now.
- A reseller agreement with flash-storage software provider WekaIO, whose file storage software, WekaIO Matrix, complements HPE’s Lustre-based storage solutions.
- Expansion of the HPE Deep Learning Cookbook, launched last year, which now includes the HPE Deep Learning Performance Guide, combining measurements with analytical workload performance models to recommend the optimal hardware/software stack for particular workloads.
“Customers pursuing deep learning projects face a variety of challenges including a lack of mature use case and technology capabilities that can compromise time to value, performance and efficiency,” said Steve Conway, senior vice president, Hyperion Research. “HPE’s domain expertise, services, technologies and engineering ties to ecosystem partners promise to play an important role in driving AI adoption into enterprises in the next few years.”
“Making AI real for a broad range of applications, deep learning relies on high-performance computing to identify patterns and relationships within massive amounts of data – however, traditional high-performance systems are unable to keep pace with these requirements,” said Goyal in a prepared statement. “The HPE Apollo 6500 Gen10 System is purpose-built to enable organizations of all sizes to realize the benefits of deep learning faster than ever before. And with WekaIO’s flash-optimized parallel file system HPE now provides the required throughput for compute-intensive low-latency workloads.”
HPE will demonstrate its AI and HPC technologies at Nvidia’s GPU Technology Conference, March 26 to 29 in San Jose, Calif. It will also join with The Economist to host an AI event on Thursday, March 22, in Chicago, featuring leading thinkers and practitioners in the space.