In our world filled with unintended consequences, it turns out that saving memory to cope with GPU limitations, even knowing it penalizes matrix operations, can end up costing both performance and memory.
As reported in a paper at ISC19, researchers[i] recently rethought the use of sparse matrix representations, originally motivated by GPU memory constraints, and switched to dense matrices in order to benefit from the larger memory capacities and scale-out capabilities of CPUs. The result was not only superior performance and scaling using CPUs; perhaps surprisingly, it also reduced the memory footprint, because the algorithmic inefficiencies that the sparse representation imposed had been costing more memory than the sparse format saved.
The researchers demonstrated the positive effects of their work in Horovod, an open source distributed Deep Learning framework for TensorFlow created by Uber Engineering. They also demonstrated its outstanding ability to scale out, proving it on supercomputers with large numbers of CPUs. Their work has been incorporated into Horovod 0.15.2 and later, allowing anyone to benefit from their approach. The researchers encourage others to rethink similar assumptions, because they believe their reasoning has applicability to other frameworks and libraries, such as BERT (Bidirectional Encoder Representations from Transformers).
The science – NMT
Neural machine translation (NMT), the use of neural networks to translate human language, is an area of active research with the goal of dramatically improving machine translation performance. Current state-of-the-art approaches have hit roadblocks due to excessive memory use (the scaling results discussed later in this article show how badly the original code scales even on as few as 8 nodes). The researchers reduced memory usage for transformer models by converting assumed-sparse tensors to dense tensors, and subsequently replacing the sparse gradient gather with a dense gradient reduction. NMT now reaches new heights by leaning on CPU capabilities, including superior memory capacity.
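To make the core change concrete, here is a minimal sketch of what converting assumed-sparse tensors to dense tensors looks like in TensorFlow, where gradients of embedding lookups arrive as tf.IndexedSlices rather than as ordinary tensors. The helper name densify_gradients is mine, not from the paper:

```python
import tensorflow as tf

def densify_gradients(grads_and_vars):
    # Gradients of embedding lookups arrive as tf.IndexedSlices
    # ("assumed-sparse"); tf.convert_to_tensor materializes each one
    # as an ordinary dense tensor, so workers can later combine them
    # with a dense reduction instead of gathering sparse updates.
    dense = []
    for grad, var in grads_and_vars:
        if isinstance(grad, tf.IndexedSlices):
            grad = tf.convert_to_tensor(grad)
        dense.append((grad, var))
    return dense
```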
Being dense has its advantages
Dense matrix representations consume more memory than sparse representations for many real-world matrices. As a result, many Deep Learning and AI algorithms err on the side of sparse matrix representations to cope with the small local memories available on GPUs. Unfortunately, while often saving memory, sparse representations come with a non-trivial performance penalty, and added coding complexity, for many matrix operations. CPU programmers, in marked contrast, tend to err on the side of dense matrix representations because operations on them remain straightforward and simple to program and maintain.
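A quick back-of-the-envelope sketch, using SciPy purely for illustration (the shapes and densities here are arbitrary, not from the paper), shows how quickly the sparse-format advantage evaporates as a matrix gets denser:

```python
import numpy as np
from scipy import sparse

# CSR stores roughly 8 bytes per nonzero (a float32 value plus an
# int32 column index) plus a small row-pointer array, while dense
# float32 storage is a flat 4 bytes per entry. So CSR stops paying
# off somewhere below 50% density.
rows, cols = 512, 4096
dense_bytes = rows * cols * 4  # float32

for density in (0.01, 0.25, 0.50, 0.90):
    m = sparse.random(rows, cols, density=density, format="csr", dtype=np.float32)
    csr_bytes = m.data.nbytes + m.indices.nbytes + m.indptr.nbytes
    print("density %.2f -> CSR is %.2fx the dense footprint"
          % (density, csr_bytes / float(dense_bytes)))
```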
Common wisdom questioned: GPUs like sparse, CPUs like dense
Originally, the researchers were looking to undo the performance degradation associated with sparse matrix representations, a choice motivated by the GPU port of the code and unnecessary for a CPU port. The researchers suspected the matrices might not be as sparse as originally assumed (hence their emphasis on “assumed-sparse” in their discussions), and they knew that in such cases the memory savings are diminished and can easily be overwhelmed by the additional costs of the matrix operations.
In the particular case they investigated, the distributed learning algorithm used an accumulation (gather) instead of a reduction, because accumulation is more practical with sparse matrix representations. However, this approach dramatically increases memory use, because it accumulates results instead of holding down their footprint through reductions. Here, the interplay of algorithm choice and memory layout, combined with the actual denseness of these assumed-sparse matrices, led to a memory footprint benefit on both GPUs and CPUs, while unleashing the full potential of CPU-based systems to scale out with a simpler algorithm, uncomplicated by the GPU-inspired use of sparse matrices.
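A toy sketch of why the two strategies diverge in memory use (illustrative only, not the Horovod internals): a gather keeps every worker's contribution, so memory grows linearly with worker count, while a reduction folds all contributions into one fixed-size buffer.

```python
import numpy as np

size, workers = 1 << 20, 8  # 4MB of float32 per simulated worker
updates = [np.ones(size, dtype=np.float32) for _ in range(workers)]

# Gather/accumulate: keep all contributions, workers * size values.
gathered = np.concatenate(updates)

# Reduce: sum in place into a single size-sized buffer.
reduced = np.zeros(size, dtype=np.float32)
for u in updates:
    reduced += u

print("gather: %d MB, reduce: %d MB"
      % (gathered.nbytes >> 20, reduced.nbytes >> 20))  # 32 MB vs. 4 MB
```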
Unleashing CPU scaling
Once the researchers shifted to dense matrix representations, their new implementation opened the door to much improved scaling. What would take one month on a single node is now reduced to slightly over 6 hours on 200 nodes (121 times faster). This result can significantly increase the productivity of NMT researchers by allowing the use of CPU-based HPC infrastructures. The researchers reported that their ability to maintain very high scaling efficiency up to the 300-node level they tested suggests that continued scale-out is worthwhile beyond what they have tried thus far. That is certainly far better than the inability to scale effectively beyond 8 nodes when they started!
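The arithmetic behind those headline numbers, assuming "one month" means roughly 730 hours:

```python
single_node_hours = 730.0  # "one month" on a single node (assumed)
nodes = 200
cluster_hours = 6.03       # "slightly over 6 hours"

speedup = single_node_hours / cluster_hours  # ~121x, as reported
efficiency = speedup / nodes                 # parallel efficiency ~0.6
print("speedup: %.0fx, efficiency: %.0f%%" % (speedup, efficiency * 100))
```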
Even at only 8 nodes, the rapid decline in scaling of the original (sparse) approach dooms any high degree of scale-out, so runs at higher node counts would be a waste of money and compute resources. The new (dense) approach shows enough promise that the researchers went on to demonstrate exceptional scaling results above 256 nodes.
Results – faster execution and smaller memory footprint
Their code using a dense representation achieved a more than 82x reduction (11,446MB to 139MB) in the memory required on a 64-node run. It also achieved a more than 25x reduction (4,321ms to 169ms) in the time required for the accumulation operation.
Figure: space/time for tensor accumulation (sparse gather vs. dense reduce)
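The reduction factors follow directly from those reported measurements:

```python
mem_sparse_mb, mem_dense_mb = 11446, 139    # 64-node memory footprint
time_sparse_ms, time_dense_ms = 4321, 169   # 64-node accumulation time

print("memory: %.1fx smaller" % (mem_sparse_mb / float(mem_dense_mb)))   # ~82.3x
print("time:   %.1fx faster" % (time_sparse_ms / float(time_dense_ms)))  # ~25.6x
```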
Model training experiments were run on the Zenith cluster in the Dell EMC HPC & AI Innovation Lab, as well as on the Stampede2 cluster at the Texas Advanced Computing Center (TACC) in Austin, Texas, both featuring Intel processors and an Intel Omni-Path fabric. In both cases, the researchers used Python 2.7 with Intel’s MKL-optimized version of TensorFlow (1.12), plus modifications to Horovod that are now available to everyone in versions 0.15.2 and later.
Each Zenith node consists of dual Intel Xeon Scalable Gold 6148/F processors, 192GB of memory, and an M.2 boot drive that houses the operating system but provides no user-accessible local storage. Nodes are interconnected by a 100Gbps Intel Omni-Path fabric, and shared storage is provided by a combination of NFS (for HOME directories) and Lustre filesystems.
Work on Stampede2 used the Skylake (SKX) partition, which consists of 1,736 nodes. Each node is outfitted with dual Intel Xeon Scalable Platinum 8160 processors, 192GB of memory, and a 200GB internal SSD for the operating system and local /tmp. All nodes are interconnected with a 100Gbps Intel Omni-Path fabric and connected to Lustre-based shared filesystems.
The researchers summarized their work in a paper at ISC19. The software changes they discuss in the paper have been incorporated into Horovod 0.15.2 and later, giving other researchers the opportunity to apply the approach to any models that may benefit.
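For TensorFlow users, the densification surfaces as an option on Horovod's optimizer wrapper. A minimal sketch of enabling it with the TensorFlow 1.12-era API described above (consult the Horovod documentation for authoritative usage):

```python
import tensorflow as tf
import horovod.tensorflow as hvd

hvd.init()

# sparse_as_dense=True tells Horovod to convert assumed-sparse
# IndexedSlices gradients to dense tensors so they are combined
# with a dense allreduce rather than a sparse gather.
opt = tf.train.AdamOptimizer(learning_rate=0.001 * hvd.size())
opt = hvd.DistributedOptimizer(opt, sparse_as_dense=True)
```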
[i] Valeriu Codreanu and Damian Podareanu of SURFsara, Derya Cavdar, Can Karakus, and Victor Suthichai of Amazon, Alexander Sergeev of Uber, Vikram Saletore of Intel, and John A. Lockman III, Don D. Smith II, Quy Ta, Srinivas Varadharajan, Lucas A. Wilson, Rengan Xu, and Pei Yang of Dell EMC.
About the Author
James Reinders likes fast computers and the software tools to make them speedy. With over 30 years in High Performance Computing (HPC) and Parallel Computing, including 27 years at Intel Corporation (retired June 2016), he is the author of nine books in the HPC field and numerous papers and blogs.