ETH researchers have developed a new deep learning benchmarking environment – Deep500 – which they say is “the first distributed and reproducible benchmarking system for deep learning, [and] provides software infrastructure to utilize the most powerful supercomputers for extreme-scale workloads.” The researchers used the CSCS Piz Daint supercomputer in developing the benchmark, have made the code freely available on GitHub, and last week published a detailed analysis of their approach (A Modular Benchmarking Infrastructure for High-Performance and Reproducible Deep Learning)[i].
“Deep500 [is] the first customizable benchmarking infrastructure that enables fair comparison of the plethora of deep learning frameworks, algorithms, libraries, and techniques,” write the researchers. “The key idea behind Deep500 is its modular design, where deep learning is factorized into four distinct levels: operators, network processing, training, and distributed training. Our evaluation illustrates that Deep500 is customizable (enables combining and benchmarking different deep learning codes) and fair (uses carefully selected metrics). Moreover, Deep500 is fast (incurs negligible overheads), verifiable (offers infrastructure to analyze correctness), and reproducible.”
The paper is fascinating not only for its hands-on analysis of DL benchmarking challenges and its guidance on how to use Deep500’s elements, but also for its comparison of Deep500 with existing benchmarks such as MLPerf. Posting the work fulfills a promise made by ETH researchers Tal Ben-Nun and Torsten Hoefler at SC18 at the Deep500 BOF (see HPCwire article, The Deep500 – Researchers Tackle an HPC Benchmark for Deep Learning). Presumably the next step will be actively soliciting feedback from the community and enticing users to try out the new tool set.
Ben-Nun and Hoefler told HPCwire in an email today, “We developed the modular benchmarking approach as a basis for a reproducible measurement infrastructure. It will be used to establish the competition on various levels. Our main focus now is looking for scientific problems to train for the competition, and any input from the community is welcome. You can contact us at [email protected]”
Among other things, that sounds like plans for a Deep500 list (à la Top500) are firming up; one wonders when. SC19, perhaps?
Given the rapid adoption of DL in HPC, efforts to create reliable, meaningful DL benchmarking tools have been ratcheting up. Deep500 is the only system, say the authors, that focuses on performance, accuracy, and convergence, while simultaneously offering a wide spectrum of metrics and criteria for benchmarking, enabling customizability of design, and considering a diversity of workloads (see the benchmark comparison table in the paper).
Ben-Nun and colleagues do a nice job capturing the challenge of attempting to build reasonable DL benchmarking tools.
Excerpt: “Recent years saw an unprecedented growth in the number of approaches, schemes, algorithms, applications, platforms, and frameworks for DL. First, DL computations can aim at inference or training. Second, hardware platforms can vary significantly, including CPUs, GPUs, or FPGAs. Third, operators can be computed using different methods, e.g., im2col or Winograd in convolutions. Next, DL functionalities have been deployed in a variety of frameworks, such as TensorFlow or Caffe. These functionalities may incorporate many parallel and distributed optimizations, such as data, model, and pipeline parallelism. Finally, DL workloads are executed in wildly varying environments, such as mobile phones, multi-GPU clusters, or large-scale supercomputers.”
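To make the “different methods” point concrete: im2col rewrites a 2D convolution as a single matrix multiplication by unrolling input patches into rows, trading extra memory for a highly tuned GEMM call. Below is a minimal NumPy sketch (a generic illustration, not Deep500 code) showing that a direct convolution and an im2col-based one compute the same operator by very different means:

```python
import numpy as np

def conv2d_direct(x, w):
    """Naive 'valid' 2D convolution (cross-correlation, DL convention)
    of a single-channel image x with kernel w."""
    kh, kw = w.shape
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(x[i:i+kh, j:j+kw] * w)
    return out

def conv2d_im2col(x, w):
    """Same operator, computed by unrolling patches into rows ('im2col')
    and reducing the convolution to one matrix-vector product."""
    kh, kw = w.shape
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    cols = np.empty((oh * ow, kh * kw))
    for i in range(oh):
        for j in range(ow):
            cols[i * ow + j] = x[i:i+kh, j:j+kw].ravel()
    return (cols @ w.ravel()).reshape(oh, ow)

x = np.random.rand(8, 8)
w = np.random.rand(3, 3)
assert np.allclose(conv2d_direct(x, w), conv2d_im2col(x, w))
```

Winograd convolutions make yet another algebraic substitution. The benchmarking consequence is that one logical operator can have very different FLOP and cache profiles depending on the method chosen, which is exactly the kind of variation a benchmark must account for.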
No single metric, for example, is adequate, note the researchers: “On one hand, some metrics may simply be too detailed, for example the number of cache misses in 2D convolution implemented in TensorFlow or Caffe2. Due to the sheer complexity of such frameworks, this metric would probably not provide useful insights in potential performance regressions. On the other hand, other metrics may be too generic, for example simple runtime does not offer any meaningful details and does not relate to accuracy. Thus, one must select metrics that find the right balance between accuracy and genericness. In Deep500, we offer carefully selected metrics, considering performance, correctness, and convergence in shared- as well as distributed-memory environments.”
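The “time required to ensure a specific test-set accuracy” idea (revisited in the Metrics bullet below) is easy to express as a thin, framework-agnostic wrapper around any training loop. A minimal sketch, where train_epoch and evaluate are user-supplied callables (hypothetical names for illustration, not the Deep500 API):

```python
import time

def time_to_accuracy(train_epoch, evaluate, target_acc, max_epochs=100):
    """Measure wall-clock time until the model first reaches target_acc
    on the test set. Returns (seconds, epochs), or None if never reached."""
    start = time.perf_counter()
    for epoch in range(1, max_epochs + 1):
        train_epoch()        # one pass over the training set
        acc = evaluate()     # test-set accuracy in [0, 1]
        if acc >= target_acc:
            return time.perf_counter() - start, epoch
    return None
```

Unlike raw runtime, this metric ties speed to statistical quality: a configuration with faster epochs but slower convergence can still lose. The researchers summarize Deep500’s key attributes as follows: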
- “Customizability indicates that Deep500 enables benchmarking of arbitrary combinations of DL elements, such as various frameworks running on different platforms, and executing custom algorithms. To achieve this, we design Deep500 to be a meta-framework that can be straightforwardly extended to benchmark any DL code. Table I illustrates how various DL frameworks, libraries, and frontends can be integrated in Deep500 to enable easier and faster DL programming. (See the sketch after this list for a flavor of what such an integration can look like.)
- “Metrics indicates that Deep500 embraces a complex nature of DL that, unlike benchmarks such as Top500, makes a single number such as FLOPS an insufficient measure. To this end, we propose metrics that consider the accuracy-related aspects of DL (e.g., time required to ensure a specific test-set accuracy) and performance-related issues (e.g., communication volume).
- “Performance means that Deep500 is the first DL benchmarking infrastructure that can be integrated with parallel and distributed DL codes.
- “Validation indicates that Deep500 provides infrastructure to ensure correctness of aspects such as convergence.
- “Reproducibility as specified in recent HPC initiatives[ii] to help developing reproducible DL codes.”
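Picking up the customizability point above, here is a hedged sketch of how a common operator interface can make arbitrary DL codes comparable under one harness; the class and function names are hypothetical, not the actual Deep500 interfaces:

```python
import time
import numpy as np

class Operator:
    """Minimal common interface: every backend exposes the same forward()."""
    def forward(self, x):
        raise NotImplementedError

class NumpyReLU(Operator):
    """Slow but obviously correct reference implementation."""
    def forward(self, x):
        return np.maximum(x, 0.0)

class BackendReLU(Operator):
    """Stand-in for an adapter around a real framework's kernel
    (e.g., a TensorFlow or Caffe op) fitted to the common interface."""
    def forward(self, x):
        return x * (x > 0)

def benchmark(op, x, reps=100):
    """Validate an operator against the reference, then time it."""
    ref = NumpyReLU().forward(x)
    assert np.allclose(op.forward(x), ref), "disagrees with reference"
    start = time.perf_counter()
    for _ in range(reps):
        op.forward(x)
    return (time.perf_counter() - start) / reps

x = np.random.randn(1024, 1024)
print(f"BackendReLU: {benchmark(BackendReLU(), x):.6f} s per call")
```

Because every backend is reduced to the same interface, adding a framework means writing one small adapter rather than a new benchmark, and the slow reference implementation doubles as the correctness oracle.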
The core enabler in Deep500, write the researchers, is the modular design that groups all the required functionalities into four levels: (1) Operators; (2) Network Processing; (3) Training; and (4) Distributed Training. Each level provides relevant abstractions, interfaces, reference implementations, validation procedures, and metrics. “We illustrate levels and their relationships in Fig. 1 (shown higher in the article) and the full design of the Deep500 meta-framework is shown in Fig. 3 (an eye test for sure, but worth examining).”
The researchers emphasize that “The Deep500 meta-framework is a benchmarking environment, and as such it is not meant to be a DL framework that provides optimized implementations of its own. Rather, Deep500 assumes high-performance frameworks exist. By abstracting the high-level aspects of DL (e.g., data loading) in a platform-agnostic manner, Deep500 enables the measurement and development of various metrics (performance, accuracy) in the different contexts of DL and distributed DL.
“By taking the white-box approach, the user roles that Deep500 enables can be of a benchmark evaluator, or of an experimental scientist. In the former, one might use Deep500 and the various built-in metrics to choose hardware (or software) that performs best given a target workload. The latter role can use metrics and automatic integration with existing frameworks in order to empirically evaluate new operators, training algorithms, or communication schemes for DL. Since Deep500 provides reference code for nearly every concept, new methods can be validated against existing verified (yet slow) implementations.”
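As one example of a performance-related measurement in the distributed setting, consider the communication volume mentioned in the Metrics bullet above. The sketch below, assuming an MPI environment via mpi4py (an assumption for illustration; the paper’s distributed-training level defines its own abstractions), averages gradients across ranks and reports the payload bytes each rank hands to the allreduce:

```python
import numpy as np
from mpi4py import MPI

def allreduce_gradients(grads, comm=MPI.COMM_WORLD):
    """Average a list of gradient arrays across all ranks and report the
    payload bytes each rank hands to Allreduce (a simple proxy for
    communication volume; the MPI library's internal reduction algorithm
    may move more or less data than this)."""
    flat = np.concatenate([g.ravel() for g in grads])
    out = np.empty_like(flat)
    comm.Allreduce(flat, out, op=MPI.SUM)  # sum gradients across ranks
    out /= comm.Get_size()                 # turn the sum into a mean
    return out, flat.nbytes
```

Run under, e.g., mpiexec -n 4; note that the counted bytes are the user-visible payload, not the wire traffic of the underlying reduction algorithm.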
It will be interesting to monitor how quickly the new benchmark gets tested. There is a fair amount of detail in the paper, which is nevertheless a reasonably quick read and a good resource.
[i] A Modular Benchmarking Infrastructure for High-Performance and Reproducible Deep Learning, Tal Ben-Nun, Maciej Besta, Simon Huber, Alexandros Nikolaos Ziogas, Daniel Peter, Torsten Hoefler, https://arxiv.org/pdf/1901.10183.pdf