If you’ve wondered what, exactly, NCSA’s Blue Waters supercomputer has been doing since it was fired up in 2013, a new report is full of details on workloads, CPU/GPU use patterns, memory and I/O issues, and a plethora of other metrics. Released in March, the study – Final Report: Workload Analysis of Blue Waters – provides a wealth of information on demand and performance. Blue Waters has supplied roughly 17.3 billion core hours to scientists to date.
“When the system was originally configured, it was not clear what balance of CPU or GPU should be in the system. We set the ratio based on analysis of the science teams approved to use Blue Waters and consultation with accelerated computing experts,” said Greg Bauer, applications technical program manager at NCSA. “The workload study shows the balance we went with is very reasonable, and that we were ready to keep up with the demand for the first three years.”
Blue Waters, of course, is the Cray XE6/XK7 supercomputer at the National Center for Supercomputing Applications (NCSA). It’s a formidable 13 petaflops (peak) machine with two types of nodes connected via a single Cray Gemini High Speed Network in a large-scale 3D torus topology: XE6 nodes (AMD 6276 Interlagos processors) and XK7 nodes (AMD 6276 Interlagos processors plus Nvidia Kepler K20X GPUs). The NCSA supercomputer employs a high-performance online storage system with over 25 PB of usable storage (36 PB raw) and over 1 TB/s sustained performance.
As noted in the report, “The workload analysis itself was a challenging computational problem – requiring more than 35,000 node hours (over 1.1 million core hours) on Blue Waters to analyze roughly 95 TB of input data from over 4.5M jobs that ran on Blue Waters during the period of our analysis (April 1, 2013 – September 30, 2016) that spans the beginning to Full Service Operations for Blue Waters to the recent past. In the process, approximately 250 TB of data across 100M files was generated. This data was subsequently entered into MongoDB and a MySQL data warehouse to allow rapid searching, analysis and display in Open XDMoD. A workflow pipeline was established so that data from all future Blue Waters jobs will be automatically ingested into the Open XDMoD data warehouse, making future analyses much easier.”
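For a sense of what that ingestion step might look like in practice, here is a minimal Python sketch of loading parsed job accounting records into MongoDB for later aggregation. It is illustrative only: the field names, database and collection names, and CSV layout are assumptions, not the actual Open XDMoD schema or the study’s pipeline.

```python
# Minimal sketch (not the actual Open XDMoD pipeline): load parsed job
# accounting records into MongoDB so they can be queried and aggregated.
# Field names, database/collection names, and the CSV layout are
# illustrative assumptions, not the schema used by the Blue Waters study.
import csv
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
jobs = client["bw_workload"]["jobs"]  # hypothetical database/collection

def ingest(csv_path):
    """Insert one accounting file's job records as documents."""
    with open(csv_path, newline="") as fh:
        batch = []
        for row in csv.DictReader(fh):
            batch.append({
                "job_id": row["job_id"],
                "app": row["application"],
                "node_type": row["node_type"],   # e.g. "XE" or "XK"
                "nodes": int(row["nodes"]),
                "node_hours": float(row["node_hours"]),
                "start": row["start_time"],
            })
        if batch:
            jobs.insert_many(batch)

# Example aggregation once data is loaded: total node hours per application.
top_apps = jobs.aggregate([
    {"$group": {"_id": "$app", "node_hours": {"$sum": "$node_hours"}}},
    {"$sort": {"node_hours": -1}},
    {"$limit": 10},
])
```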
The report is a rich but dense read. Here are a few highlights:
- The National Science Foundation MPS (Math and Physical Sciences) and Biological Sciences directorates are the leading consumers of node hours, typically accounting for more than 2/3 of all node hours used.
- The number of fields of science represented in the Blue Waters portfolio has increased every year, more than doubling since the first year of operation, providing further evidence of the growing diversity of its research base.
- The applications run on Blue Waters represent an increasingly diverse mix of disciplines, ranging from broad use of community codes to more specific scientific sub-disciplines.
- The top 10 applications consume about 2/3 of all node hours, with the top 5 (NAMD, CHROMA, MILC, AMBER, and CACTUS) consuming about 50%.
- Common algorithms, as characterized by Colella’s original seven dwarfs, are roughly equally represented within the applications run on Blue Waters, aside from unstructured grids and Monte Carlo methods, which account for a much smaller fraction.
The pie chart below depicts the current Blue Waters workload (5/2/17).
One of many interesting questions examined is how use of the different node types varied. Here’s an excerpt:
For XE node jobs, all of the major science areas (> 1 million node hours) run a mix of job sizes and all have very large jobs (> 4096 nodes). The relative proportions of job size vary between different parent science areas. The job size distribution weighted by node hours consumed peaks at 1025 – 2048 for XE jobs. The largest 3% of the jobs (by node hours) account for 90% of the total node-hours consumed.
The majority of XE node hours on the machine are spent running parallel jobs that use some form of message passing for inter-process communication. At least 25% of the workload uses some form of threading, however the larger jobs (> 4096 nodes) mostly use message passing with no threading. There is no obvious trend in the variation of thread usage over time, however, thread usage information is only available for a short time period.
For the XK (GPU) nodes, the parent sciences Molecular Biosciences, Chemistry and Physics are the largest users with NAMD and AMBER the two most prevalent applications. The job size distribution weighted by node hours consumed peaks at 65 – 128 nodes for the XK jobs. Similarly to the XE nodes, the largest 7% of the jobs (by node-hour) account for 90% of the node-hours consumed on the XK nodes.
The aggregate GPU utilization (efficiency) varies significantly by application, with MELD achieving over 90% utilization and GROMACS, NAMD, and MILC averaging less than 30% GPU utilization. However, for each of the applications, the GPU utilization can vary significantly from job to job.
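To make the node-hour weighting in the excerpt above concrete, here is a small Python sketch, under assumed inputs, of how one might compute a node-hour-weighted job-size histogram and the share of node hours consumed by the largest few percent of jobs (the kind of 3%/90% and 7%/90% figures the report cites). The bin edges, data layout, and function names are illustrative, not taken from the report.

```python
# Rough sketch, not the study's actual code: given per-job records
# (node count, node hours), compute a node-hour-weighted job-size
# histogram and the share of total node hours consumed by the largest
# fraction of jobs. Bins are labeled by lower bound for simplicity.
def weighted_size_histogram(jobs, bins=(1, 2, 4, 8, 16, 32, 64, 128, 256,
                                        512, 1024, 2048, 4096, 8192)):
    """Sum node hours into power-of-two node-count bins."""
    hist = {b: 0.0 for b in bins}
    for nodes, node_hours in jobs:
        for b in reversed(bins):
            if nodes >= b:
                hist[b] += node_hours
                break
    return hist

def share_of_largest(jobs, fraction=0.03):
    """Fraction of total node hours consumed by the largest `fraction`
    of jobs, with jobs ranked by node hours consumed."""
    ranked = sorted((nh for _, nh in jobs), reverse=True)
    top_n = max(1, int(len(ranked) * fraction))
    total = sum(ranked)
    return sum(ranked[:top_n]) / total if total else 0.0

# Example usage: jobs as (nodes, node_hours) tuples.
sample = [(4096, 50000.0), (128, 1200.0), (1, 3.5), (2048, 20000.0)]
print(weighted_size_histogram(sample))
print(share_of_largest(sample, fraction=0.03))
```

Ranking jobs by node hours before taking the top slice is what makes a tiny fraction of jobs account for most of the machine’s consumption in figures like these.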
Blue Waters has enabled groundbreaking research in many areas. One project that could not have been run on any other supercomputer was led by Carnegie Mellon University astronomer Tiziana Di Matteo. While it wasn’t her first simulation on a leadership-class supercomputer, it was her most detailed, allowing her to see the first quasars in her simulation of the early universe.
“The Blue Waters project,” Di Matteo wrote in a Blue Waters report, “made possible this qualitative advance, making possible what is arguably the first complete simulation (at least in terms of the hydrodynamics and gravitational physics) of the creation of the first galaxies and large-scale structures in the universe.”
For those who want a substantive but less dense look at Blue Waters, NCSA today released the 2016 Blue Waters annual report.
Link to Blue Waters report: https://arxiv.org/ftp/arxiv/papers/1703/1703.00924.pdf
Link to Blue Waters 2016 annual report: https://bluewaters.ncsa.illinois.edu/portal_data_src/BW_AR_16_linked.pdf