Dark silicon refers to the processing potential that’s lost when thermal constraints prevent full utilization of a chip. The widening gap between transistor scaling and voltage scaling, combined with tighter integration of components (multicore processors, SoCs), has power-density ramifications that are of particular concern for embedded computing, but high-performance computing faces similar “dark power” challenges. Bringing attention to this issue and exploring common solutions was the goal of Dagstuhl Seminar 16052, “Dark Silicon: From Embedded to HPC Systems.”
A report of the same name looks at the unique and shared challenges facing both communities and provides an overview of the topics covered by the individual speakers in the seminar. Proposed solutions focus on “flexible thermal/power/resource management techniques both for runtime, design time as well as hybrid solutions.”
In the executive summary, authors Hans Michael Gerndt (TU München), Michael Glaß (FAU Erlangen), Sri Parameswaran (University of New South Wales), and Barry L. Rountree (Lawrence Livermore) assert that with future technology nodes, it will be “infeasible to operate all on-chip components at full performance at the same time due to the thermal constraints (peak temperature, spatial and temporal thermal gradients etc.).”
The situation is less constrained for high-performance computing, where heat is removed with a variety of cooling techniques, but with 20 MW-class systems not far off, power grid limitations and energy costs pose serious concerns. The report notes that the five-year energy cost of today’s largest systems is roughly equivalent to their purchase price. At approximately $1 million per year per MW of load, the power challenge is both a technological and a budgetary concern. We’ve all heard about supercomputers that sit idle due to lack of funds or disruptions in power supply.
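To put those figures in perspective, here is a back-of-the-envelope calculation based on the article’s rough $1 million per MW-year figure; the 20 MW load and five-year horizon are illustrative assumptions, not numbers taken from the report.

```python
# Back-of-the-envelope estimate of multi-year energy cost for a large HPC
# system, using the rough figure of ~$1 million per MW of load per year.
# The 20 MW load and five-year lifetime are illustrative assumptions.

cost_per_mw_year = 1_000_000   # USD per MW per year (approximate)
sustained_load_mw = 20         # hypothetical 20 MW-class system
lifetime_years = 5             # typical deployment horizon

total_energy_cost = cost_per_mw_year * sustained_load_mw * lifetime_years
print(f"Estimated five-year energy cost: ${total_energy_cost:,}")  # $100,000,000
```

For a machine in that class, the electricity bill over its lifetime lands in the same neighborhood as the hardware itself.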
Today’s biggest machines already require careful power management. As the authors note, HPC facilities have contracts with energy companies that specify usage levels, and going below or above those levels incurs hefty penalties. Large swings in power demand, for example dropping load suddenly from 20 MW to 10 MW, pose another liability, not only disrupting workloads but also endangering power grid operations. With these challenges, optimized power distribution is essential.
For many of the fastest supercomputers, the only time the machine exercises its full power capacity is during setup and benchmarking.
Write the authors:
“During burn-in (and perhaps while getting a result to go onto the top-500 list) the machine will run dozens or hundreds of instances of Linpack. This code is quite simple and often hand-optimized, resulting in an unusually well-balanced execution that manages to keep vector units, cache lines and DRAM busy simultaneously. The percent of allocated power often reaches 95% or greater, with one instance in recent memory exceeding 100% and blowing circuit breakers. After these initial runs, however, the mission-critical simulation codes begin to execute and they rarely exceed 60% of allocated power. The remaining 40% of electrical capacity is dark: just as unused and just as inaccessible as dark silicon.
“While we would like to increase the power consumption (and thus performance) of these simulation codes, a more realistic solution in the exascale timeframe is hardware overprovisioning. This solution requires buying more compute resources than can be executed at maximum power draw simultaneously. For example, if most codes are expected to use 50% of allocated power, the optimal cluster would have twice as many nodes.
“Making this a feasible design requires management of power as a first-class resource at the level of the scheduler, the run-time system, and on individual nodes. Hardware power capping must be present. Given this, we can theoretically move power within and across jobs, using all allocated power to maximize throughput.”
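The report does not spell out a particular mechanism, but the idea of treating power as a schedulable resource on an overprovisioned machine can be sketched roughly as follows. The job names, node counts, power figures, and even-split redistribution policy are hypothetical; the sketch assumes only that per-node hardware power capping is available.

```python
# Toy sketch of power as a first-class schedulable resource on an
# overprovisioned cluster. Job names, node counts, power figures, and the
# even-split redistribution policy are illustrative assumptions, not the
# report's design.

SITE_POWER_BUDGET_W = 2_000_000  # total allocated power (2 MW, hypothetical)

jobs = {
    # job_id: (node_count, measured per-node draw in watts)
    "climate_sim":   (3000, 300),
    "linpack_run":   (500, 480),
    "data_analysis": (400, 150),
}

def assign_power_caps(jobs, budget_w):
    """Give each job its measured per-node draw, then split any leftover
    budget evenly across all nodes; the result is a per-node cap that a
    runtime could enforce with hardware power capping."""
    measured_total = sum(nodes * watts for nodes, watts in jobs.values())
    total_nodes = sum(nodes for nodes, _ in jobs.values())
    headroom_per_node = max(budget_w - measured_total, 0) / total_nodes
    return {job: watts + headroom_per_node
            for job, (nodes, watts) in jobs.items()}

for job, cap in assign_power_caps(jobs, SITE_POWER_BUDGET_W).items():
    print(f"{job}: per-node cap ~ {cap:.0f} W")
```

A real runtime would also honor per-node hardware limits and re-balance as measured draw changes, but the gist is the same: electrical capacity that would otherwise sit dark is reassigned to jobs that can use it.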
The report concludes with a list of 11 takeaways (reproduced below) gleaned from working group panels and participant discussion:
1. Dark silicon is a thermal problem in embedded and a power problem in HPC. HPC can cool down while in the embedded world you can’t. Therefore HPC can power up everything if they have enough power. But the costs for providing enough power for rare use cases have to be rectified.
2. Better tools are required on both sides to understand and optimize applications.
3. Better support for optimizations is required through the whole stack from high level languages down to the hardware.
4. In both communities run-time systems will get more important. Applications will have to be written in a way that run-time systems can work effectively.
5. Task migration is of interest to both groups in combination with appropriate run-time management techniques.
6. Embedded also looks at specialized hardware designs while HPC has to use COTS. In HPC, the machine architecture might be tailored towards the application areas. Centers are specialized for certain customers.
7. Heterogeneity on architecture level is important to both groups for energy reduction.
8. Better analyzable programming models are required, providing composable performance models.
9. HPC will have to live with variability. The whole tuning step has to change since reproducibility will no longer be given.
10. Hardware-software co-design will get more important for both groups.
11. Both areas will see accelerator-rich architectures. Some silicon has to be switched off anyway, thus these can be accelerators that might not be useful for the current applications.
The 21-page report should be read in its entirety. The executive summary contains a section on hybrid (design-time and run-time) approaches, and the collection of 22 presentation abstracts showcases a wide range of efforts focused on dark silicon and related power challenges.