The future of high performance computing is now being defined, both in how it will be achieved and in the ways it will impact diverse fields in science and technology, industry and commerce, and security and society. At this time there is great expectation but much uncertainty, creating a climate of opportunity, challenge, and excitement. It is within this context of forging a future of computation in the crucible of innovation that we have been invited by HPCwire to host an ongoing series of articles tracking the trajectory to exascale computing and beyond. The answers are not yet established, but the possibilities are emerging, and the path or paths to be traversed toward these goals are only now coming into view.
It will be our pleasure over the ensuing months to guide this series of news articles, editorials, interviews, discussions, and perhaps some debates to provide and stimulate an open forum of consideration and dialog within these virtual pages and among their broad readership. We hope you will join us on this voyage of exploration as we illuminate the rapidly evolving field of exascale computing and its new frontiers of capability and discovery.
Even as we casually interject the term “exascale,” we as a community have inadequately defined its meaning, at least in any specific and widely adopted way. Is it achieving 1 exaflops Rmax on the Linpack benchmark (HPL), or is it a thousand times the capability of current-generation petaflops-class systems? Is it merely a single point on a progression of thousand-fold performance gains (more than four such leaps so far within the lifetime of a single individual), or rather a trans-performance regime spanning the three orders of magnitude from an exaflops to the ethereal heights approaching a zettaflops, a term rarely employed even now? Is it even about flops (floating-point operations per second)? In the age of “big data,” graph processing, and embedded and mobile computing, it is apparent that floating-point operations are not the only important measure of performance. Integer operations, memory references, and data handling may be at least as important.
For systems of the future (and even the biggest ones now) there is great concern about total energy usage and power demand. Although subjective, one asserted threshold of pain is anything beyond 20 megawatts. Yet Tianhe-2 already surpasses that when cooling is included, and commercial data centers already consume more than 100 MW. A 20 MW limit at an exaflops imposes an average energy budget of about 20 picojoules per floating-point operation, whereas today’s most “green” systems achieve only a few gigaflops per watt, or hundreds of picojoules per operation. A rule of thumb is that a megawatt-year of electricity costs approximately $1M. This is only one of several factors that will challenge practical computing in the exascale performance regime and era. Over the succeeding weeks and months we will consider a post-modern view of performance, productivity, and even less familiar properties such as portability and generality.
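To make these figures concrete, here is a minimal back-of-the-envelope sketch; the electricity rate of roughly $0.11 per kilowatt-hour is our own illustrative assumption, not a figure from the article:

```python
# Back-of-the-envelope check of the energy figures above.
# ASSUMPTION: electricity at ~$0.11 per kWh (illustrative rate, not from the article).

POWER_W = 20e6        # 20 MW system power budget
PERF_FLOPS = 1e18     # 1 exaflops sustained
RATE_PER_KWH = 0.11   # assumed $/kWh

# Energy per floating-point operation: (joules/second) / (operations/second)
joules_per_flop = POWER_W / PERF_FLOPS
print(f"Energy budget: {joules_per_flop * 1e12:.0f} pJ per flop")  # -> 20 pJ

# Annual electricity cost of one megawatt of continuous draw
kwh_per_mw_year = 1e3 * 24 * 365  # 1 MW running for a year, in kWh
print(f"1 MW-year: ${kwh_per_mw_year * RATE_PER_KWH / 1e6:.2f}M")  # -> ~$0.96M

# Today's "green" systems, at a few gigaflops per watt:
for gf_per_watt in (1, 3, 5):
    pj_per_flop = 1e12 / (gf_per_watt * 1e9)
    print(f"{gf_per_watt} GF/W -> {pj_per_flop:.0f} pJ per flop")
```

Even the greenest of today’s machines thus sit an order of magnitude or more away from the 20 pJ-per-operation budget.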
But even more important than the what is perhaps the why. How often does one hear the question, “do we really need exaflops?” Over the following months we will invite experts to document diverse and compelling cases where exascale computing is not only useful but essential for critical breakthroughs. Such politically charged domains as climate change demand degrees of resolution in time, space, and phenomenology sufficient to clarify, refine, and ultimately determine the validity of models, as well as their implications for anthropogenic CO2 contributions.
On the brighter side (perhaps literally) is the potential impact of exascale computing on the ultimate realization of alternatives to fossil-fueled energy sources such as controlled fusion. Here computing may not only determine feasible designs for this potentially ultimate source of power but also be critical to the real-time control that makes it possible. Beyond powering civilization on Earth, the same technologies could enable projecting advanced human civilization into the solar system and beyond by the end of the 21st century. Composite materials, microbiology, medical diagnosis, design optimization, and even machine intelligence may all yield to computing in the exascale era. These and other application domains will be explored and discussed throughout this series. Beyond justifying the creation of exascale platforms, such detailed discussions will help determine their design and operational properties and how to achieve them.
The challenge of realizing exascale computing is not just about putting together enough hardware, or driving down the energy, or creating a new parallel programming language, or crafting new algorithms and applications. It is all of these things and more, and they are all interrelated in important and nuanced ways. There may not even be a single solution but rather a number of different design points, both because of the variety of opportunities and ideas and because the usage profiles of application workloads and their resource requirements differ. It is also about responsible progress: sustaining not just future application codes but literally decades of legacy programs upon which there is heavy dependence for agency mission-critical problems, basic and applied science, and industrial and commercial applications. This challenge of innovation and continuity is one of the great problems faced by the community, and it will be discussed throughout this HPCwire series of articles on exascale computing.
Advances in device technology will be essential in enabling future computing opportunities but will also be challenging. Semiconductor feature size is expected to shrink to 5 nanometers by the end of this decade, yielding perhaps a density growth of about an order of magnitude. Yet this also reflects the approaching end of Moore’s Law, and even then power consumption may limit the practical use of the full capabilities of chips and full systems. There is promising work in all of these areas, and we will explore these innovative approaches to the hardware needed for exascale systems. Other factors that have confronted system design and usage in the past will also challenge the future of exascale: parallelism, latency of local and global access, memory hierarchies, overheads for control, and contention for shared resources. We will explore these tradeoffs between opportunity and challenge, and possible strategies for optimizing within the design space, now and in the exascale future, in the pages of this HPCwire series.
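The density estimate follows from the usual rule that transistor density scales roughly with the inverse square of feature size; a minimal sketch, assuming starting points in the 14 to 22 nm range (the article specifies only the 5 nm endpoint):

```python
# Areal density scales roughly as 1 / (feature size)^2.
# ASSUMPTION: starting processes of 22 nm and 14 nm; only the 5 nm endpoint is given above.

TARGET_NM = 5.0

for current_nm in (22.0, 14.0):
    density_gain = (current_nm / TARGET_NM) ** 2
    print(f"{current_nm:.0f} nm -> {TARGET_NM:.0f} nm: ~{density_gain:.1f}x density")

# 22 nm -> 5 nm: ~19.4x; 14 nm -> 5 nm: ~7.8x -- i.e., roughly an order of magnitude.
```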
Exascale is not just about the very biggest computing systems; it is about extreme capabilities at many scales. Perhaps the most exciting promise of exascale is the ubiquitous availability of petaflops-capable computing in the next decade. A single rack with a power consumption of 50 kilowatts will be able to deliver 1 petaflops well within 10 years. Thus exascale technology, which at full capability may sit on the raised floors of national centers worldwide, will also put petaflops in the hands of most scientists, academics, and industry product developers. These systems are likely to cost on the order of $250K, well within the budget of many user domains. They will serve as end computing platforms but also as training grounds for those who will need more computing power to solve their biggest problems.
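A quick sketch of what that rack projection implies for energy efficiency, using only the figures above:

```python
# Efficiency implied by a 1-petaflops rack drawing 50 kW.

RACK_PERF_FLOPS = 1e15  # 1 petaflops
RACK_POWER_W = 50e3     # 50 kilowatts

gflops_per_watt = RACK_PERF_FLOPS / RACK_POWER_W / 1e9
print(f"Required efficiency: {gflops_per_watt:.0f} GF/W")  # -> 20 GF/W

# The same efficiency scaled up to a full exaflops:
exa_power_mw = 1e18 / (gflops_per_watt * 1e9) / 1e6
print(f"1 exaflops at that efficiency draws {exa_power_mw:.0f} MW")  # -> 50 MW
```

Note that 20 gigaflops per watt is several times better than today’s greenest systems, and even then a full exaflops built from such racks would draw 50 MW, above the 20 MW pain threshold cited earlier.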
The motivation of this new publication series is to build a bridge between the general HPC community and the industrial, academic, and government experts who are dedicated to realizing this exciting dream of practical exascale computing. Over the next months, you will see invited articles, interviews, editorials, and news briefs that will lay out the path even as the journey has begun. We the editors will serve as guides through this complex and changing space of discovery. We solicit questions and comments from our readership to help improve the discourse and story. We are delighted to have the opportunity to serve in this capacity and thank HPCwire for their support and encouragement in doing so.