My comments during a panel discussion at SC07, to the effect that things were out of balance in the HPC “ecosystem,” have recently drawn a rebuttal from Michael Wolfe (http://www.hpcwire.com/hpc/2264586.html). Of course, like many policy issues, the question of whether the current investment in hardware R&D for HPC is well matched by a corresponding investment in essential software infrastructure is more than a little slippery. But even though I think Wolfe’s counterclaim misses the mark in some important respects, I very much agree that the discussion he aims to provoke is one that is vitally important for our community to pursue. So it seems like a good time to try to make the idea I was expressing a little clearer.
It is worth noting that the remark in question from SC07 was not original with me. It represents a view that has been percolating through the community for a decade at least. To take a recent example, in the PITAC report of 2005 (see http://www.nitrd.gov/pitac/reports/20050609_computational/computational.pdf), we wrote that the HPC community’s preoccupation with peak performance and computing hardware, vital though it is, masks a troubling reality, namely that the most serious technical problems in computational science lie in software, usability, and the shortage of trained personnel. The twenty- and thirty-year life spans of major application codes, such as those studied by Doug Post (DoD, HPCMP) and his colleagues, are possible only because of the heroic efforts that scientific programmers repeatedly make to port them to new generations of hardware, using comparatively primitive software tools and programming models. Meanwhile, the fundamental R&D necessary to create balanced hardware-software systems that are easy to use, facilitate application expression in high-level models, and deliver large fractions of their peak performance on computational science applications is routinely postponed for a more opportune (but always elusive) time. Among the more insidious effects of this postponement is that the failure to overcome the intellectual challenges involved in creating such systems exacerbates the scarcity of the broad education and training our community so desperately needs.
So this perceived imbalance in R&D investment in software infrastructure is long-standing. It precedes and is largely independent of the mass migration to COTS platforms for HPC, which Wolfe finds so problematic. Even if this latter trend were reversed, and we were once again indulging our traditional fixation on HPC-tailored hardware, there is no reason to think the imbalance with regard to software tools, methods, and infrastructure would be improved. On the contrary, there is every reason to think that it would be made worse.
The remarkable escalation in system complexity that we are currently experiencing is unlikely to recede under any circumstances, whether we stay with a COTS-based approach or not. The size and complexity of hardware systems (close to 500,000 cores in the largest hardware platform) continue to grow and only compound the problems we face. Given the obstacles that now confront the processor design community (the power wall, the ILP wall, the memory wall), I see no reason to believe that we can avoid the problems associated with exposing and managing order-of-magnitude increases in parallelism, even if the level of single-thread performance were to remain completely stable and the investment in non-COTS designs (as desirable as they might be) were to be dramatically increased.
The point that I, along with many others in the HPC software community, continue to make is that the software base for computational science that we currently have is inadequate to keep pace with and support evolving hardware and application needs. I happily concede that the movement toward all-COTS HPC is problematic in various ways, some of which Wolfe mentions. But focusing on that fact only obscures the deeper problem that I am concerned with. Regardless of which path we take to achieve progress in hardware performance, chronic underinvestment in enabling software and applications forces researchers to build atop crumbling and inadequate foundations rather than on a modern, high-quality software base. The result is diminishing productivity for researchers and computing systems alike. Moreover, this condition is not amenable to short-term solutions. One need only look at the development history of any large-scale software system to recognize the importance of an iterated cycle of development, deployment, and feedback in producing an effective, widely used product. Consequently, achieving better balance in the HPC ecosystem will require sustained investment, long-term research, and the opportunity to incorporate the lessons learned from a relatively long series of well-considered iterations.
While many of us may have reservations (and excitement) about the growing onslaught of COTS multicore chip architectures, with all their attendant complexities, and the proliferation of petascale systems based on them, does anyone seriously believe that this movement will abate anytime soon? On the contrary, it gives every indication of being inescapable. If that’s the case, we should expect that the consequences of the longstanding imbalance in the HPC ecosystem will soon be thrown into much sharper relief and the discussion of how to recover from it will take on much greater urgency.
University of Tennessee
Oak Ridge National Laboratory
University of Manchester