Supercomputing’s Golden Age?
Compared to overall IT spending, which has seen only modest increases lately, the supercomputing market is quite healthy. Last month, IDC reported that sales of the largest systems jumped 29.3 percent to $5.6 billion from 2011 to 2012, while worldwide factory revenue for the high-performance computing technical server space rose 7.7 percent in the same timeframe.
Is supercomputing experiencing a golden age? It is according to the News & Observer’s Paul Gilster, who observes that “we consumers tend to be fixated on the latest smartphones and tablets, but when it comes to transforming our lives, big computing is where the action is.” To keep pace with ever-increasing compute and data streams, HPC has to move fast. This means that even the most influential machines might not be around for long. Take Roadrunner, for example.
For some, it’s disconcerting that the record-breaking FLOPS-crunching IBM Roadrunner supercomputer could go from king of supercomputing to obsolete in a mere five years. Although the first petascale-class system is still mighty fast, its 296 racks consume too much energy compared to newer designs, plus the Cell-based system is not very programmer-friendly.
Energy efficiency is one of the most important metrics for current and future designs, and it’s a core focus of the Dome project. IBM and the Netherlands Institute for Radio Astronomy (ASTRON) are working to create a group of systems that can process all the data coming from the Square Kilometer Array telescope. Expected to go online in 2024, the telescope will comprise some 3,000 individual receivers spread across South Africa and Australia and linked by fiber optics. SKA will generate the equivalent of today’s entire Internet traffic volume twice each day.
Data requirements are driving demand for bigger compute and storage systems. The SKA, for example, will need to store 1,500 petabytes of data each year. By comparison, the Large Hadron Collider in Switzerland, itself considered a major big data generator, creates about 15 petabytes per year.
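To put those annual volumes in perspective, a quick back-of-the-envelope conversion turns them into sustained ingest rates. The sketch below uses only the figures quoted above (1,500 PB/year for the SKA, 15 PB/year for the LHC); the function name and the decimal (1 PB = 10^6 GB) convention are my own illustrative choices, not from the article.

```python
SECONDS_PER_YEAR = 365 * 24 * 3600  # ~3.15e7 seconds

def pb_per_year_to_gb_per_sec(pb_per_year):
    """Convert an annual data volume (petabytes/year) to a sustained rate (GB/s).

    Uses decimal units: 1 PB = 1e6 GB.
    """
    return pb_per_year * 1e6 / SECONDS_PER_YEAR

# Figures from the article: SKA ~1,500 PB/year stored; LHC ~15 PB/year.
print(round(pb_per_year_to_gb_per_sec(1500), 1))  # SKA: ~47.6 GB/s, around the clock
print(round(pb_per_year_to_gb_per_sec(15), 2))    # LHC: ~0.48 GB/s
```

In other words, merely storing the SKA’s output implies writing roughly 48 gigabytes every second of the year, about a hundred times the LHC’s rate, which is why the project drives storage design as hard as it drives compute.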
Because of big compute and big data needs, chip companies are working to create low-power chips that produce more FLOPS/watt. As part of IBM’s green supercomputing push, the company is looking to design systems with minimal power footprints that can transfer massive amounts of data using new optical methods.
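The FLOPS-per-watt metric mentioned above is simple to compute. As a minimal sketch, assuming Roadrunner’s widely reported public figures (roughly 1.04 petaflops sustained Linpack at about 2.35 megawatts — approximate numbers I am supplying for illustration, not taken from the article):

```python
def flops_per_watt(flops, watts):
    """Energy efficiency: sustained floating-point operations per second per watt."""
    return flops / watts

# Approximate public figures for Roadrunner (illustrative assumption):
# ~1.04e15 FLOPS sustained Linpack at ~2.35e6 W.
efficiency = flops_per_watt(1.04e15, 2.35e6)
print(round(efficiency / 1e6))  # roughly 443 MFLOPS/W
```

Rankings such as the Green500 sort systems by exactly this ratio, which is why a machine can remain fast in absolute terms yet become uneconomical to operate next to newer, more efficient designs.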
Replacing costly “real-world” experiments with less-expensive virtual modeling continues to drive demand for high-end systems. For example, the Cray Blue Waters supercomputer will simulate a roughly 60-million-atom model of the HIV capsid in order to better understand its structure.
If supercomputing is experiencing a golden age, how long will it continue? Continued funding cuts at the federal level are threatening the pace of progress, and the exascale goalposts keep slipping further into the future. Still, HPC is an enabling technology, one that helps companies and countries alike improve their competitive stance. Add to this the contest-like aspect of the TOP500 list, and it’s almost guaranteed that certain countries will find a way to maintain list dominance. It’s also possible, however, that this big-system spending will come at the expense of smaller systems.