Tag: Linpack benchmark
<img style="float: left;" src="http://media2.hpcwire.com/hpcwire/Blue_Waters_NCSA.jpg" alt="" width="110" height="53" />NCSA's Blue Waters system is one of the fastest supercomputers in the world, but it won't be appearing on the TOP500 list, nor will it be taking part in the HPC Challenge awards. HPCwire spoke with Project Director Bill Kramer to get the full story on this important decision.
Data-intensive applications are quickly emerging as a significant new class of HPC workloads. This class of applications will require a new kind of supercomputer, and a different way to assess such machines. That is the impetus behind the Graph 500, a set of benchmarks that aims to measure the suitability of systems for data-intensive analytics applications.
Given the recent ascent of the GPU-powered Tianhe-1A system to the top of the supercomputing heap, a recent paper from the Department of Computer Science at the University of Warwick should be of particular interest to those in the market for a petascale supercomputer. Essentially, their study asks the question: As an organization, should I commit to a platform based on general-purpose GPUs or an IBM Blue Gene?
There is a growing feeling that merely taking the latest processor offerings from Intel, AMD or IBM will not get us to exascale within a reasonable time frame, cost budget, and power envelope. One avenue to explore is designing and building more specialized systems, aimed at the types of problems seen in HPC, or at least at the problems seen in some important subset of HPC. Of course, such a strategy loses the advantages we've enjoyed over the past two decades of commoditization in HPC; nevertheless, a more special-purpose design may be wise, or even necessary.