There is a growing feeling that merely taking the latest processor offerings from Intel, AMD, or IBM will not get us to exascale within a reasonable time frame, cost budget, and power envelope. One avenue to explore is designing and building more specialized systems, aimed at the types of problems seen in HPC, or at least at the problems seen in some important subset of HPC. Of course, such a strategy loses the advantages we’ve enjoyed over the past two decades of commoditization in HPC; however, a more special-purpose design may be wise, or even necessary.
The Chinese Tianhe-1A supercomputer exploits GPU power to deliver 2.5 petaflops, and Cray nabs a $60 million contract with the University of Stuttgart. We recap those stories and more in our weekly wrap-up.
Supercomputing apps may have to ditch the checkpoint-restart model.
It’s not simply about applications and cloud computing environments. In the context of HPC, the challenges are not just hardware challenges but questions of the cloud’s ability to meet them.
As we turn the decade into the 2020s, we take a nostalgic look back at the last ten years of supercomputing. It’s amazing to think how much has changed in that time. Many of our older readers will recall how things were before the official Planetary Supercomputing Facilities at Shanghai, Oak Ridge and Saclay were established. Strange as it may seem now, each country — in fact, each university or company — had its own supercomputer!
Sun and SGI have followed similar trajectories throughout much of their linked histories, but it seems that’s about to change.
In the big picture HPC hardware is a small part of the economic equation. What are the big parts?
The growing popularity of cloud computing is creating opportunities for companies that make the hardware used in the cloud’s large, scale-out datacenters.