While advancing the field of HPC into the exascale era is beset by many obstacles, resiliency might be the thorniest of all. As the number of cores proliferates, so too does the number of incorrect behaviors, threatening not just the operation of the machine but the validity of the results as well.
CEO Pete Manca details Egenera’s unusual journey from hardware vendor to software provider.
During the International Supercomputing Conference, Bull’s Matthew Foxton sounded an alarm bell for the European supercomputing community, arguing that R&D will not prove useful to Europe’s future without a solid investment in the “D”, not just the “R”.
Photorealistic rendering for design and animation is pushing multicore processors to their limit with key software advancements.
A recent effort led by Cycle Computing, using the SHOC benchmark, revealed equal performance between GPU-accelerated cloud instances and native hardware.
With exascale predictions all the rage, here’s a more sobering look at the next big thing in supercomputing.
The idea that HPC in the cloud should be simple and deliver the true promise of instant, on-demand resources without effort is faulty, according to Joe Landman, who argues that customer expectations do not match HPC cloud realities.
Achieving workable software-based fault tolerance will require a fresh approach for developers.
Big Blue sees green in mainstream high performance computing market.
There is a growing feeling that merely taking the latest processor offerings from Intel, AMD or IBM will not get us to exascale within a reasonable time frame, cost budget, and power envelope. One avenue to explore is designing and building more specialized systems, aimed at the types of problems seen in HPC, or at least at an important subset of them. Of course, such a strategy loses the advantages we’ve enjoyed over the past two decades of commoditization in HPC; however, a more special-purpose design may be wise, or even necessary.