Cloud computing has become a fixture on the programs of Electronic Design Automation (EDA) industry events. To add to the discussion, we spoke with five EDA executives about the current role of clouds and what lies ahead, gauging their sense of cloud adoption and the forces driving it (or working against it).
The White House hosted a press conference on Wednesday to announce a new public-private partnership that aims to bring HPC technology to the have-nots of the US manufacturing sector. Backed by a $2 million grant from the US Department of Commerce and an additional $2.5 million from industrial partners, a consortium has been formed to broaden the use of HPC technology among small manufacturing enterprises (SMEs).
NVIDIA is set to release the fourth generation of its popular CUDA toolkit to developers this Friday. The company says CUDA 4.0 is designed to make parallel programming simpler, bringing more application developers into the GPGPU fold. Some of the new capabilities also foreshadow Project Denver, the codename for the company’s future CPU-GPU architecture for workstations, servers, and supercomputers.
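As a rough illustration of the kind of simplification on offer: CUDA 4.0 bundles the Thrust C++ template library, which lets common data-parallel operations be written in a few STL-like lines rather than hand-coded kernels. A minimal sketch (not taken from NVIDIA's announcement) of a parallel sum on the GPU:

```cpp
// Minimal Thrust sketch: a GPU reduction without writing a kernel by hand.
#include <thrust/device_vector.h>
#include <thrust/sequence.h>
#include <thrust/reduce.h>
#include <iostream>

int main() {
    thrust::device_vector<int> d(1 << 20);            // data allocated on the GPU
    thrust::sequence(d.begin(), d.end());             // fill with 0, 1, 2, ...
    int sum = thrust::reduce(d.begin(), d.end(), 0);  // parallel sum runs on the device
    std::cout << "sum = " << sum << std::endl;
    return 0;
}
```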
With exascale predictions all the rage, here’s a more sobering look at the next big thing in supercomputing.
Automated trading software runs amok.
With petascale systems now deployed on three continents, the HPC industry is already looking toward the next milestone in supercomputing: exascale computing. In Europe, this activity is centered on the European Exascale Software Initiative (EESI), a project that brings together industry and government organizations committed to helping usher in the transition from petascale to exascale systems over the next decade.
Achieving workable software-based fault tolerance will require a fresh approach for developers.
Big Blue sees green in mainstream high performance computing market.
There is a growing feeling that merely taking the latest processor offerings from Intel, AMD or IBM will not get us to exascale within a reasonable time frame, budget, and power envelope. One avenue to explore is designing and building more specialized systems, aimed at the kinds of problems seen in HPC, or at least at some important subset of them. Of course, such a strategy forfeits the advantages we’ve enjoyed over the past two decades of commoditization in HPC; even so, a more special-purpose design may be wise, or even necessary.
Chinese Tianhe-1A supercomputer exploits GPU power to deliver 2.5 petaflops; and Cray nabs a $60 million contract with the University of Stuttgart. We recap those stories and more in our weekly wrapup.