<img style="float: left;" src="http://media2.hpcwire.com/hpcwire/exludus_logo.jpg" alt="" width="139" height="32" />The advent of multicore servers presents something of a challenge for application virtualization. This is especially true in the realm of high performance computing, an environment that has never been particularly friendly to virtualization. To overcome these hurdles, eXludus Technologies has introduced "micro-virtualization," a technology that brings virtualization down to the level of the core, and does so with minimal overhead.
As processor core counts rise, MIT research suggests on-chip networks will be needed.
MIT’s Hornet simulator takes the sting out of manycore design.
In a recent article in the HPC Source magazine, Wolfgang Gentzsch discusses the good, the bad, and the ugly of multicore processors.
A new language could improve the quality of parallel code and automate some of the trickiest elements of multicore programming.
The Weekly Top Five features the five biggest HPC stories of the week, condensed for your reading pleasure. This week, we cover the NC State effort to overcome the memory limitations of multicore chips; the sale of the first-ever commercial quantum computing system; Cray’s first GPU-accelerated machine; speedier machine learning algorithms; and the connection between shrinking budgets and increased reliance on modeling and simulation.
Researchers mitigate multicore challenges to refine current geological simulation capabilities.
The Weekly Top Five features the five biggest HPC stories of the week, condensed for your reading pleasure. This week, we cover Bull’s third petascale computing contract; IBM’s new POWER7 servers; the first hybrid spintronics computer chips; Bull and Whamcloud’s beefed-up Lustre support; and Tilera’s latest manycore development tools.
In his third column on programming for exascale systems, Michael Wolfe shares his views on what programming at the exascale level is likely to require, and how we can get there from where we are today. He explains that it will take some work, but it won’t mean a wholesale rewrite of 50 years of high performance expertise.
In Michael Wolfe’s second column on programming for exascale systems, he underscores the importance of exposing parallelism at all levels of design, either explicitly in the program or implicitly within the compiler. Wolfe calls on developers to express this parallelism, both in a language and in the generated code, and to exploit it efficiently and effectively at runtime on the target machine. He reminds the community that the only reason to pursue parallelism is higher performance.