For all the accolades one hears about German engineering, few IT vendors are native to that country. Recently, though, we got the opportunity to talk with one such company, ParStream, a Cologne-based startup that has developed a bleeding-edge CPU/GPU-based analytics platform that marries high performance computing to big data.
For the second time in five years, Appro has been tapped to provide the National Nuclear Security Administration with HPC capacity clusters for the agency’s Advanced Simulation and Computing and stockpile stewardship programs. The Tri-Lab Linux Capacity Cluster 2 award is a two-year contract that will have the cluster-maker delivering HPC systems across three of the Department of Energy’s national labs. The deal is worth tens of millions of dollars to Appro and represents the biggest contract in the company’s 20-year history.
Watson’s decisive win over two of Jeopardy’s top champions on national television earlier this year could turn out to be the most effective infomercial in the history of IT. Capitalizing on that accomplishment, IBM is working hard to highlight the supercomputing technology at every opportunity, including this week’s rollout of new and improved Power7-based servers.
New crop of Chinese supercomputers will feature homegrown chips.
In contrast to the previous decade, CPU clock rates are now scaling more slowly due to power constraints. However, the number of transistors per unit of silicon area continues to increase at roughly the rate of Moore’s Law. CPUs are therefore being designed and built with an increasing number of cores, each core executing one or more threads of instructions. This puts a new kind of pressure on the memory subsystem.
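As a rough back-of-the-envelope illustration of the trend described above, here is a sketch assuming an idealized Moore’s Law (transistor budget doubling every two years) and flat clock rates, so the extra budget shows up as additional cores; the specific numbers (4 baseline cores, 64 GB/s total bandwidth) are hypothetical, not drawn from any particular chip:

```python
# Idealized sketch: if the transistor budget doubles every 2 years while
# clock rates stay flat, the budget turns into more cores -- and the
# memory bandwidth available *per core* shrinks unless total bandwidth
# keeps pace, which is the "new kind of pressure" on the memory subsystem.

def cores_after(years, base_cores=4, doubling_period=2):
    """Project core count under an idealized Moore's Law doubling."""
    return base_cores * 2 ** (years // doubling_period)

def per_core_bandwidth(total_gbps, years):
    """Memory bandwidth per core if total bandwidth stays fixed."""
    return total_gbps / cores_after(years)

print(cores_after(0))            # 4 cores today
print(cores_after(6))            # 32 cores after three doublings
print(per_core_bandwidth(64, 6)) # 2.0 GB/s per core, down from 16.0
```

The point of the sketch is only the shape of the curve: core counts grow geometrically while a fixed-bandwidth memory system divides the same pipe among ever more threads.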
Yes, there is life beyond Xeons, Opterons and GPGPUs.
Top seven supercomputers make it into the petaflop club.
The tension between custom and commodity high performance computing has shaped both market approaches.
Integrated graphics threatens GPU pricing subsidies.
HotPar workshop spotlights latest work in parallelism.