Tag: parallel computing
Virginia Tech College of Engineering Professor Wu Feng has a vision to broadly apply parallel computing to advance science and address major challenges. A recent profile of Feng’s work details his collaborations with the NSF, Microsoft, and the Air Force, using innovative computing techniques to solve hard problems. “Delivering personalized medicine to the masses is just Read more…
One of the most pressing issues facing the HPC community is how to attract and train the next generation of HPC users. The staff at Argonne National Laboratory is tackling this challenge head-on by holding an intensive summer school in extreme-scale computing. One of the highlights of the 2013 summer program was a Read more…
The 20-petaflop, third-generation IBM Blue Gene system, Sequoia, may be the number-two supercomputer according to the latest TOP500 rankings, but when it comes to maximum core usage, Sequoia has apparently set a new record. A team of Stanford engineers harnessed one million of Sequoia’s nearly 1.6 million cores in parallel to solve a sophisticated fluid dynamics problem.
Dynamic parallelism enables the graphics processor to act more like a CPU.
We’re only a little more than halfway through 2012, but Intel has already announced the 2013 versions of Parallel Studio XE and Cluster Studio XE, two software suites that support x86-based parallel programming for high performance computing and beyond. Intel refreshes its software development offerings at about this time each year to sync up its tool support with the latest and greatest silicon and to add new features for developers.
Software maker offers heterogeneous computing in a C++ wrapper.
Additional performance increases for supercomputers are being confounded by three walls: the power wall, the memory wall and the datacenter wall (the “wall wall”). To overcome these hurdles, the market is currently looking to a combination of four strategies: parallel applications development, adding accelerators to standard commodity compute nodes, developing new purpose-built systems, and waiting for a technology breakthrough.
Lost in the flotilla of vendor news at the Supercomputing Conference (SC11) in Seattle last month was the announcement of a new directives-based parallel programming standard for accelerators. Called OpenACC, the open standard is intended to bring GPU computing into the realm of the average programmer, while making the resulting code portable across other accelerators and even multicore CPUs.
Rogue Wave Software has acquired HPC toolmaker Acumem AB, a Swedish company that makes performance optimization tools for multithreaded applications. Acumem brought its first products to market in 2008, based on technology developed by Erik Hagersten and his research team at Uppsala University. Acumem’s product set and engineering group will be retained, along with the company’s office in Uppsala, Sweden.
Last week’s High Performance Computing Financial Markets conference in New York gave Microsoft an opening to announce the official release of Windows HPC Server 2008 R2, the software giant’s third-generation HPC server platform. It also provided Microsoft a venue to spell out its technical computing strategy in more detail, a process the company began in May.