Since 1986 - Covering the Fastest Computers in the World and the People Who Run Them


Tag: parallel computing

A Comparison of Heterogeneous and Manycore Programming Models

Mar 2, 2015

The high performance computing (HPC) community is heading toward the era of exascale machines, expected to exhibit an unprecedented level of complexity and size. The community agrees that the biggest challenges to future application performance lie with efficient node-level execution. These nodes might be composed of many identical compute cores in multiple coherency domains, or Read more…

BSC, Intel Extend Exascale Research Effort

Feb 17, 2015

Intel’s efforts to advance exascale computing concepts received a boost with the extension of the company’s research collaboration with the Barcelona Supercomputing Center (BSC) – one of four Intel exascale labs in Europe. Begun in 2011 and now extended to September 2017, the Intel-BSC work focuses on scalability issues with parallel applications. “[A major goal] Read more…

The Democratization of Parallel Computing

Aug 20, 2014

Virginia Tech College of Engineering Professor Wu Feng has a vision to broadly apply parallel computing to advance science and address major challenges. A recent profile of Feng’s work details his involvement with the NSF, Microsoft, and the Air Force, using innovative computing techniques to solve problems. “Delivering personalized medicine to the masses is just Read more…

Parallel Computing Trends

Jul 22, 2014

One of the most pressing issues faced by the HPC community is how to go about attracting and training the next generation of HPC users. The staff at Argonne National Laboratory is tackling this challenge head on by holding an intensive summer school in extreme-scale computing. One of the highlights of the 2013 summer program was a Read more…

Stanford Lights Up One Million Sequoia Cores

Jan 28, 2013

The 20-petaflop, third-generation IBM BlueGene system, Sequoia, may be the number two supercomputer according to the latest TOP500 rankings, but when it comes to max core usage, Sequoia has apparently set a new record. A team of Stanford engineers harnessed one million of Sequoia’s nearly 1.6 million CPU cores in parallel to solve a sophisticated fluid dynamics problem.

Kepler GPU Makes Quick Work of Quicksort

Sep 13, 2012

Dynamic parallelism enables the graphics processor to act more like a CPU.

Intel Adds Programming Support for Latest Silicon

Sep 6, 2012

We’re only a little more than halfway through 2012, but Intel has already announced the 2013 versions of Parallel Studio XE and Cluster Studio XE, two software suites that support x86-based parallel programming for high performance computing and beyond. Intel refreshes its software development offerings each year at about this time to sync up its tool support with the latest and greatest silicon and to add new features for developers.

Microsoft Cranks its AMP

Feb 7, 2012

Software maker offers heterogeneous computing in a C++ wrapper.

Revisiting Supercomputer Architectures

Dec 8, 2011

Additional performance increases for supercomputers are being confounded by three walls: the power wall, the memory wall and the datacenter wall (the “wall wall”). To overcome these hurdles, the market is currently looking to a combination of four strategies: parallel applications development, adding accelerators to standard commodity compute nodes, developing new purpose-built systems, and waiting for a technology breakthrough.

NVIDIA Eyes Post-CUDA Era of GPU Computing

Dec 7, 2011

Lost in the flotilla of vendor news at the Supercomputing Conference (SC11) in Seattle last month was the announcement of a new directives-based parallel programming standard for accelerators. Called OpenACC, the open standard is intended to bring GPU computing into the realm of the average programmer, while making the resulting code portable across other accelerators and even multicore CPUs.