Tag: parallelism

James Reinders: Parallelism Has Crossed a Threshold

Feb 4, 2016 |

Is the parallel everything era here? What happens when you can assume parallel cores? In the second half of our in-depth interview, Intel’s James Reinders discusses the eclipsing of single-core machines by their multi- and manycore counterparts and the ramifications of the democratization of parallel computing, remarking “we don’t need to worry about single-core processors anymore Read more…

A Conversation with James Reinders

Jan 21, 2016 |

As Chief Evangelist of Intel Software Products, James Reinders spends most of his working hours thinking about and promoting parallel programming. He’s essentially a professor at large, attuning himself to the needs of software developers with an interest in parallel programming so he can offer guidance on techniques, ways of learning, and ways to “think parallel” – all with Read more…

Intel Haswell-EX Server Sets STAC-A2 Performance Record

Sep 2, 2015 |

Intel has reasserted its prominence on a subset of financial benchmarks designed to evaluate platforms for pricing and market risk analytics. More powerful Xeons — “Haswell-EX” E7-8890 v3 processors — combined with changes to the software stack enabled Intel to set a new speed record on the STAC-A2 benchmark for both warm and cold Read more…

COSMOS Team Achieves 100x Speedup on Cosmology Code

Aug 24, 2015 |

One of the most popular sessions at the Intel Developer Forum last week in San Francisco, and certainly one of the most exciting from an HPC perspective, brought together two of the world’s foremost experts in parallel programming to discuss current state-of-the-art methods for leveraging parallelism on processors and coprocessors. The speakers, Intel’s Jim Jeffers and Read more…

Moving Down the Path Toward Code Modernization

Aug 19, 2015 |

“Code modernization” is a hot topic. While it is widely understood that applications need to evolve with hardware, there is a lot of attention on how to do that well. Savvy customers ask about both portability and performance portability. Because the time and expertise to do the work are scarce and the willingness to do Read more…

Compilers and More: The Past, Present and Future of Parallel Loops

Apr 6, 2015 |

Let’s talk about parallel loops. In parallel computing, we’ve been designing, describing, implementing and using parallel loops almost since the beginning. The advantage of parallel loops (over almost all other forms of parallelism) is that the parallelism scales up with the data set size or loop trip count (number of iterations). So what exactly is a parallel Read more…

A Comparison of Heterogeneous and Manycore Programming Models

Mar 2, 2015 |

The high performance computing (HPC) community is heading toward the era of exascale machines, expected to exhibit an unprecedented level of complexity and size. The community agrees that the biggest challenges to future application performance lie with efficient node-level execution that can use all the resources in the node. These nodes might be composed of Read more…

ROSE Framework Blooms Toward Exascale

Feb 12, 2015 |

One of the many ways that the Office of Advanced Scientific Computing Research (ASCR) supports the Department of Energy Office of Science facilities is by championing the research that powers computational science. A recent ASCR Discovery feature takes a look at how the DOE science community is preparing for extreme-scale programming. As supercomputers reach exascale Read more…

Practical Advice for Knights Landing Coders

Feb 5, 2015 |

The National Energy Research Scientific Computing Center (NERSC) is on track to get its next supercomputer system, Cori, by mid-2016. While that’s more than a year away, it’s not too soon to start preparing for the new 30+ petaflops Cray machine, which will feature Intel’s next-generation Knights Landing architecture. So says Richard Gerber, Senior Science Read more…

Scalable Priority Queue Minimizes Contention

Feb 2, 2015 |

The multicore era has been in full swing for a decade now, yet exploiting all that parallel goodness remains a prominent challenge. Ideally, compute efficiency would scale linearly with increased cores, but that’s not always the case. As core counts are only set to proliferate across the computing spectrum, it’s an issue that merits serious attention. Researchers from Read more…