What’s New in Computing vs. COVID-19: Cerebras, Nvidia, OpenMP & More

May 18, 2020

Supercomputing, big data and artificial intelligence are crucial tools in the fight against the coronavirus pandemic. Around the world, researchers, corporation Read more…

15 Slides on Programming Aurora and Exascale Systems

May 7, 2020

Sometime in 2021, Aurora, the first planned U.S. exascale system, is scheduled to be fired up at Argonne National Laboratory. Cray (now HPE) and Intel are the k Read more…

Optimizing Codes for Heterogeneous HPC Clusters Using OpenACC

July 3, 2017

Looking at the Top500 and Green500 rankings, one quickly realizes that most HPC systems are heterogeneous architectures built from COTS (Commercial Off-The-Shelf) hardware, combining traditional multi-core CPUs with massively parallel accelerators, such as GPUs and MICs. With processor frequencies now hitting a solid wall, the only truly open avenue for riding Moore’s law today is increasing hardware parallelism in several different ways: more computing nodes, more processors in each node, more cores within each processor, and longer vector instructions in each core. Read more…
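As a minimal sketch of the directive-based style the excerpt alludes to (the kernel, names, and sizes below are illustrative assumptions, not drawn from the article), a single OpenACC pragma is enough to offload a simple loop to an accelerator:

    /* Minimal sketch: a SAXPY-style loop offloaded with OpenACC.
     * The kernel and sizes are illustrative, not from the article. */
    #include <stdlib.h>

    void saxpy(int n, float a, const float *x, float *y)
    {
        /* copyin: x is only read on the device; copy: y is read and written back */
        #pragma acc parallel loop copyin(x[0:n]) copy(y[0:n])
        for (int i = 0; i < n; ++i)
            y[i] = a * x[i] + y[i];
    }

    int main(void)
    {
        int n = 1 << 20;
        float *x = malloc(n * sizeof *x);
        float *y = malloc(n * sizeof *y);
        for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }
        saxpy(n, 3.0f, x, y);   /* every y[i] is 5.0f afterwards */
        free(x); free(y);
        return 0;
    }

Built with an OpenACC-capable compiler, the same source also compiles serially when the directives are ignored, which is part of the portability appeal of this model.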

Compilers and More: OpenACC to OpenMP (and back again)

June 29, 2016

In the last year or so, I’ve had several academic researchers ask me whether I thought it was a good idea for them to develop a tool to automatically convert OpenACC programs to OpenMP 4 and vice versa. In each case, the motivation was that some systems had OpenMP 4 compilers (x86 plus Intel Xeon Phi Knights Corner) and others had OpenACC (x86 plus NVIDIA GPU or AMD GPU), and someone wanting to run a program across both would need two slightly different programs. In each case, the proposed research sounded like a more-or-less mechanical translation process, something more like a sophisticated awk script, and that’s doomed from the start. I will explain below in more detail how I came to this conclusion. Read more…
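To see why the translation looks mechanical on the surface, here is a hedged sketch (ours, not the column’s) of the same invented vector-add loop written once with OpenACC and once with OpenMP 4 target directives; the directive forms are standard, the kernel is an assumption for illustration:

    /* The same vector add written twice: once with OpenACC,
     * once with OpenMP 4 target directives. Kernel is illustrative. */

    void vadd_acc(int n, const float *a, const float *b, float *c)
    {
        #pragma acc parallel loop copyin(a[0:n], b[0:n]) copyout(c[0:n])
        for (int i = 0; i < n; ++i)
            c[i] = a[i] + b[i];
    }

    void vadd_omp(int n, const float *a, const float *b, float *c)
    {
        #pragma omp target teams distribute parallel for \
                map(to: a[0:n], b[0:n]) map(from: c[0:n])
        for (int i = 0; i < n; ++i)
            c[i] = a[i] + b[i];
    }

The near one-to-one appearance is what makes an automatic converter seem plausible; data motion, loop scheduling, and device-specific tuning are where such a translation stops being mechanical.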

A Comparison of Heterogeneous and Manycore Programming Models

March 2, 2015

The high performance computing (HPC) community is heading toward the era of exascale machines, expected to exhibit an unprecedented level of complexity and size Read more…

New Degrees of Parallelism, Old Programming Planes

August 28, 2014

Exploiting the capabilities of HPC hardware is now more a matter of pushing into deeper levels of parallelism versus adding more cores or overclocking. What thi Read more…

Parallel Programming with OpenMP

July 31, 2014

One of the most important tools in the HPC programmer's toolbox is OpenMP, a standard for expressing shared memory parallelism that was published in 1997. The c Read more…
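For readers new to the standard, a minimal example of the shared-memory style OpenMP expresses (the reduction below is ours, not the article’s) looks like this:

    /* Minimal OpenMP example: a parallel reduction over a loop.
     * Illustrative only; compile with -fopenmp (GCC/Clang). */
    #include <stdio.h>
    #include <omp.h>

    int main(void)
    {
        const int n = 1000000;
        double sum = 0.0;

        /* Split iterations across threads; combine partial sums safely. */
        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < n; ++i)
            sum += (double)i;

        printf("sum = %.0f (threads available: %d)\n",
               sum, omp_get_max_threads());
        return 0;
    }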

A Data Locality Cure for Irregular Applications

February 18, 2014

Data locality plays a critical role in energy-efficiency and performance in parallel programs. For data-parallel algorithms where locality is abundant, it is a Read more…


Whitepaper

Streamlining AI Data Management

Five Recommendations to Optimize Data Pipelines

When building AI systems at scale, managing the flow of data can make or break a business. The various stages of the AI data pipeline pose unique challenges that can disrupt or misdirect the flow of data, ultimately impacting the effectiveness of AI storage and systems.

With so many applications and diverse requirements for data types, management systems, workloads, and compliance regulations, these challenges are only amplified. Without a clear, continuous flow of data throughout the AI data lifecycle, AI models can perform poorly or even dangerously.

To ensure your AI systems are optimized, follow these five essential steps to eliminate bottlenecks and maximize efficiency.


Sponsored by DDN

Whitepaper

Taking research further with extraordinary compute power and efficiency

Karlsruhe Institute of Technology (KIT) is an elite public research university located in Karlsruhe, Germany, engaged in a broad range of disciplines across the natural sciences, engineering, economics, the humanities, and the social sciences. For institutions like KIT, HPC has become indispensable to cutting-edge research in these areas.

KIT’s HoreKa supercomputer supports hundreds of research initiatives, including a project aimed at predicting when the Earth’s ozone layer will be fully healed. With HoreKa, projects like these can process larger amounts of data, enabling researchers to deepen their understanding of highly complex natural processes.

Read this case study to learn how KIT implemented their supercomputer powered by Lenovo ThinkSystem servers, featuring Lenovo Neptune™ liquid cooling technology, to attain higher performance while reducing power consumption.


Sponsored by Lenovo
