What’s New in Computing vs. COVID-19: Cerebras, Nvidia, OpenMP & More

May 18, 2020

Supercomputing, big data and artificial intelligence are crucial tools in the fight against the coronavirus pandemic. Around the world, researchers, corporations… Read more…

15 Slides on Programming Aurora and Exascale Systems

May 7, 2020

Sometime in 2021, Aurora, the first planned U.S. exascale system, is scheduled to be fired up at Argonne National Laboratory. Cray (now HPE) and Intel are… Read more…

Optimizing Codes for Heterogeneous HPC Clusters Using OpenACC

July 3, 2017

Looking at the Top500 and Green500 rankings, one quickly realizes that most HPC systems are heterogeneous architectures built from COTS (commercial off-the-shelf) hardware, combining traditional multi-core CPUs with massively parallel accelerators such as GPUs and MICs. With processor frequencies now hitting a solid wall, the only truly open avenue for riding Moore’s law today is increasing hardware parallelism in several different ways: more computing nodes, more processors in each node, more cores within each processor, and longer vector instructions in each core. Read more…
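
As a concrete, if hypothetical, illustration of the directive-based approach the article examines, the sketch below offloads a simple SAXPY-style loop to an accelerator with OpenACC. It is a minimal sketch, not code from the article, and assumes an OpenACC-capable compiler such as NVIDIA's nvc (formerly PGI).

```c
/* Minimal OpenACC sketch (hypothetical, not from the article):
 * offload a SAXPY-style loop to an attached accelerator.
 * Build with an OpenACC compiler, e.g.: nvc -acc saxpy.c */
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    const int n = 1 << 20;
    float *x = malloc(n * sizeof *x);
    float *y = malloc(n * sizeof *y);
    for (int i = 0; i < n; i++) { x[i] = 1.0f; y[i] = 2.0f; }

    /* copyin/copy clauses handle host<->device data movement;
     * the loop iterations are spread across the accelerator's
     * parallel lanes. */
    #pragma acc parallel loop copyin(x[0:n]) copy(y[0:n])
    for (int i = 0; i < n; i++)
        y[i] = 2.0f * x[i] + y[i];

    printf("y[0] = %f\n", y[0]); /* expect 4.000000 */
    free(x);
    free(y);
    return 0;
}
```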

Compilers and More: OpenACC to OpenMP (and back again)

June 29, 2016

In the last year or so, I’ve had several academic researchers ask me whether I thought it was a good idea for them to develop a tool to automatically convert OpenACC programs to OpenMP 4 and vice versa. In each case, the motivation was that some systems had OpenMP 4 compilers (x86 plus Intel Xeon Phi Knights Corner) and others had OpenACC (x86 plus NVIDIA GPU or AMD GPU), and someone wanting to run a program across both would need two slightly different programs. In each case, the proposed research sounded like a more-or-less mechanical translation process, something more like a sophisticated awk script, and that’s doomed from the start. I will explain below in more detail how I came to this conclusion. Read more…
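
To make the translation question concrete, here is a hypothetical sketch (not from the column) showing the same loop under both models; the directives look nearly interchangeable on the surface, which is exactly what makes a mechanical converter tempting.

```c
/* Hypothetical side-by-side: one loop under both models.
 * Build the OpenACC path with e.g. nvc -acc -DUSE_OPENACC,
 * the OpenMP 4 path with an OpenMP-4-capable compiler. */
#include <stdio.h>

#define N 1024

int main(void) {
    float a = 2.0f, x[N], y[N];
    for (int i = 0; i < N; i++) { x[i] = 1.0f; y[i] = 1.0f; }

#ifdef USE_OPENACC
    /* OpenACC form (x86 plus NVIDIA or AMD GPU toolchains): */
    #pragma acc parallel loop copyin(x[0:N]) copy(y[0:N])
    for (int i = 0; i < N; i++)
        y[i] = a * x[i] + y[i];
#else
    /* OpenMP 4 form (x86 plus Xeon Phi Knights Corner toolchains): */
    #pragma omp target teams distribute parallel for \
            map(to: x[0:N]) map(tofrom: y[0:N])
    for (int i = 0; i < N; i++)
        y[i] = a * x[i] + y[i];
#endif

    printf("y[0] = %f\n", y[0]); /* expect 3.000000 */
    return 0;
}
```

Even in this toy case, mapping copy clauses onto map clauses, and OpenACC's descriptive loop scheduling onto OpenMP's prescriptive construct stack, involves judgment that a token-for-token rewrite cannot supply, which is in line with the column's conclusion.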

A Comparison of Heterogeneous and Manycore Programming Models

March 2, 2015

The high performance computing (HPC) community is heading toward the era of exascale machines, expected to exhibit an unprecedented level of complexity and size… Read more…

New Degrees of Parallelism, Old Programming Planes

August 28, 2014

Exploiting the capabilities of HPC hardware is now more a matter of pushing into deeper levels of parallelism than of adding more cores or overclocking. What this… Read more…

Parallel Programming with OpenMP

July 31, 2014

One of the most important tools in the HPC programmer's toolbox is OpenMP, a standard for expressing shared memory parallelism that was published in 1997. Read more…
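
For readers new to the standard, here is a minimal hypothetical sketch of shared-memory parallelism with OpenMP, assuming any compiler with OpenMP support (e.g., gcc -fopenmp):

```c
/* Minimal OpenMP sketch (hypothetical): a parallel reduction
 * over shared memory. Build with e.g.: gcc -fopenmp sum.c */
#include <omp.h>
#include <stdio.h>

int main(void) {
    const int n = 1000000;
    double sum = 0.0;

    /* Each thread accumulates a private partial sum; the
     * reduction clause combines them when the loop ends. */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 1; i <= n; i++)
        sum += 1.0 / i;

    printf("harmonic sum H_%d = %f (max threads: %d)\n",
           n, sum, omp_get_max_threads());
    return 0;
}
```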

A Data Locality Cure for Irregular Applications

February 18, 2014

Data locality plays a critical role in the energy efficiency and performance of parallel programs. For data-parallel algorithms where locality is abundant… Read more…


Whitepaper

From Hallucination to Reality

As Federal agencies navigate an increasingly complex and data-driven world, learning how to get the most out of high-performance computing (HPC), artificial intelligence (AI), and machine learning (ML) technologies is imperative to their mission. These technologies can significantly improve efficiency and effectiveness and drive innovation to better serve citizens' needs. Implementing HPC and AI solutions in government can bring challenges and pain points such as fragmented datasets, computational hurdles when training ML models, and the ethical implications of AI-driven decision-making. Still, CTG Federal, Dell Technologies, and NVIDIA unite to unlock new possibilities and seamlessly integrate HPC capabilities into existing enterprise architectures. This integration empowers organizations to glean actionable insights, improve decision-making, and gain a competitive edge across various domains, from supply chain optimization to financial modeling and beyond.

Download Now

Sponsored by CTG Federal

Whitepaper

Why IT Must Have an Influential Role in Strategic Decisions About Sustainability

Data centers are experiencing increasing power consumption, space constraints and cooling demands due to the unprecedented computing power required by today’s chips and servers. HVAC cooling systems consume approximately 40% of a data center’s electricity. These systems traditionally use air conditioning, air handling and fans to cool the data center facility and IT equipment, ultimately resulting in high energy consumption and high carbon emissions. Data centers are moving to direct liquid cooling (DLC) systems to improve cooling efficiency, thus lowering their power usage effectiveness (PUE), operating expenses (OPEX) and carbon footprint.
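
As a rough worked example (an illustration, not a figure from the paper) of what a 40% cooling share implies for PUE, the ratio of total facility energy to IT equipment energy:

```latex
\mathrm{PUE} = \frac{E_{\mathrm{total}}}{E_{\mathrm{IT}}},
\qquad
E_{\mathrm{IT}} \le (1 - 0.40)\,E_{\mathrm{total}}
\;\Rightarrow\;
\mathrm{PUE} \ge \frac{1}{0.60} \approx 1.67
```

Shrinking the cooling share with DLC therefore feeds directly into a lower PUE.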

This paper describes how CoolIT Systems (CoolIT) meets the need for improved energy efficiency in data centers and includes case studies showing how CoolIT’s DLC solutions improve energy efficiency, increase rack density, lower OPEX, and enable sustainability programs. CoolIT is the global market and innovation leader in scalable DLC solutions for the world’s most demanding computing environments. CoolIT’s end-to-end solutions meet the rising demands for both cooling capacity and energy efficiency.

Download Now

Sponsored by Lenovo
