STREAM Benchmark Author McCalpin Traces System Balance Trends

November 7, 2016

When Dr. John D. McCalpin introduced the STREAM benchmark in 1991, it had already become clear that peak arithmetic rate was not an adequate measure of HPC system performance for many applications. Since then, CPU performance has continued to outpace memory performance, widening the processor-memory speed gap known as the memory wall. In an invited talk at SC16, McCalpin will trace the history of changing “balances” between computation, memory latency, and memory bandwidth and will explore the impact on the next generation of HPC systems (a minimal sketch of the STREAM kernels follows this entry). Read more…

By Tiffany Trader
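
For readers unfamiliar with what STREAM actually measures, the sketch below is a minimal, unofficial rendition in C, not McCalpin's reference code. It shows the benchmark's four kernels — Copy, Scale, Add, and Triad — each of which streams through arrays far larger than cache, so sustained memory bandwidth rather than arithmetic rate sets the pace. The array size and single aggregate timing here are illustrative placeholders; the real benchmark times each kernel separately over multiple trials.

    /* Minimal, unofficial sketch of the STREAM kernels (Copy, Scale, Add, Triad).
     * Illustration only: the real benchmark times each kernel separately over
     * several trials and requires arrays much larger than the last-level cache. */
    #include <stdio.h>
    #include <time.h>

    #define N 10000000              /* ~80 MB per array, chosen to defeat caching */

    static double a[N], b[N], c[N];

    int main(void)
    {
        const double scalar = 3.0;
        for (long i = 0; i < N; i++) { a[i] = 1.0; b[i] = 2.0; c[i] = 0.0; }

        clock_t t0 = clock();
        for (long i = 0; i < N; i++) c[i] = a[i];                  /* Copy  */
        for (long i = 0; i < N; i++) b[i] = scalar * c[i];         /* Scale */
        for (long i = 0; i < N; i++) c[i] = a[i] + b[i];           /* Add   */
        for (long i = 0; i < N; i++) a[i] = b[i] + scalar * c[i];  /* Triad */
        double secs = (double)(clock() - t0) / CLOCKS_PER_SEC;

        /* Bytes moved: Copy and Scale touch two arrays each, Add and Triad three. */
        double bytes = (2 + 2 + 3 + 3) * (double)N * sizeof(double);
        printf("approximate aggregate bandwidth: %.0f MB/s\n", bytes / secs / 1e6);
        return 0;
    }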

Future Challenges of Large-Scale Computing

April 15, 2013

Ahead of his opening conference keynote at ISC'13, Bill Dally, chief scientist at NVIDIA and senior vice president of NVIDIA Research, shares his views on where HPC is headed. Among the key topics covered are the demand for heterogeneous computing, overcoming the memory wall, the implications of government belt-tightening, and much more... Read more…

By Nicole Hemsoth

Micron Readies Hybrid Memory Cube for Debut

January 17, 2013

Next-generation memory maker Micron Technology was one of the many innovative companies demonstrating its wares on the Supercomputing Conference (SC12) show floor last November. Micron's General Manager of Hybrid Technology, Scott Graham, was on hand to discuss the latest developments in the company's Hybrid Memory Cube (HMC) technology, a multi-chip module that aims to address one of the biggest challenges in high performance computing: scaling the memory wall. Read more…

By Tiffany Trader

Hybrid Memory Cube Angles for Exascale

July 10, 2012

Computer memory is currently undergoing something of an identity crisis. For the past 8 years, multicore microprocessors have been creating a performance discontinuity, the so-called memory wall. It's now fairly clear that this widening gap between compute and memory performance will not be solved with conventional DRAM products. But there is one technology under development that aims to close that gap, and its first use case will likely be in the ethereal realm of supercomputing. Read more…

By Michael Feldman

HP Scientists Envision 10-Teraflop Manycore Chip

March 15, 2012

In high performance computing, Hewlett-Packard is best known for supplying bread-and-butter HPC systems, built with standard processors and interconnects. But the company's research arm has been devising a manycore chipset, which would outrun the average-sized HPC cluster of today. The design represents a radical leap in performance, and if implemented, would fulfill the promise of exascale computing. Read more…

By Michael Feldman

Revisiting Supercomputer Architectures

December 8, 2011

Additional performance increases for supercomputers are being confounded by three walls: the power wall, the memory wall and the datacenter wall (the "wall wall"). To overcome these hurdles, the market is currently looking to a combination of four strategies: parallel applications development, adding accelerators to standard commodity compute nodes, developing new purpose-built systems, and waiting for a technology breakthrough. Read more…

By Chris Willard

Finding the Door in the Memory Wall, Part 2

March 23, 2009

It is a common belief that only sequential applications need to be adapted for parallel execution on multicore processors. However, many existing parallel algorithms are also a poor fit. They have simply been optimized for the wrong design parameters. Read more…

By Erik Hagersten

Finding the Door in the Memory Wall, Part 1

March 3, 2009

As cores proliferate on CPUs, the memory wall rises higher and applications find it increasingly difficult to use processor resources efficiently. But hardware alone is not to blame. Making the software more efficient may be the simplest and least expensive way to save power and resources on modern multicore architectures (a brief illustration follows this entry). Read more…

By Erik Hagersten
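
To make Hagersten's point concrete, the fragment below is a generic illustration, not taken from his articles: the same summation written with two traversal orders over a row-major C matrix. The strided version wastes most of every cache line it fetches and hammers memory bandwidth, while the unit-stride version streams through memory with no change to the arithmetic — a reminder that software structure, not just hardware, determines how hard an application hits the memory wall.

    /* Generic illustration: identical arithmetic, two traversal orders
     * over a row-major matrix, with very different memory behavior. */
    #include <stdio.h>
    #include <stddef.h>

    #define ROWS 4096
    #define COLS 4096

    static double m[ROWS][COLS];   /* ~128 MB, far larger than any cache */

    /* Cache-unfriendly: column-wise walk over a row-major matrix.
     * Each access jumps COLS doubles ahead, so most of every fetched
     * cache line is wasted and memory traffic balloons. */
    static double sum_column_major(void)
    {
        double sum = 0.0;
        for (size_t j = 0; j < COLS; j++)
            for (size_t i = 0; i < ROWS; i++)
                sum += m[i][j];
        return sum;
    }

    /* Cache-friendly: row-wise walk touches memory with unit stride,
     * using every element of each line and enabling hardware prefetch. */
    static double sum_row_major(void)
    {
        double sum = 0.0;
        for (size_t i = 0; i < ROWS; i++)
            for (size_t j = 0; j < COLS; j++)
                sum += m[i][j];
        return sum;
    }

    int main(void)
    {
        for (size_t i = 0; i < ROWS; i++)
            for (size_t j = 0; j < COLS; j++)
                m[i][j] = 1.0;
        printf("column-major sum: %.0f\n", sum_column_major());
        printf("row-major sum:    %.0f\n", sum_row_major());
        return 0;
    }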
