Revisiting Supercomputer Architectures

By Chris Willard

December 8, 2011

The chronology of high performance computing can be divided into “ages” based on the predominant system architectures of each period. Starting in the late 1970s, vector processors dominated HPC. By the end of the following decade, massively parallel processors were able to make a play for market leadership. For the last half of the 1990s, RISC-based SMPs were the leading technology. And finally, clustered x86-based servers captured the market lead in the early part of this century.

This architectural path was dictated by the technical and economic effects of Moore’s Law. Specifically, the doubling of processor clock speeds every 18 to 24 months meant that, without any effort on the user’s part, applications also roughly doubled in speed at the same rate. One effect of this “free ride” was to drive companies attempting to create new HPC architectures out of the market. Development cycles for new technology simply could not outpace Moore’s Law-driven gains in commodity technology, and product development costs for specialized systems could not compete against products sold into volume markets.

The more general-purpose systems were admittedly not the best architectures for HPC users’ problems. However, commodity-component-based computers were inexpensive, could be racked and stacked, and were continually getting faster. In addition, users could attempt to parallelize their applications across multiple compute nodes for additional speedups. In a recent Intersect360 study, users reported a wide range of scalable applications, with some using over 10,000 cores, but the median number of cores used by a typical HPC application was only 36.

In the mid 2000s, Moore’s Law went through a major course correction. While the number of transistors on a chip continued to double on schedule, the ability to increase clock speed hit a practical barrier: the “power wall.” The exponential increase in power required to further shorten processor cycle times hit practical cost and design limits. The power wall led to clock speeds stabilizing at roughly 3 GHz and multiple processor cores being placed on a single chip, with core counts now ranging from 2 to 16. This ended the free ride HPC users enjoyed from ever faster single-core processors and is forcing them to rewrite applications for parallelism.

In addition to the power wall, the scale-out strategy of adding capacity by simply racking and stacking more compute server nodes caused some users to hit other walls, specifically the computer room wall (or “wall wall”), where facilities issues became a major problem. These include physical space, structural support for high-density configurations, cooling, and getting enough electricity into the building.

The market is currently looking to a combination of four strategies to increase the performance of HPC systems and applications: parallel applications development; adding accelerators to standard commodity compute nodes; developing new purpose-built systems; and waiting for a technology breakthrough.

Parallelism is like the “little girl with the curl”: when parallelism is good it is very, very good, and when it is bad it is horrid. Very good parallel applications (aka embarrassingly parallel) fall into such categories as signal processing, Monte Carlo analysis, image rendering, and the TOP500 benchmark. The success of these areas can obscure the difficulty of developing parallel applications in other areas. Embarrassingly parallel applications have a few characteristics in common (a minimal example follows the list):

  • The problem can be broken up into a large number of sub-problems.
  • These sub-problems are independent of one another; that is, they can be solved in any order and without requiring any data transfer to or from other sub-problems.
  • The sub-problems are small enough to be effectively solved on whatever the compute node du jour might be.
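
As a concrete illustration (a minimal sketch, not drawn from the article), consider a Monte Carlo estimate of pi in Python: each worker solves an independent batch of random samples, in any order, with no data exchanged between sub-problems; the only communication is the final reduction of the counts. The worker and sample counts are illustrative.

```python
# Embarrassingly parallel sketch: independent Monte Carlo sub-problems.
import random
from multiprocessing import Pool

def count_hits(samples: int) -> int:
    """Count random points that land inside the unit quarter-circle."""
    rng = random.Random()
    hits = 0
    for _ in range(samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            hits += 1
    return hits

if __name__ == "__main__":
    workers, samples_per_worker = 8, 1_000_000
    with Pool(workers) as pool:
        # Each sub-problem runs independently; order of completion is irrelevant.
        hits = pool.map(count_hits, [samples_per_worker] * workers)
    # The only coordination needed is the final reduction.
    pi_estimate = 4.0 * sum(hits) / (workers * samples_per_worker)
    print(f"pi ~= {pi_estimate:.5f}")
```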

When these constraints break down, the programming problem first becomes interesting, then challenging, then maddening, then virtually impossible. The programmer must manage ever more complex data traffic patterns between sub-problems, plus control the order of operations of various tasks, plus attempt to find ways to break larger sub-problems into sub-sub-problems, and so on. If this were easy it would have been done long ago.
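
To see where the difficulty comes from, here is a minimal in-process sketch (illustrative, not from the article) of a problem whose sub-problems are no longer independent: a 1D diffusion-style stencil split into chunks. Each step, every chunk needs boundary (“halo”) values from its neighbors, so the programmer must now schedule data traffic and ordering between sub-problems before any chunk can be updated. Chunk counts, sizes, and step counts are arbitrary.

```python
# Dependent sub-problems: a 1D stencil split into chunks with halo exchange.
NUM_CHUNKS, CHUNK_SIZE, STEPS = 4, 8, 10

# Each chunk carries one halo cell on each side of its interior cells.
chunks = [[0.0] * (CHUNK_SIZE + 2) for _ in range(NUM_CHUNKS)]
chunks[0][1] = 100.0  # a hot spot in the first chunk's interior

for _ in range(STEPS):
    # Halo exchange: copy neighbors' boundary interior values into the halos.
    # This data traffic must complete before any chunk can be updated.
    for i in range(NUM_CHUNKS):
        chunks[i][0] = chunks[i - 1][-2] if i > 0 else 0.0
        chunks[i][-1] = chunks[i + 1][1] if i < NUM_CHUNKS - 1 else 0.0
    # Only after the exchange can each chunk update its interior cells.
    for c in chunks:
        interior = [0.25 * c[j - 1] + 0.5 * c[j] + 0.25 * c[j + 1]
                    for j in range(1, CHUNK_SIZE + 1)]
        c[1:CHUNK_SIZE + 1] = interior

print([round(v, 3) for v in chunks[0][1:-1]])  # interior of the first chunk
```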

Adding accelerators to standard computer architectures is a technique that has been used throughout the history of computer architecture development. Current HPC markets are experimenting with graphics processing units (GPUs) and to a lesser extent field programmable gate arrays (FPGAs).

GPUs have long been a standard component in desktop computers. GPUs are of interest for several reasons: they are inexpensive commodity components, they have fast independent memories, and they provide significant parallel computational power.

FPGAs are standard devices long in use within the electronics industry for quickly developing and fielding specialty chips that are often replaced in products by standard ASICs over time. FPGAs allow HPC users to essentially customize the computer to the requirements of their applications. In addition they should benefit from Moore’s Law advancements over time.

Challenges for accelerator-based systems stem from a single program being run across two different processing devices: a general-purpose processor with limited speed, and an accelerator with high processing speed but limited overall functionality. Challenges fall into three major areas:

  • Programming — Computers can be built to arbitrarily high levels of complexity; however, the average complexity of computer programmers remains a constant. Accelerators add two levels of complexity to applications development: first, writing a single program that is divided between two different processor types; and second, writing a program that can take advantage of the specific characteristics of the accelerator.
  • Control and communications — Performance gains from accelerators can be diminished or lost due to the overhead of setting up the problem on the accelerator, moving data between the standard processor and the accelerator, and coordinating the operations of both compute units (see the rough cost model after this list).
  • Data management — Programming complexity is increased and performance is reduced in cases where the standard processor and accelerator use separate independent memories. Issues for managing data across multiple processors range from determining proper data decomposition, to efficiently moving data in and out of the proper memories, to stalling processes while waiting on data from another memory, to debugging programs where it is unclear which processor has last modified a data item.
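
The control-and-communications point can be made concrete with a back-of-the-envelope cost model (assumed numbers, not measurements): even a large raw accelerator speedup can collapse once data movement and launch overhead are charged against it.

```python
# Rough model of effective accelerator speedup once overheads are included.
def effective_speedup(compute_seconds: float,
                      accel_speedup: float,
                      bytes_moved: float,
                      link_bandwidth: float,
                      launch_overhead: float) -> float:
    """Speedup over the host once transfer and launch costs are included."""
    accel_time = (compute_seconds / accel_speedup      # accelerated kernel
                  + bytes_moved / link_bandwidth       # host<->accelerator copies
                  + launch_overhead)                    # setup/coordination cost
    return compute_seconds / accel_time

# Illustrative case: a 1-second kernel, a 50x faster accelerator,
# 4 GB moved over an 8 GB/s link, plus 1 ms of launch overhead.
print(effective_speedup(1.0, 50.0, 4e9, 8e9, 1e-3))   # ~1.9x, not 50x
```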

Many of these issues are associated with parallel computing in general; however, they are still significant for accelerator-based operations, and the close coupling between the processor and the accelerator may require programmers to have a deep understanding of the behavior of the physical hardware components.

Purpose-built systems are systems designed to meet the requirements of HPC workflows. (These systems were initially called supercomputers.) In today’s market, new HPC architectures still make use of commodity components such as processor chips, memory chips/DIMMs, accelerators, I/O ports, and so on. However, they introduce novel technologies in such areas as:

  • Memory subsystems — Arguably the most important part of any HPC computer is the memory system. In a typical workflow, HPC applications stream a few large data sets from storage through memory, into processors, and back again. In addition, requirements such as sparse matrix calculations call for fast access to non-contiguous data elements. The speed at which the data can be moved is the determining factor in ultimate performance for a large portion, if not the majority, of HPC applications (a back-of-the-envelope model follows this list).
  • Parallel system interconnects — Parallel systems essentially address the memory bandwidth problem by creating a logically two-dimensional memory structure. One dimension is within nodes, i.e., between a node’s local memory and its local processors; total bandwidth in this case is the sum of all node bandwidths and is very high. The second dimension is the node-to-node interconnect, which is essentially a specialized local area network that is significantly slower in both bandwidth and latency than local node memories. As applications become less embarrassingly parallel, communication over the interconnect increases, and interconnect performance tends to become the limiting factor in overall application performance.
  • Packaging — The speed of computer components, i.e., processors and memories, can be increased by reducing the temperature at which they run. In addition, parallel computing latency issues can be addressed by simply packing nodes closer together, which requires both fitting more wires into a smaller space and removing large amounts of heat from relatively small volumes.
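
To illustrate the memory subsystem point, here is a rough, assumption-laden model of a streaming kernel in the spirit of the STREAM triad: for typical bandwidth and peak-compute figures, the time to move the data, not the time to do the arithmetic, sets the floor on runtime. The bandwidth and flop rates used are illustrative, not those of any particular machine.

```python
# Rough estimate of whether a streaming kernel is memory- or compute-bound.
def triad_time(n: int, bandwidth_gbs: float, peak_gflops: float) -> dict:
    """Estimate time for a STREAM-triad style kernel: a[i] = b[i] + s*c[i]."""
    bytes_moved = 3 * 8 * n          # read b, read c, write a (8-byte doubles)
    flops = 2 * n                    # one multiply and one add per element
    t_mem = bytes_moved / (bandwidth_gbs * 1e9)
    t_cpu = flops / (peak_gflops * 1e9)
    return {"memory_bound_s": t_mem, "compute_bound_s": t_cpu,
            "limited_by": "memory" if t_mem > t_cpu else "compute"}

# 100M elements, 100 GB/s of memory bandwidth, 500 GFLOPS of peak compute.
print(triad_time(100_000_000, 100.0, 500.0))
```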

Developing specialized HPC architectures has, up until recently, been limited by the effects of Moore’s Law, which has shortened product cycle times for standard products, and limited market opportunities for specialized systems. Those HPC architecture efforts that have gone forward have generally received support from government and/or large corporation R&D funds.

Waiting for a technology breakthrough (or the “then a miracle happens” strategy) is always an alternative; it is also the path of least resistance, and one step short of despair. Today we are looking at such technologies as optical computing, quantum entanglement communications, and quantum computers for potential future breakthroughs.

The issue with relying on future technologies is that there is no way to tell, first, if a technology concept can be turned into a viable product (there is many a slip between the lab and the loading dock). Second, even if it can be shown that a concept can be productized, it is virtually impossible to predict when the product will actually reach the market. Even products based on well understood production technologies can badly overrun schedules, sometimes bringing to grief those vendors and users who bet on new products.

The above arguments suggest that the next age of high performance computing could be based on anything from reliance on clusters with speed-boost add-ons to a brave new computer based on technologies that may not have been heard of yet. (You can never go wrong with a forecast like that.) That said, I am willing to lay odds on purpose-built computers becoming a major component, if not the defining technology, of the HPC market within the next five years, for two major reasons.

First, there is no “easy” technical solution. Single thread performance has plateaued; the usefulness of accelerators is dependent on both the parallelism inherent to the application and the connectivity between the accelerator and the rest of the system; and parallelism, while an advantage where it can be found, is not a panacea for computing performance.

Second, the economics of HPC system development have changed. Users cannot simply sit back and wait for a faster CPU, but must make significant investments in new software, new architectures, or both. Staying with the old economic model will lead to the computational tools defining the science, with work restricted to those areas that run well on off-the-shelf computers.

The HPC market is at a point where the business climate will support greater levels of innovation at the architectural level, which should lead to new organizing principles for HPC systems. The goal is to find new approaches that effectively combine and optimize the various standard components into systems that can continue to grow performance across a broad range of applications.

Of course we can always wait for a miracle to happen.
