December 08, 2011
The chronology of high performance computing can be divided into "ages" based on the predominant system architecture of each period. Starting in the late 1970s, vector processors dominated HPC. By the end of the next decade, massively parallel processors were able to make a play for market leadership. For the last half of the 1990s, RISC-based SMPs were the leading technology. And finally, clustered x86-based servers captured the market in the early part of this century.
This architectural path was dictated by the technical and economic effects of Moore's Law. Specifically, the doubling of processor clock speed every 18 to 24 months meant that, without any effort on the developer's part, applications also roughly doubled in speed at the same rate. One effect of this "free ride" was to drive companies attempting to create new HPC architectures out of the market. Development cycles for new technology simply could not outpace Moore's Law-driven gains in commodity technology, and product development costs for specialized systems could not compete against products sold into volume markets.
These general-purpose systems were admittedly not the best architectures for HPC users' problems. However, computers built from commodity components were inexpensive, could be racked and stacked, and were continually getting faster. In addition, users could attempt to parallelize their applications across multiple compute nodes for additional speedup. In a recent Intersect360 study, users reported a wide range of scalable applications, with some using over 10,000 cores, but the median number of cores used by a typical HPC application was only 36.
In the mid 2000s, Moore's Law went through a major course correction. While the number of transistors on a chip continued to double on schedule, the ability to increase clock speed hit a practical barrier: "the power wall." The exponential increase in power required to keep raising processor clock speeds ran into practical cost and design limits. The power wall led to clock speeds stabilizing at roughly 3 GHz and to multiple processor cores being placed on a single chip, with core counts now ranging from 2 to 16. This ended the free ride that HPC users had enjoyed from ever faster single-core processors and is forcing them to rewrite applications for parallelism.
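The shift this implies, from waiting for faster clocks to spreading work across cores, can be sketched in a few lines. The snippet below is purely illustrative (the article names no particular language or library); it uses Python's standard multiprocessing module to rewrite a serial loop so the same kernel runs across all available cores.

```python
import math
from multiprocessing import Pool

def work(x):
    # Stand-in for a compute-heavy kernel.
    return sum(math.sin(x * i) for i in range(10_000))

if __name__ == "__main__":
    inputs = range(1_000)

    # Serial version: bounded by single-core speed, which has stopped improving.
    serial = [work(x) for x in inputs]

    # Multicore version: the same kernel spread across all available cores.
    with Pool() as pool:
        parallel = pool.map(work, inputs)
```

Even this trivial rewrite carries the new obligations: the work must divide into independent pieces, and it is now the programmer, not the hardware, who is responsible for finding that division.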
In addition to the power wall, the scale-out strategy of adding capacity by simply racking and stacking more compute server nodes caused some users to hit other walls, most notably the computer room wall (or "wall wall"), where facilities issues become a major problem. These include physical space, structural support for high-density configurations, cooling, and getting enough electricity into the building.
The market is currently looking to a combination of four strategies to increase the performance of HPC systems and applications: parallel applications development; adding accelerators to standard commodity compute nodes; developing new purpose-built systems; and waiting for a technology breakthrough.
Parallelism is like the "little girl with the curl": when it is good it is very, very good, and when it is bad it is horrid. Very good parallel applications (aka embarrassingly parallel) fall into such categories as signal processing, Monte Carlo analysis, image rendering, and the TOP500 benchmark. The success of these areas can obscure the difficulty of developing parallel applications in other areas. Embarrassingly parallel applications have a few characteristics in common: the work divides naturally into many independent sub-problems, those sub-problems need little or no communication with one another, and the order in which they complete does not matter.
When these constraints break down, the programming problem first becomes interesting, then challenging, then maddening, then virtually impossible. The programmer must manage ever more complex data traffic patterns between sub-problems, plus control the order of operations of various tasks, plus attempt to find ways to break larger sub-problems into sub-sub-problems, and so on. If this were easy it would have been done long ago.
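As a rough illustration (the article itself includes no code), the contrast can be shown with two toy kernels in Python: a Monte Carlo estimate, whose samples are completely independent, and a one-dimensional heat stencil, whose points depend on their neighbors at every step.

```python
import random

# Embarrassingly parallel: every sample is independent, so samples can be
# farmed out to any number of workers and simply summed at the end.
def monte_carlo_pi(n_samples):
    hits = sum(1 for _ in range(n_samples)
               if random.random() ** 2 + random.random() ** 2 <= 1.0)
    return 4.0 * hits / n_samples

# Not embarrassingly parallel: each interior point of a heat stencil depends on
# its neighbors from the previous step. Splitting the array across workers
# forces them to exchange boundary values every time step and to stay in
# lockstep: exactly the data traffic and ordering problems described above.
def heat_step(u, alpha=0.1):
    # Interior points update from their neighbors; boundaries stay fixed.
    return [u[0]] + [u[i] + alpha * (u[i - 1] - 2 * u[i] + u[i + 1])
                     for i in range(1, len(u) - 1)] + [u[-1]]
```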
Adding accelerators to standard architectures is a technique that has been used throughout the history of computer development. Current HPC markets are experimenting with graphics processing units (GPUs) and, to a lesser extent, field programmable gate arrays (FPGAs).
GPUs have long been a standard component in desktop computers. GPUs are of interest for several reasons: they are inexpensive commodity components, they have fast independent memories, and they provide significant parallel computational power.
FPGAs are standard devices, long used within the electronics industry to quickly develop and field specialty chips that are often replaced in products by standard ASICs over time. FPGAs allow HPC users to essentially customize the computer to the requirements of their applications. In addition, they should benefit from Moore's Law advancements over time.
Challenges for accelerator-based systems stem from running a single program across two different processing devices: a general-purpose processor with limited speed and an accelerator with high processing speed but limited overall functionality. The challenges fall into three major areas:
Many of these issues are associated with parallel computing in general; however, they are still significant for accelerator-based operations, and the close coupling between the processor and the accelerator may require programmers to have a deep understanding of the behavior of the physical hardware components.
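A minimal sketch of what that coupling looks like in practice is shown below. It uses CuPy, a NumPy-like GPU library chosen here only for brevity and not mentioned in the article; the point is the two explicit copies between the host's memory and the accelerator's separate memory.

```python
import numpy as np
import cupy as cp  # requires an NVIDIA GPU and the CuPy package

# Data starts in host (CPU) memory...
host_data = np.random.rand(10_000_000).astype(np.float32)

# ...and must be copied across the bus into the GPU's separate memory.
device_data = cp.asarray(host_data)

# The data-parallel work itself runs quickly on the accelerator.
device_result = cp.sqrt(device_data) * 2.0 + 1.0

# The result must then be copied back before the CPU can use it.
host_result = cp.asnumpy(device_result)
```

If the computation between the two copies is too small, the transfers dominate and the accelerator is no faster than the host, which is why the connectivity between the devices matters as much as the accelerator's raw speed.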
Purpose-built systems are designed to meet the requirements of HPC workflows. (These systems were initially called supercomputers.) In today's market, new HPC architectures still make use of commodity components such as processor chips, memory chips/DIMMs, accelerators, I/O ports, and so on. However, they introduce novel technologies in such areas as:
Developing specialized HPC architectures has, until recently, been limited by the effects of Moore's Law, which shortened product cycle times for standard products and limited the market opportunities for specialized systems. Those HPC architecture efforts that have gone forward have generally received support from government and/or large corporate R&D funds.
Waiting for a technology breakthrough (or the "then a miracle happens" strategy) is always an alternative; it is also the path of least resistance, and one step short of despair. Today we are looking at such technologies as optical computing, quantum entanglement communications, and quantum computers for potential future breakthroughs.
The issue with relying on future technologies is that there is no way to tell, first, whether a technology concept can be turned into a viable product; there is many a slip between the lab and the loading dock. Second, even if it can be shown that a concept can be productized, it is virtually impossible to predict when the product will actually reach the market. Even products based on well understood production technologies can badly overrun schedules, sometimes bringing to grief those vendors and users who bet on new products.
The above arguments suggest that the next age of high performance computing could be based on anything from clusters with speed-boosting add-ons to a brave new computer based on technologies that may not have been heard of yet. (You can never go wrong with a forecast like that.) That said, I am willing to lay odds on purpose-built computers becoming a major component, if not the defining technology, of the HPC market within the next five years, for two major reasons.
First, there is no "easy" technical solution. Single-thread performance has plateaued; the usefulness of accelerators depends on both the parallelism inherent in the application and the connectivity between the accelerator and the rest of the system; and parallelism, while an advantage where it can be found, is not a panacea for computing performance.
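The limit on what parallelism can deliver is usually expressed as Amdahl's Law, which the argument above implies but does not name. A short calculation, sketched here only for illustration, shows how quickly the serial fraction of a program caps its speedup no matter how many cores or accelerators are thrown at it.

```python
# Amdahl's Law: if a fraction p of a program can be parallelized, the best
# possible speedup on n processors is 1 / ((1 - p) + p / n).
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

# Even with 1,000 processors, a program that is 90% parallel speeds up
# less than 10x; the remaining 10% of serial work dominates.
for p in (0.50, 0.90, 0.99):
    print(f"p = {p:.2f}: speedup on 1000 cores = {amdahl_speedup(p, 1000):.1f}x")
```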
Second, the economics of HPC system development have changed. Users cannot simply sit back and wait for a faster CPU; they must make significant investments in new software, new architectures, or both. Staying with the old economic models will lead to the computational tools defining the science, with work restricted to those areas that run well on off-the-shelf computers.
The HPC market is at a point where the business climate will support greater levels of innovation at the architectural level, which should lead to new organizing principles for HPC systems. The goal is to find new approaches that effectively combine and optimize the various standard components into systems that can continue to grow performance across a broad range of applications.
Of course we can always wait for a miracle to happen.