By Mootaz Elnozahy

April 7, 2006


The High Productivity Computing System (HPCS) program offers IBM an exciting opportunity to target stretch goals and consider bold ideas, all within the realistic constraints of delivering innovation in a commercially viable product. For over four and a half years, our team has focused on the applications of the high-end computing community and explored new technologies that will introduce fundamental and exciting changes in the way we program and use high-end systems. IBM envisions a quantum leap in performance and productivity, providing compelling differentiation between high-value systems and so-called commodity clusters, and making the programming of parallel systems an attractive end-user experience.

The PERCS (Productive, Easy-to-Use, Reliable Computing System) team members come from IBM, Los Alamos National Laboratory and a dozen universities. During phases one and two of the HPCS program, the team collaborated to examine new ideas that span the entire computing stack, from basic technology to algorithms. The result is a concept system based on IBM's mainstream POWER processor-based servers. We envision technologies resulting from the PERCS effort starting to appear in IBM systems as early as 2007 (in software), culminating in several peta-level systems by the middle of 2011 (assuming the IBM-HPCS relationship continues through phase 3). The technology will also serve a broad range of configurations, especially small-scale deployments, so we expect to reach more users than just those interested in the maximum configuration of the system.

This article summarizes our vision, our general approach, and the challenges of petascale computing. The competitive nature of the program and the impending down-selection for phase 3 of HPCS mean that we cannot reveal the details of our design or technologies, but we hope to provide a glimpse of what we are considering and to instill an appreciation of the issues involved in petascale systems.

A Vision for 2010

The following section is devoted to IBM's projections: a vision for 2010. IBM believes high productivity systems in 2010 will feature a rich programming environment with many tools that help in programming new applications and maintaining existing ones. The programming environment will support existing programming models and languages, in addition to new emerging models designed for scalability to the peta-level. An application can be written using a mix of legacy and new programming languages, promoting code and asset reuse while availing programmers of the advantages of newer technologies. Open source communities will own many of the tools and parts of the software stack. This will help ensure long-term viability and promote a common look-and-feel and portability across different architectures. Tools will automate many of the performance tuning tasks and catch bugs statically, and newer programming models will be designed to prevent unnecessary bugs from happening in the first place (e.g., illegal memory references, deadlocks, unsynchronized access to shared variables, out-of-bounds array indexing).

IBM expects the HPCS systems will be managed via rich graphical interfaces that automate many of the monitoring and recovery tasks, enabling fewer system administrators to handle larger systems more effectively. Across the WAN, users will be able to share their data files and programs seamlessly between the HPCS system and other clusters in the enterprise. Open source operating systems and hypervisors will provide HPC-oriented virtualization, security, resource management, affinity control, resource limits, checkpoint-restart and reliability features that will improve the robustness and availability of the system.

The systems likely will feature balanced and flexible architectures that adapt to application needs. Innovations in the memory subsystem and inter-process communications will mitigate the effects of data access latency, whether local or across the interconnect. The result will be better utilization of resources and an unprecedented performance leap along the four components of the HPC Challenge benchmark. Advanced packaging and water cooling will reduce system footprint, with computing densities double those of today's best systems.
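The HPC Challenge components stress very different parts of a machine; the STREAM "triad" component, for example, is limited by memory bandwidth rather than arithmetic. The sketch below is illustrative only, not the official benchmark code, and the bandwidth accounting (two reads plus one write per element) is the conventional STREAM convention:

```python
import time

def stream_triad(a, b, c, scalar):
    """STREAM 'triad' kernel: a[i] = b[i] + scalar * c[i].
    Performance is limited by memory bandwidth, not arithmetic."""
    for i in range(len(a)):
        a[i] = b[i] + scalar * c[i]
    return a

def bandwidth_gbs(n, seconds, bytes_per_elem=8):
    """Three arrays are touched per element (two reads, one write)."""
    return 3 * n * bytes_per_elem / seconds / 1e9

n = 1_000_000
a, b, c = [0.0] * n, [1.0] * n, [2.0] * n
t0 = time.perf_counter()
stream_triad(a, b, c, 3.0)
elapsed = time.perf_counter() - t0
# every element becomes 1.0 + 3.0 * 2.0 = 7.0
```

A real benchmark run would use contiguous native arrays and report the best of several trials; the point here is only the shape of a bandwidth-bound kernel.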


The HPCS program calls for a departure from the traditional myopic focus on performance to consider instead the broader value that a system provides, i.e., productivity. Time-to-solution, performance, portability and robustness are the dimensions of the productivity space according to the HPCS program vision. These dimensions may sometimes lead to conflicting requirements. For instance, the desire for portability imposes restrictions that can limit the range of innovations that one may consider for performance. Similarly, high-level abstractions that reduce the difficulty of programming entail a performance and resource overhead. Throughout the course of the program, our team discarded some interesting ideas that could improve one aspect of the productivity proposition at the expense of the others. To address these tradeoffs, we used "real application" analysis and held countless discussions with the user community.

During phase 2 of the HPCS program, we conducted an extensive analysis of a number of “large” applications that were provided to the three HPCS contestants. These applications were particularly useful because they represent “real” workloads that stress a wide range of system parameters (e.g., CPU, caches, memory, network, storage), unlike benchmarks that narrowly focus on one system aspect or another. We used the analysis of the applications as a guide to evaluate innovations we proposed and also to develop new innovations from the insight we gained. The investigation revealed a wide range of often conflicting user requirements. For example, some users prefer the shared memory approach while others are heavily invested in message passing. Our approach depends on a high degree of configurability to address the disparity in requirements.

Our proposed system centers on an innovative processor chip designed to leverage IBM's unique advantages in CMOS technology and the POWER processor server line. Advances in circuits and manufacturing processes can improve chip yield and lower susceptibility to soft errors, whose rates (SER) are likely to flare up in future generations of CMOS technology. We also plan to leverage our unique processes to reduce the latency of memory accesses by placing the processors close to large memory arrays. The chip itself can be configured to build different system flavors, each aimed at a particular kind of workload. Various interconnect options will be available, providing a tradeoff between cost and performance. Thus, IBM believes it will be possible to build systems optimized for commercial workloads, MPI applications, or HPC applications that depend on the shared memory abstraction, such as OpenMP or SHMEM. The wide range of configurations enables us to utilize the innovations in the mainstream product line and helps ensure commercial viability of future HPCS systems. Furthermore, innovations in system packaging will allow us to provide computing densities that are orders of magnitude better than today's densest systems.

On the software side, our approach features a large set of tools integrated into a modern, user-friendly programming environment. The combined tool set and environment will support both legacy programming models and languages (MPI, OpenMP, C, C++, Fortran, etc.), and the emerging ones (PGAS, UPC, etc.). We have also worked with an experimental programming language, called X10, which we plan to subject to further research and prototyping in the first 18 months of phase three if our proposal is approved. After that period, a convergence of the research performed under the HPCS program should define a future, standard programming language and programming models that will benefit from the insights gained from X10 and other efforts.

X10 was designed for parallel processing from the ground up. It generally falls under the Partitioned Global Address Space (PGAS) category, and strikes a balance between providing a programmer-friendly, high-level abstraction and exposing the topology of the system to enable the programmer to control data placement and traffic. It features many innovations that will increase programmer productivity and system performance. Programmer productivity will benefit from high-level synchronization abstractions through the use of atomic sections, and the language helps ensure deadlock freedom under most circumstances. Proven productivity enhancers such as strong typing and automatic memory management relieve the programmer from many low-level details. X10 also will address performance issues through pervasive use of asynchronous interactions among the parallel threads, supported through the use of futures, and new concepts such as clocks. The language attempts to avoid the blocking synchronization style that affects system performance and limits program scalability (e.g., 2-way communication in MPI). X10 will require programmers to think differently about their programs, and early experiments have shown that the approach is promising and can improve time to solution compared to other alternatives.
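X10's asynchronous style can be approximated in mainstream languages: child activities are spawned without blocking, and the parent waits for all of them at a single join point (X10's "finish" construct). The sketch below uses Python futures purely as an analogy; the `finish` helper and task names are illustrative and are not X10 syntax:

```python
from concurrent.futures import ThreadPoolExecutor

def finish(tasks):
    """Rough analogue of X10's 'finish': spawn all 'async' activities,
    then block until every one completes and collect their results."""
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(fn, *args) for fn, args in tasks]
        return [f.result() for f in futures]

def partial_sum(chunk):
    return sum(chunk)

# Spawn four asynchronous partial sums, join once, then reduce.
data = list(range(100))
chunks = [data[i:i + 25] for i in range(0, 100, 25)]
total = sum(finish([(partial_sum, (c,)) for c in chunks]))
```

The key contrast with a rendezvous style is that no activity waits pairwise on another; synchronization happens only at the single enclosing join, which is what makes the pattern easier to scale and to reason about for deadlock freedom.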

There are many more aspects of the system design than we can cover here, but suffice it to say, our design addresses almost every aspect of the system with an integrated hardware-software approach. This aggressive vision entails a lot of risks, some of which we list to illustrate the difficulty of the task ahead.
Challenges Toward Petascale Computing

While today's high-end systems have issues concerning ease of use and programming, the sheer scale of the contemplated peta-level systems in 2010 brings new challenges that cannot be solved by following an evolutionary approach. Challenges include:

a.  Cost: Balanced peta-level systems that feature multi-petaops performance levels will require petabytes of memory and tens or even hundreds of petabytes of disk storage, commensurate with the expected performance and where components will be added not only for storage capacity but also for bandwidth. Projections of cost and power consumption for DRAM and disks translate to “interesting” procurement and operational costs. It is also important to note that these projections are a product of the industry and technology and are largely outside the control of the three HPCS vendors.

Our design factors in cost reduction across all components as an essential goal, and we evaluate innovations not only for their technical effectiveness but also for their cost efficacy. Consequently, our design eliminates or provides alternatives to components with poor cost/performance contributions, and increases the utilization of the most expensive system components.

b.  Quantifying Productivity: The HPCS vision seeks a tenfold improvement in productivity, an interesting challenge given that no one has a solid handle on quantifying productivity today. In PERCS, we introduced an economic model early on to express productivity, and worked throughout phase two of the program to develop metrics and measurement methodologies that quantify productivity in a pragmatic manner and reduce subjective assessment. We also developed various techniques, some of which promise to automate the evaluation of system and programmer productivity.
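One common way to make "productivity" concrete, in the spirit of the economic models explored under HPCS (e.g., Kepner's relative-productivity framing), is speedup delivered per unit of total relative cost, where cost folds in hardware, software and programming effort. The function below is a generic illustration of that idea, not the actual PERCS model:

```python
def relative_productivity(speedup, relative_cost):
    """Toy productivity metric: application speedup divided by the
    relative cost of achieving it (hardware + software + labor).
    A bigger number means more delivered value per dollar/hour."""
    return speedup / relative_cost

# A system 10x faster but 4x more expensive overall delivers a
# productivity gain of 2.5x, well short of the naive 10x headline.
gain = relative_productivity(10, 4)
```

The point of even a toy model is that it forces time-to-solution, cost and effort into the same denominator, which is what makes "tenfold productivity" a measurable target rather than a slogan.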

c.  Programming model: The Message Passing Interface (MPI) is arguably today's prevailing programming model in high-end systems. MPI uses messages for both synchronization and transfer of data, with a prevailing synchronous style of communication using the rendezvous abstraction. The community is aware of the inherent barriers that result from this synchronous nature of MPI, and various forms of asynchronous or one-sided communication have been proposed. These, however, are not in much use today for a variety of reasons. As a result, there is a huge volume of legacy investment in software that will not scale easily to peta-level performance but must be protected, as it is unreasonable to expect the community to rewrite existing programs to enjoy the benefits of HPCS-class machines. Therefore, serious effort and resources must be directed toward improving the performance and scalability of these legacy applications.
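The contrast between two-sided rendezvous and one-sided communication can be sketched with threads. In the rendezvous style, both parties must participate and the receive blocks until the matching send arrives; in the one-sided style, the origin writes directly into memory the target has exposed, with no matching call on the target. This is an analogy in plain Python, not MPI code:

```python
import threading
import queue

# Two-sided (rendezvous-like): the receive blocks until a matching send.
channel = queue.Queue()

def sender():
    channel.put(42)                    # matches the blocking get() below

received = []
t = threading.Thread(target=sender)
t.start()
received.append(channel.get())         # both sides must rendezvous here
t.join()

# One-sided (put-like): the origin writes into the target's exposed
# "window"; the target executes no receive at all.
window = [0] * 4                       # memory exposed by the target

def one_sided_put(win, index, value):
    win[index] = value                 # remote write, no handshake

one_sided_put(window, 2, 99)
```

The scalability argument in the text follows directly: every rendezvous couples two processes' progress, while a one-sided put lets the origin proceed independently, at the price of the programmer (or language runtime) managing consistency of the window.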

However, one must also recognize the need for new programming models that address the scalability problem at a fundamental level and meet the peta-level scalability challenge. Such new programming models must avoid unnecessary synchronization and enable new applications to be written without heroic efforts in programming. There are various risks with the undertaking of such an effort, including the technical challenge of realizing scalability and performance, users' resistance to change, and viability. These risks require a collaborative relationship among vendors and users with a substantial investment in design, implementation, and willingness to experiment with and adopt new models. This presents the HPCS program with an interesting dilemma in how to apportion the finite investment between improving legacy applications and introducing new programming models to enable new applications to scale up to the maximum potential of petascale systems.

The widespread use of MPI has also taught us that standardization of new programming models will be essential for success: the activity cannot be confined to just one or two "HPCS winners". In PERCS, we commit to supporting existing programming models and introduce many features in hardware and software to improve their performance. We also introduce a new programming model (X10) that extends the Partitioned Global Address Space (PGAS) programming model with an emphasis on scalable asynchronous interactions, and new features that simplify concurrency control, enhance scalability and improve the productivity of parallel programming. The new model, for example, reduces or eliminates the possibility of deadlock in most cases and provides high-level synchronization based on atomic sections.
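The essence of PGAS is a single global index space whose elements live at specific places, so the owner of any index is computable and a compiler or runtime can turn remote accesses into communication. A toy block-distributed array makes the addressing concrete; the class and its names are hypothetical, purely to illustrate the model:

```python
class PartitionedArray:
    """Toy PGAS array: one global index space, block-distributed over
    `nplaces` partitions. Locality is explicit: place_of(i) tells which
    partition owns index i, so local and remote accesses are distinguishable."""

    def __init__(self, size, nplaces):
        self.block = (size + nplaces - 1) // nplaces   # elements per place
        self.parts = [[0] * self.block for _ in range(nplaces)]

    def place_of(self, i):
        return i // self.block          # owning partition of global index i

    def __getitem__(self, i):
        return self.parts[self.place_of(i)][i % self.block]

    def __setitem__(self, i, v):
        self.parts[self.place_of(i)][i % self.block] = v

a = PartitionedArray(size=16, nplaces=4)
a[13] = 7   # global index 13 lives on place 3, local offset 1
```

In a real PGAS language the `parts` lists would be memory on distinct nodes and a remote `__setitem__` would become a network put; the programming-model point is that the global view and the locality information coexist.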

d.  Programming languages: FORTRAN, C and C++ are the prevailing languages for production code in today's high-end systems. These languages lack proven productivity enhancers common in modern programming languages, such as type safety, automatic memory management and dynamic optimization. They are also decoupled from the programming model (e.g., MPI or OpenMP), which creates many problems that require laborious work on the part of the programmer. Modern programming languages, on the other hand, lack the deep support for high performance computing found in the rich libraries of mature languages. A new programming language that bridges this gap can be a great asset toward improving programmer productivity. Again, such an approach entails tremendous risk: there have been many failed attempts in the past.
In PERCS, we have taken a pragmatic approach by recognizing that a new programming language must co-exist with existing languages, even within a single application. This will enable programmers to write an application using several languages, and leverages the existing mature ecosystems of legacy languages. It will also enable extending existing applications with code written in the new language. Toward these goals, we are experimenting with a new programming language, X10, which embodies new programming-model ideas in a modern language platform with proven productivity enhancers. The new programming language is designed to call, and be called from, other languages using an innovative runtime system.

e.  Programming Environments and Tools: Programming high-end systems has unique requirements that are not currently satisfied by high-productivity programming environments successful in the commercial space. Furthermore, there is a dearth of tools to help programmers in performance tuning and other programming chores. Our preliminary experiments during phase two have shown that substantial improvement in programmer productivity can be obtained if a high-productivity programming environment and integrated tools are made available.

f.  Reliability: The Mean Time Between Failures (MTBF) of the system will be challenged by its sheer scale. Furthermore, projections indicate an increase in Soft Error Rates (SER) in the silicon technology targeted for building HPCS systems. In PERCS we innovate at the circuit and micro-architecture levels to harden the reliability of components, and we target a system-wide MTBF equivalent to or exceeding that of the largest systems today.
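The scale problem is easy to quantify: under the standard assumption of independent, exponentially distributed failures, the system MTBF is roughly the component MTBF divided by the component count. A back-of-envelope sketch (the numbers below are hypothetical, not PERCS targets):

```python
def system_mtbf_hours(component_mtbf_hours, n_components):
    """With independent exponential failures, the first failure among
    n identical components arrives n times sooner than for one alone."""
    return component_mtbf_hours / n_components

# Example: nodes with a 5-year (~43,800 hour) MTBF. At 100,000 nodes
# the machine as a whole fails roughly every 0.44 hours (~26 minutes),
# which is why petascale designs must harden components and support
# checkpoint-restart rather than rely on raw component reliability.
mtbf = system_mtbf_hours(43_800, 100_000)
```

This is exactly why the checkpoint-restart and virtualization features mentioned earlier are first-class requirements at this scale rather than conveniences.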

g.  Unprecedented Performance Leap: The HPCS performance targets require aggressive improvements in system parameters traditionally ignored by the “Linpack” benchmark. Realizing these performance levels will require adding unusual features that can improve the system performance under the most demanding benchmarks (e.g. GUPS), and it will be important to determine whether general applications will be written or modified to benefit from these features.
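GUPS (the HPC Challenge RandomAccess benchmark) counts random updates per second to a huge table: each update is a read-modify-write at a pseudo-random address, which defeats caches and stresses the memory system end to end. The sketch below follows the published kernel's shape (table initialized to T[i] = i, a shift-register address stream, XOR updates) but is a simplification, not the official code:

```python
def random_access_updates(table_bits, n_updates):
    """Simplified RandomAccess (GUPS) kernel: XOR a pseudo-random
    64-bit value into a random slot of a 2**table_bits entry table."""
    size = 1 << table_bits
    table = list(range(size))            # benchmark initializes T[i] = i
    ran = 1
    for _ in range(n_updates):
        # 64-bit shift-register generator supplies both address and value
        ran = ((ran << 1) ^ (0x7 if ran >> 63 else 0)) & (2**64 - 1)
        table[ran & (size - 1)] ^= ran   # the cache-hostile update
    return table
```

Because consecutive addresses share no locality, performance is governed by memory and network latency rather than flops, which is why GUPS rewards the kind of memory-subsystem innovation described above and is nearly invisible to Linpack-oriented designs.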

The above list represents some, but not all, of the challenges that petascale systems will face. We did not address system management, storage, visualization, interactions with other systems, configuration, and so on. A comprehensive list is necessarily long, but we hope that the magnitude of the challenge is evident.

The Path Ahead

IBM understands that the HPCS program represents a major commitment on the part of the U.S. government to energize high-end computing. It also represents an important step to encourage innovation and restore momentum to a segment of customers with needs not met by current vendor solutions.

The HPCS program requires that the participating vendors make a commitment to transition the technologies developed through HPCS into the mainstream product line. In other words, the program is not about creating a one-off-system. For vendors, the productization requirement represents a major commitment that responds positively to the government vision. IBM has committed to deliver on this challenge. Going forward, it will be important to develop a full relationship in phase three of the program. The user community must start developing applications now that will be able to harness the power of petascale systems, address grand challenge problems, and push the frontiers of science and technology. New algorithms that scale to these levels need to be developed, and the application expertise is the strength that the user community must bring to this program.

Indeed, no amount of ingenuity in the system architecture and design will rescue poorly written code or bad algorithms at the petascale! The HPC community must also approach new technology and innovation with an open mind, especially innovation that requires changes in the way we program or use the systems. These challenges can be met effectively in a relationship where vendors can better understand how to adjust their design to real application requirements, and where users get to better understand the competitive pressure and the technical and financial risks the vendors are undertaking.

In closing, this is the opportunity of a lifetime. At IBM, we are excited and ready to marshal our resources and second-to-none technical community to work in a partnership with the user community to realize the HPCS vision.


(c) IBM Corporation 2006

This publication was developed for products and/or services offered in the United States. IBM may not offer the products, features, or services discussed in this publication in other countries. The information may be subject to change without notice. Consult your local IBM business contact for information on the products, features and services available in your area.

All statements regarding IBM's future directions and intent are subject to change or withdrawal without notice and represent goals and objectives only.

IBM and the IBM logo are trademarks or registered trademarks of International Business Machines Corporation in the United States or other countries or both. A full list of U.S. trademarks owned by IBM may be found at

Other company, product, and service names may be trademarks or service marks of others.

Information concerning non-IBM products was obtained from the suppliers of these products or other public sources. Questions on the capabilities of the non-IBM products should be addressed with the suppliers.
