By Mootaz Elnozahy

April 7, 2006


The High Productivity Computing Systems (HPCS) program offers IBM an exciting opportunity to target stretch goals and consider bold ideas, all within the realistic constraints of delivering innovation in a commercially viable product. For over four and a half years, our team has focused on the applications of the high-end computing community and explored new technologies that will introduce fundamental and exciting changes in the way we program and use high-end systems. IBM envisions a quantum leap in performance and productivity that provides compelling differentiation from so-called commodity clusters and makes programming parallel systems an attractive end-user experience.

The PERCS (Productive, Easy-to-Use, Reliable Computing System) team members come from IBM, Los Alamos National Laboratory and a dozen universities. During phases one and two of the HPCS program, the team collaborated to examine new ideas that span the entire computing stack, from basic technology to algorithms. The result is a concept system that will be based on IBM's mainstream POWER processor-based servers. We expect technologies resulting from the PERCS effort to start appearing in IBM systems as early as 2007 (in software), culminating in several peta-level systems by the middle of 2011 (assuming the IBM-HPCS relationship continues through phase 3). The technology will also serve a broad range of configurations, especially small-scale deployments, and as a result we expect to reach more users than just those interested in the maximum configuration of the system.

This article summarizes our vision, our general approach, and the challenges of petascale computing. The competitive nature of the program and the impending down-selection for phase 3 of HPCS mean that we cannot reveal the details of our design or technologies, but we hope to provide a glimpse of what we are considering and to instill an appreciation of the issues involved in petascale systems.

A Vision for 2010

The following section is devoted to IBM's projections, a vision for 2010. IBM believes high-productivity systems in 2010 will feature a rich programming environment with many tools that help in writing new applications and maintaining existing ones. The programming environment will support existing programming models and languages, in addition to new emerging models designed for scalability to the peta-level. An application will be able to mix legacy and new programming languages, promoting code and asset reuse while availing programmers of the advantages of newer technologies. Open source communities will own many of the tools and parts of the software stack. This will help ensure long-term viability and promote a common look-and-feel and portability across different architectures. Tools will automate many of the performance-tuning tasks and catch bugs statically, and newer programming models will be designed to prevent unnecessary bugs from happening in the first place (e.g., illegal memory references, deadlocks, unsynchronized access to shared variables, out-of-bounds array indexing).

IBM expects the HPCS systems will be managed via rich graphical interfaces that automate many of the monitoring and recovery tasks, enabling fewer system administrators to handle larger systems more effectively. Over the WAN, users will be able to share their data files and programs seamlessly between the HPCS system and other clusters across the enterprise. Open source operating systems and hypervisors will provide HPC-oriented virtualization, security, resource management, affinity control, resource limits, checkpoint-restart and reliability features that will improve the robustness and availability of the system.

The systems likely will feature balanced and flexible architectures that adapt to application needs. Innovations in the memory subsystem and inter-process communication will mitigate the effects of data-access latency, whether local or across the interconnect. The result will be better utilization of resources and an unprecedented performance leap along the four components of the HPC Challenge benchmark. Advanced packaging and water cooling will reduce system footprint, with computing densities double those of today's best systems.


The HPCS program calls for a departure from the traditional myopic focus on performance to consider instead the broader value that a system provides, i.e. productivity. Time-to-solution, performance, portability and robustness are the dimensions of the productivity space according to the HPCS program vision. These dimensions may sometimes lead to conflicting requirements. For instance, the desire for portability imposes restrictions that can limit the range of innovations that one may consider for performance. Similarly, high-level abstractions that reduce the difficulty of programming entail a performance and resource overhead. Throughout the course of the program our team discarded some interesting ideas that could improve one aspect of the productivity proposition at the expense of the others. To address these tradeoffs, we used “real application” analysis and held countless discussions with the user community.

During phase 2 of the HPCS program, we conducted an extensive analysis of a number of “large” applications that were provided to the three HPCS contestants. These applications were particularly useful because they represent “real” workloads that stress a wide range of system parameters (e.g., CPU, caches, memory, network, storage), unlike benchmarks that narrowly focus on one system aspect or another. We used the analysis of the applications as a guide to evaluate innovations we proposed and also to develop new innovations from the insight we gained. The investigation revealed a wide range of often conflicting user requirements. For example, some users prefer the shared memory approach while others are heavily invested in message passing. Our approach depends on a high degree of configurability to address the disparity in requirements.

Our proposed system centers on an innovative processor chip designed to leverage IBM's unique advantages in CMOS technology and the POWER processor server line. Advances in circuits and manufacturing processes can improve chip yield and lower susceptibility to Soft Error Rates (SER), which are likely to flare up in future generations of CMOS technology. We also plan to leverage our unique processes to reduce the latency of memory accesses by placing the processors close to large memory arrays. The chip itself can be configured to build different system flavors, each aimed at a particular kind of workload. Various interconnect options will be available, providing a tradeoff between cost and performance. Thus, IBM believes it will be possible to build systems optimized for commercial workloads, MPI applications, or HPC applications that depend on the shared-memory abstraction, such as OpenMP or SHMEM. The wide range of configurations enables us to utilize the innovations in the mainstream product line and helps ensure the commercial viability of future HPCS systems. Furthermore, innovations in system packaging will allow us to provide computing densities that are orders of magnitude better than today's densest systems.

On the software side, our approach features a large set of tools integrated into a modern, user-friendly programming environment. The combined tool set and environment will support both legacy programming models and languages (MPI, OpenMP, C, C++, Fortran, etc.), and the emerging ones (PGAS, UPC, etc.). We have also worked with an experimental programming language, called X10, which we plan to subject to further research and prototyping in the first 18 months of phase three if our proposal is approved. After that period, a convergence of the research performed under the HPCS program should define a future, standard programming language and programming models that will benefit from the insights gained from X10 and other efforts.

X10 was designed for parallel processing from the ground up. It generally falls under the Partitioned Global Address Space (PGAS) category, and strikes a balance between providing a programmer-friendly, high-level abstraction and exposing the topology of the system to enable the programmer to control data placement and traffic. It features many innovations that will increase programmer productivity and system performance. Programmer productivity will benefit from high-level synchronization abstractions through the use of atomic sections, and the language helps ensure deadlock freedom under most circumstances. Proven productivity enhancers such as strong typing and automatic memory management relieve the programmer from many low-level details. X10 also will address performance issues through pervasive use of asynchronous interactions among the parallel threads, supported through the use of futures, and new concepts such as clocks. The language attempts to avoid the blocking synchronization style that affects system performance and limits program scalability (e.g., 2-way communication in MPI). X10 will require programmers to think differently about their programs, and early experiments have shown that the approach is promising and can improve time to solution compared to other alternatives.
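
The asynchronous style that X10 encourages can be sketched outside the language itself. The Python fragment below is an illustrative analogue only (X10 code is not shown here): work is launched asynchronously, in the spirit of X10's `async` construct, and each result is claimed later by forcing a future, rather than blocking at the point of the call the way a synchronous two-way exchange would.

```python
# Illustrative sketch only: mimics X10's async/future style in Python.
# The function and data names below are invented for the example.
from concurrent.futures import ThreadPoolExecutor

def stencil_row(row):
    """Stand-in computation over one row of a distributed array."""
    return sum(x * x for x in row)

rows = [[1, 2], [3, 4], [5, 6]]

with ThreadPoolExecutor(max_workers=3) as pool:
    # Launch all rows asynchronously -- analogous to X10's `async`.
    futures = [pool.submit(stencil_row, r) for r in rows]
    # Claim each result only when needed -- analogous to forcing a future.
    results = [f.result() for f in futures]

total = sum(results)  # 5 + 25 + 61 = 91
```

The point of the pattern is that nothing blocks between launching the work and consuming its results, which is where the scalability benefit over a rendezvous-style exchange comes from.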

There are many more aspects of the system design than we can cover here, but suffice it to say, our design addresses almost every aspect of the system with an integrated hardware-software approach. This aggressive vision entails many risks, some of which we list below to illustrate the difficulty of the task ahead.

Challenges Toward Petascale Computing

While today's high-end systems have issues concerning ease of use and programming, the sheer scale of the contemplated peta-level systems in 2010 brings new challenges that cannot be solved by following an evolutionary approach. Challenges include:

a.  Cost: Balanced peta-level systems featuring multi-petaops performance will require petabytes of memory and tens or even hundreds of petabytes of disk storage commensurate with the expected performance, where components are added not only for storage capacity but also for bandwidth. Projections of cost and power consumption for DRAM and disks translate to “interesting” procurement and operational costs. It is also important to note that these projections are a product of the industry and technology and are largely outside the control of the three HPCS vendors.

Cost reduction across all components is an essential goal of our design, and we evaluate innovations not only for their technical effectiveness but also for their cost efficacy. Consequently, our design eliminates or provides alternatives to components with poor cost/performance contributions, and increases the utilization of the most expensive system components.
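
The scale of the procurement problem follows from simple balance arithmetic. The ratios below (bytes of memory per flop/s, disk bytes per memory byte) are common rules of thumb assumed for illustration, not PERCS design figures:

```python
# Back-of-envelope sizing for a "balanced" petascale system.
# All ratios here are illustrative assumptions, not design targets.
PETA = 10**15

peak_flops = 2 * PETA            # assume a 2-petaops machine
bytes_per_flop = 0.5             # a common memory-balance rule of thumb
disk_to_memory_ratio = 20        # assumed scratch/archive multiple

memory_bytes = peak_flops * bytes_per_flop        # 1 PB of DRAM
disk_bytes = memory_bytes * disk_to_memory_ratio  # 20 PB of disk

memory_pb = memory_bytes / PETA
disk_pb = disk_bytes / PETA
```

Even under these modest assumed ratios, the memory and disk bills alone reach the petabyte and tens-of-petabytes range, which is what drives the “interesting” cost projections noted above.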

b.  Quantifying Productivity: The HPCS vision seeks a tenfold improvement in productivity, an interesting challenge given that no one has a solid handle on quantifying productivity today. In PERCS, we introduced an economic model early on to express productivity, and worked throughout phase two of the program to develop metrics and measurement methodologies that quantify productivity pragmatically and reduce subjective assessment. We also developed various techniques, some of which promise to automate the evaluation of system and programmer productivity.

c.  Programming model: The Message Passing Interface (MPI) is arguably today's prevailing programming model in high-end systems. MPI uses messages for both synchronization and transfer of data, with a prevailing synchronous style of communication built on the rendezvous abstraction. The community is aware of the inherent barriers that result from this synchronous nature of MPI, and various forms of asynchronous or one-sided communication have been proposed. These, however, are not in much use today for a variety of reasons. As a result, there is a huge volume of legacy investment in software that will not scale easily to peta-level performance but must be protected, as it is unreasonable to expect the community to rewrite existing programs to enjoy the benefits of HPCS-class machines. Therefore, serious efforts and resources must be directed toward improving the performance of this legacy software.

However, one must also recognize the need for new programming models that address the scalability problem at a fundamental level and meet the peta-level scalability challenge. Such new programming models must avoid unnecessary synchronization and enable new applications to be written without heroic efforts in programming. There are various risks with the undertaking of such an effort, including the technical challenge of realizing scalability and performance, users' resistance to change, and viability. These risks require a collaborative relationship among vendors and users with a substantial investment in design, implementation, and willingness to experiment with and adopt new models. This presents the HPCS program with an interesting dilemma in how to apportion the finite investment between improving legacy applications and introducing new programming models to enable new applications to scale up to the maximum potential of petascale systems.

Another interesting fact is that the widespread use of MPI has taught us that standardization of new programming models will be essential for success—the activity cannot be confined to just one or two “HPCS winners”. In PERCS, we commit to supporting existing programming models and introduce many features in hardware and software to improve their performance. We also introduce a new programming model (X10) that extends the Partitioned Global Address Space (PGAS) programming model with emphasis on scalable asynchronous interactions, and new features that simplify concurrency control, enhance scalability and improve the productivity of parallel programming. The new model, for example, reduces or eliminates the possibility of deadlocks for most cases and provides high-level synchronization based on atomic sections.
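
The contrast between a two-way rendezvous and a one-sided “put” can be illustrated with a toy sketch. This is not MPI code; the functions below are invented stand-ins, using plain Python threads to show that a rendezvous requires both parties to participate, while a one-sided write needs no matching receive:

```python
# Toy contrast: two-way rendezvous vs. one-sided put.
# Names and structures are illustrative, not MPI calls.
import threading
import queue

def rendezvous_exchange(value):
    """Both sides must meet: the receiver blocks until the send arrives."""
    ch = queue.Queue()
    received = []

    def receiver():
        received.append(ch.get())   # blocks until the matching send

    t = threading.Thread(target=receiver)
    t.start()
    ch.put(value)                   # the "send" half of the meeting
    t.join()                        # sender waits for the exchange to finish
    return received[0]

def one_sided_put(target, index, value):
    """Writer deposits data directly; no receive is posted anywhere."""
    target[index] = value

window = [0, 0, 0]                  # stand-in for a remote memory window
one_sided_put(window, 1, 42)
got = rendezvous_exchange(7)
```

The one-sided style is what lets communication overlap with computation; the rendezvous style couples the two processes at every exchange, which is the scalability barrier discussed above.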

d.  Programming languages: Fortran, C and C++ are the prevailing languages for production code in today's high-end systems. These languages lack proven productivity enhancers common in modern programming languages, such as type safety, automatic memory management and dynamic optimizations. They are also decoupled from the programming model (e.g., MPI or OpenMP), presenting many problems that require laborious work on the part of the programmer. Modern programming languages, on the other hand, lack the deep support for high performance computing found in the rich libraries of mature languages. A new programming language that bridges this gap can be a great asset toward improving programmer productivity. Again, such an approach entails tremendous risk; there have been many failed attempts in the past.
In PERCS, we have taken a pragmatic approach by recognizing that a new programming language must co-exist with existing languages, even within a single application. This will enable programmers to write an application using several languages and leverage the existing mature ecosystems of legacy languages. It will also enable extending existing applications with code written in the new language. Toward these goals, we are experimenting with a new programming language, X10, which embodies ideas for programming models and features in a modern programming language platform with proven productivity enhancers. The new programming language is designed to call and be called from other languages using an innovative runtime system.

e.  Programming Environments and Tools: Programming high-end systems has unique requirements that are not currently satisfied by high-productivity programming environments successful in the commercial space. Furthermore, there is a dearth of tools to help programmers in performance tuning and other programming chores. Our preliminary experiments during phase two have shown that substantial improvement in programmer productivity can be obtained if a high-productivity programming environment and integrated tools are made available.

f.  Reliability: The Mean Time Between Failures (MTBF) of the system will be challenged by its sheer scale. Furthermore, projections indicate an increase in Soft Error Rates (SER) in the silicon technology targeted for building HPCS systems. In PERCS, we innovate at the circuit and micro-architecture levels to harden the reliability of components and to target a system-wide MTBF that equals or exceeds that of the largest systems today.
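
The scale effect on MTBF is easy to see with assumed numbers (these are illustrative, not PERCS targets): if failures are independent, system MTBF shrinks roughly as component MTBF divided by the number of components.

```python
# Illustrative failure-rate arithmetic; all figures are assumptions.
node_mtbf_hours = 5 * 365 * 24   # assume each node fails every ~5 years
num_nodes = 50_000               # assumed petascale node count

# With independent failures, system MTBF ~ node MTBF / node count.
system_mtbf_hours = node_mtbf_hours / num_nodes
```

Under these assumptions a machine of nodes that each run five years between failures would, as a whole, see a failure roughly every 53 minutes, which is why reliability must be hardened at the component level rather than recovered purely in software.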

g.  Unprecedented Performance Leap: The HPCS performance targets require aggressive improvements in system parameters traditionally ignored by the “Linpack” benchmark. Realizing these performance levels will require adding unusual features that can improve the system performance under the most demanding benchmarks (e.g. GUPS), and it will be important to determine whether general applications will be written or modified to benefit from these features.
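
GUPS (Giga-Updates Per Second) is a good example of such a demanding benchmark: random read-modify-write updates scattered across a large table, an access pattern that defeats caches and stresses memory and interconnect. A miniature, deterministic sketch of the kernel (table size and update count shrunk enormously, fixed seed for repeatability):

```python
# Miniature GUPS-style kernel: random scattered updates to a table.
# Sizes are tiny and the seed fixed so the sketch is deterministic.
import random

TABLE_SIZE = 16
table = [0] * TABLE_SIZE

rng = random.Random(1)  # fixed seed: illustrative only
updates = [rng.randrange(TABLE_SIZE) for _ in range(100)]

for idx in updates:
    table[idx] ^= idx   # read-modify-write with no locality to exploit
```

Because each update lands at an effectively random address, performance is governed almost entirely by memory and network latency rather than by peak flops, which is why GUPS-class targets push vendors toward unusual architectural features.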

The above list represents some, but not all, of the challenges that petascale systems will face. We did not address system management, storage, visualization, interactions with other systems, configuration, etc. A comprehensive list is necessarily long, but we hope the magnitude of the challenge is evident.

The Path Ahead

IBM understands that the HPCS program represents a major commitment on the part of the U.S. government to energize high-end computing. It also represents an important step to encourage innovation and restore momentum to a segment of customers with needs not met by current vendor solutions.

The HPCS program requires that the participating vendors make a commitment to transition the technologies developed through HPCS into the mainstream product line. In other words, the program is not about creating a one-off-system. For vendors, the productization requirement represents a major commitment that responds positively to the government vision. IBM has committed to deliver on this challenge. Going forward, it will be important to develop a full relationship in phase three of the program. The user community must start developing applications now that will be able to harness the power of petascale systems, address grand challenge problems, and push the frontiers of science and technology. New algorithms that scale to these levels need to be developed, and the application expertise is the strength that the user community must bring to this program.

Indeed, no amount of ingenuity in the system architecture and design will rescue poorly written code or bad algorithms at the petascale! The HPC community must also approach new technology and innovation with an open mind, especially innovation that requires changes in the way we program or use the systems. These challenges can be met effectively in a relationship where vendors can better understand how to adjust their design to real application requirements, and where users get to better understand the competitive pressure and the technical and financial risks the vendors are undertaking.

In closing, this is the opportunity of a lifetime. At IBM, we are excited and ready to marshal our resources and second-to-none technical community to work in a partnership with the user community to realize the HPCS vision.


(c) IBM Corporation 2006

This publication was developed for products and/or services offered in the United States. IBM may not offer the products, features, or services discussed in this publication in other countries. The information may be subject to change without notice. Consult your local IBM business contact for information on the products, features and services available in your area.

All statements regarding IBM's future directions and intent are subject to change or withdrawal without notice and represent goals and objectives only.

IBM and the IBM logo are trademarks or registered trademarks of International Business Machines Corporation in the United States or other countries or both. A full list of U.S. trademarks owned by IBM may be found at

Other company, product, and service names may be trademarks or service marks of others.

Information concerning non-IBM products was obtained from the suppliers of these products or other public sources. Questions on the capabilities of the non-IBM products should be addressed with the suppliers.
