December 04, 2008
Planning to Scale in a World with Less "Embarrassing Parallelism"
The need to write scalable applications has been important to programmers in the HPC community for years. Now, the proliferation of multi-core processors is making scalability a top priority for millions of programmers. HPC programs that scaled very well used to be called "embarrassingly parallel," but it is inevitable that we will increasingly settle for "good enough" parallelism. Achieving some scaling, and designing to keep scaling, will be of paramount importance; an efficient design will matter relatively little if the program cannot scale into the future. "Forward scaling" will become a key part of good program design, and less "embarrassing" parallelism will become the norm for most programmers.
The move to multi-core processors means that all programmers will need to incorporate new techniques to write scalable applications and adopt a new mindset to achieve further gains in application performance. In the past, a poorly written sequential program would still generally speed up as processor clock rates increased. However, we are now faced with the reality that a poorly written concurrent program will generally not speed up as more processor cores become available. Programmers today need to "think parallel" and write parallel programs to harness the potential of multiple processor cores.
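Amdahl's law is the classic way to quantify that reality. If only a fraction p of a program's work runs in parallel, the speedup on N cores is

    S(N) = \frac{1}{(1 - p) + p/N}, \qquad \lim_{N \to \infty} S(N) = \frac{1}{1 - p}

With p = 0.5, even an unlimited number of cores delivers at most a 2x speedup, which is exactly why a poorly written concurrent program sees so little benefit from additional cores.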
Thinking and writing parallel are not easy feats. As computer science professor Andrew S. Tanenbaum recently observed at the USENIX '08 conference, "Sequential programming is really hard, and parallel programming is a step beyond that." Obviously, we would prefer not to do these difficult tasks over and over again as future-generation architectures are released. Forward scaling offers a means of avoiding much of that repetitive work.
Free lunch, not free beer
As the writer and programmer Herb Sutter announced a few years ago, "The free lunch is over." Programmers can no longer count on the acceleration of their applications along with a continuous rise in clock rates. Is there a new free lunch on the horizon? Clearly programmers want a way to maximize the long-term returns for the time and effort they invest today.
Some have suggested that scaling is the new free lunch. Scaling provides a way to avoid rewriting code and redoing difficult tasks with the release of each new architecture. When our applications scale successfully, they will run well on today's dual- and quad-core processors while also capitalizing on tomorrow's many-core architectures.
Is scaling the new free lunch? It's probably not free in the sense of "free beer" (as Richard Stallman might say). Scaling takes some work. The goal is to make investments today that will continue to pay off in the future.
Forward scaling defined
Forward scaling is an approach to designing software that will deliver outstanding performance on today's multi-core processors and will scale that performance on tomorrow's many-core architectures. The objective of forward scaling is to find techniques that help avoid a complete code rewrite as more processor cores are added. Is this possible? In many cases, it appears so. There is no perfect solution, but the options available today can help a great deal.
Forward scaling differs from ordinary scaling in that the focus is a design that will scale in the future, not just today. Over time, tools improve, processor designs improve, and the amount of data to process grows. As we make modest code changes with forward scaling in mind, we must design a path along these three axes so that our programs keep scaling in the future. To do this, we must have some notion of how tools, processors, and our data might change.
As we begin to design to scale forward, remember that we do not need to deliver scaling for a hundred cores today. We just need a path to get there tomorrow while preserving most of the investment in the software. We've been trying to design software like this for years—software that is ready for the systems of tomorrow. Now the topic of scaling becomes more important.
Designing for forward scaling
The best program designs anticipate and prepare for the future. A program written with forward scaling in mind anticipates tomorrow's many-core processors.
So, how can we scale our program forward?
First, we must reject programming methods that will clearly fail to produce forward scaling. Avoid using native threads such as Pthreads, Windows threads, Boost threads, and Java threads. In general, code written with native threads simply will not hold up over time, because too many assumptions, such as the number of threads and how work is divided among them, end up hard-coded at a low level.
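Consider a minimal sketch, in hypothetical code, of the kind of assumption that gets baked in with native threads: both the thread count and the data partitioning are frozen at design time.

    #include <pthread.h>
    #include <cstddef>

    #define NUM_THREADS 4            // assumption: a quad-core machine
    #define N 1000000

    static double data[N];

    // Each worker scales a hard-coded 1/NUM_THREADS slice of the array.
    static void *worker(void *arg) {
        std::size_t id = (std::size_t)arg;
        std::size_t chunk = N / NUM_THREADS;   // partitioning frozen, too
        for (std::size_t i = id * chunk; i < (id + 1) * chunk; ++i)
            data[i] *= 2.0;
        return NULL;
    }

    int main() {
        pthread_t threads[NUM_THREADS];
        for (std::size_t t = 0; t < NUM_THREADS; ++t)
            pthread_create(&threads[t], NULL, worker, (void *)t);
        for (std::size_t t = 0; t < NUM_THREADS; ++t)
            pthread_join(threads[t], NULL);
        return 0;
    }

On a 64-core machine this program still runs four threads; the design decision is welded into the source, and rewriting is the only way out.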
Instead, we will adopt standards-based tools and methods that expose sufficient parallelism and enable us to find opportunities to process more data. Forward scaling is made simpler by using standards-based tools that offer abstractions for parallelism. Tools such as OpenMP, threaded libraries, Intel® Threading Building Blocks (TBB), or Intel® MPI libraries can help ensure that the techniques employed today will still be valid in the future.
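By contrast, here is the same loop written with OpenMP, one of the standards-based abstractions named above (a tbb::parallel_for would make the same point). No thread count appears anywhere in the source; the runtime partitions the work across however many cores the machine offers.

    #include <cstddef>

    #define N 1000000

    static double data[N];

    int main() {
        // The OpenMP runtime decides how many threads to use and how
        // to split the iteration space, so the same source scales as
        // core counts grow from four to four hundred.
        #pragma omp parallel for
        for (long i = 0; i < N; ++i)
            data[i] *= 2.0;
        return 0;
    }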
Libraries will play a key role in forward scaling, and there will no doubt be great demand for well-produced libraries as programmers begin to scale forward. Of course, library interfaces will need to evolve to maximize the opportunity afforded by libraries. Many developers are already simplifying their programming and supporting larger numbers of cores by using the Intel® Math Kernel Library (Intel® MKL). Intel will continue to tune the library as our processors evolve to help provide forward scaling for years to come. Intel® MKL offers scaling to thousands of processors today; in the future, more libraries will be able to do the same.
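As a sketch of what library-based forward scaling looks like in practice, here is a single matrix multiplication expressed as one call into Intel® MKL's BLAS interface. The threading lives inside the library, which Intel retunes for each processor generation, so the caller's source never changes. (The matrix size here is arbitrary, chosen only for illustration.)

    #include <mkl.h>          // Intel MKL's C (CBLAS) interface
    #include <vector>

    int main() {
        const MKL_INT n = 1024;
        std::vector<double> A(n * n, 1.0), B(n * n, 1.0), C(n * n, 0.0);

        // C = A * B. MKL threads this internally across the available
        // cores; the calling code carries no threading logic at all.
        cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                    n, n, n,
                    1.0, &A[0], n, &B[0], n,
                    0.0, &C[0], n);
        return 0;
    }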
To realize the benefits of forward scaling, software developers and their customers must choose the right hardware. At Intel, we are working to create systems that balance processing power, memory capacity, and I/O bandwidth. Balanced systems are essential for realizing application performance gains as the number of cores increases. We are also facilitating better communication among processor cores, and between the processor and memory. Our work on the Intel® QuickPath Interconnect, introduced with the "Nehalem" architecture, promotes scalability by greatly reducing contention for bus bandwidth.
Forward scaling breaks down if each generation of a program must be customized to wildly different imbalances in system design. Of course, no one will offer perfect balance, but Intel understands that balance is very important to preserving investments in software.
Forward scaling is not "embarrassing"
Some applications scale very well, whether we're using a few dozen or a few thousand cores. Simply by running those applications on large-scale systems, we can achieve remarkable gains in performance. These applications have been called "embarrassingly parallel" because of the relative ease with which they exploit parallelism.
In the past, achieving high efficiency with these applications was critical because they were run on expensive hardware. But achieving that high efficiency came at a significant cost in programmer productivity. The future, it would seem, will have lower-cost hardware and higher-cost programmers. It might seem as if we can be more relaxed about program scalability, but programs still need to be scalable. Creating parallel programs that scale in the future to use larger core counts is essential for improving performance on future architectures.
Focusing on scalability is generally a much better use of programmer resources than focusing on enhancing application efficiency. The proliferation of cost-effective multi-core processors is lowering the cost of computing and reducing the customer demand for applications that deliver extreme efficiency. If our program is scalable, achieving 50 percent efficiency will be sufficient if we can count on that same level of efficiency with the next architecture. Squeezing efficiency out of a program will be relatively meaningless if scaling is not achieved.
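The arithmetic behind that claim is simple: if efficiency E holds steady as the core count N grows, the delivered speedup S keeps climbing even though E never improves.

    S = E \times N, \qquad S_{8} = 0.5 \times 8 = 4, \qquad S_{32} = 0.5 \times 32 = 16

A program that holds 50 percent efficiency quadruples its speedup when core counts quadruple, while a 90-percent-efficient program that cannot scale past 8 cores stalls at 7.2x.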
Once we create a scalable program, we're back on the free lunch bandwagon. The introduction of each new generation of architecture will help boost the performance of the application. We will be able to focus on adding new features to the application rather than on rewriting code just to keep up with changes.
Poor programming practices need not apply
A well-written program today needs to be scalable. Good choices will let a program scale forward, but bad choices will be poor investments.
Will the increased focus on concurrency for multi-core systems give us more choices for HPC, or will MPI continue to be the programming method of choice for scalable programs? If HPC programmers continue to rely on MPI, will non-HPC programmers turn to MPI as well? In the near term, MPI will continue to dominate the most scalable programs. But over the longer term, perhaps anything is possible.
For now, scaling is clearly a new and important topic for most programmers. Choosing solutions with better forward scaling options will be essential for protecting our investments.
Intel software, including the tools and libraries named above, offers choices to help with forward scaling.