January 14, 2009
As a software developer, you are faced with a range of options as you decide whether and how to modify your applications for parallel architectures. What approach should you adopt? How significantly should you alter applications? When should you say “no” to parallelism? Answering these questions requires not only technical expertise but also strategic thinking that evaluates the business benefits and costs. As you weigh your options, consider the suggestions below, which we’ve derived from our experiences at Intel working with software developers to optimize code.
Don't Parallelize Un-Optimized Serial Code
There is no doubt that parallelization is an important means of upgrading application performance for current and future generations of hardware. Still, threading should not be the first course of action in optimizing an application. Identify whether you can meet your performance targets using serial and vector optimizations. In some cases, you might gain greater performance improvements by optimizing the serial version than by creating a parallel one.
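To make this concrete, here is a minimal sketch (our illustration, not part of the original article) of a purely serial optimization: traversing a matrix in its storage order gives the inner loop unit-stride, cache-friendly accesses that compilers can typically auto-vectorize, often yielding a large speedup with no threads at all.

    #include <cstddef>
    #include <vector>

    // Column-by-column traversal of a row-major matrix: each access jumps by
    // `cols` elements, which hurts the cache and blocks vectorization.
    double sum_slow(const std::vector<double>& m, std::size_t rows, std::size_t cols) {
        double total = 0.0;
        for (std::size_t c = 0; c < cols; ++c)
            for (std::size_t r = 0; r < rows; ++r)
                total += m[r * cols + c];
        return total;
    }

    // The same reduction in storage order: contiguous, unit-stride accesses that
    // the compiler can usually vectorize at -O2/-O3, with no threading involved.
    double sum_fast(const std::vector<double>& m, std::size_t rows, std::size_t cols) {
        double total = 0.0;
        for (std::size_t r = 0; r < rows; ++r)
            for (std::size_t c = 0; c < cols; ++c)
                total += m[r * cols + c];
        return total;
    }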
Don't Parallelize If the Serial Code Is Running Fast Enough Already
Although you can and should position your code for future architectures, it might not be time- or cost-effective to meet requirements that do not yet exist. Developing code with a higher degree of parallelism than current architectures can support is rarely the best use of resources. And of course, if your code is I/O- or memory-bound, parallelizing the code will not help.
Don't spend time trying to parallelize all your past work immediately. Think parallel when building new code or rebuilding sections of existing code. Rather than solving problems in sequential steps, consider how those problems can be broken into separate pieces that can be performed simultaneously.
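As a simple illustration of that mindset (a hypothetical sketch; the function and variable names are ours), the reduction below is split into two independent halves that can be computed at the same time:

    #include <cstddef>
    #include <numeric>
    #include <thread>
    #include <vector>

    // Decompose a sum into two independent pieces and run them simultaneously.
    double parallel_sum(const std::vector<double>& data) {
        const std::size_t mid = data.size() / 2;
        double lower = 0.0;

        // The lower half runs on a worker thread; each half touches disjoint
        // data and its own accumulator, so the pieces need no coordination.
        std::thread worker([&] {
            lower = std::accumulate(data.begin(), data.begin() + mid, 0.0);
        });
        double upper = std::accumulate(data.begin() + mid, data.end(), 0.0);

        worker.join();
        return lower + upper;
    }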
As you consider how much parallelism is needed for the longer term, try to determine how quickly the application workload will scale and how that scaling will affect computational requirements. The amount of computation might rise linearly with the size of the data set, or it might rise at a geometrically faster or slower rate. (If the workload scales linearly with the data, beware that the code will run into an I/O bottleneck at some point.)
Take, for example, an audio processing application that already handles a sufficient number of channels at a good sampling rate. This application might be a poor candidate for parallelization. On the other hand, a physical modeling application might be a good candidate. With higher computational power available, the modeling application could deliver finer modeling and more accurate algorithms.
Overall, make sure the gains from parallelizing an application are not offset by delays in shipping the next version of your product.
Don't Parallelize by Rewriting Code from Scratch
Don't throw the baby out with the bathwater. Resist the temptation to discard an entire working code base because it is ugly, convoluted, or just plain old. The story of Netscape provides a cautionary tale. According to software developer and blogger Joel Spolsky, Netscape rewrote the entire code base of its browser between 1997 and 2000. While the company shipped no new versions, Microsoft took over the browser market, and Netscape never recovered.
As Spolsky notes, most programmers strive to write more elegant code. Through the development process, clumps of code are added and refined. The result might appear ugly. But if you rewrite the code, you lose the knowledge that those clumps represent. And in any case, ugliness is in the eye of the beholder. Computer code that appears ugly to a human trying to read it might work just fine.
There will always be occasions that demand a fresh start. Still, reworking a class or subroutine to introduce parallelism can be much more efficient than starting with a completely blank page.
Don't Parallelize If Someone Has Already Done the Work for You
Before you tear into any of your code and try to parallelize it yourself, see if off-the-shelf or open-source solutions can provide what you need. If you are working on scientific or technical applications, for example, you might find standard routines in popular math libraries that can save you valuable time.
Consider using Intel® Software Tools. For example, Intel® Threading Building Blocks, a C++ template-based runtime library of parallel structures and algorithms, can dramatically simplify your efforts. Using these constructs enables your code to scale automatically as the number of execution cores increases. The constructs are also designed explicitly to be compatible with other threading techniques, including other parallel libraries, such as the Intel® Integrated Performance Primitives and Intel® Math Kernel Library. Using compilers or OpenMP* offers additional ways to add parallelism easily.
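As a rough sketch of what this looks like in practice (our example, assuming a C++11-capable compiler with TBB and OpenMP available; the loop itself is illustrative), the same array operation can be written with a TBB parallel_for or an OpenMP pragma rather than with hand-managed threads:

    #include <cstddef>
    #include <vector>
    #include <tbb/blocked_range.h>
    #include <tbb/parallel_for.h>

    // TBB divides the range into chunks and schedules them across the available
    // cores, so the loop scales without hard-coding a thread count.
    void scale_tbb(std::vector<float>& v, float factor) {
        tbb::parallel_for(tbb::blocked_range<std::size_t>(0, v.size()),
            [&](const tbb::blocked_range<std::size_t>& r) {
                for (std::size_t i = r.begin(); i != r.end(); ++i)
                    v[i] *= factor;
            });
    }

    // The equivalent loop with OpenMP: the pragma asks the compiler to divide
    // the iterations among a team of threads.
    void scale_openmp(std::vector<float>& v, float factor) {
    #pragma omp parallel for
        for (long i = 0; i < static_cast<long>(v.size()); ++i)
            v[i] *= factor;
    }

Either version leaves the scheduling details to the runtime, which is what lets the code benefit automatically as core counts grow.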
Don't Rush In
The advent of multi-core architectures constitutes a revolution in computer technology, but capitalizing on the benefits of multi-core demands measured steps. Multi-threading and other means of parallelizing software are complex undertakings that can require substantial resources. Parallelization should be regarded as an investment like any other -- one that should be undertaken when necessary, but only after due consideration of costs and benefits.
Visit Intel online for the latest news and more information about High Performance Computing and Parallel Programming for Intel Multi-Core Processors.