May 19, 2006
OpenMP* was designed to unify the directive languages of shared memory multiprocessors across the industry, making it easier to write portable parallel programs. OpenMP is a high-level language of parallelism compared with programming in POSIX threads or Windows threads, and it offers a much easier programming model than MPI. The effort has been successful, and OpenMP has gained many users around the world. Until now, however, OpenMP has been useful only for programming systems with hardware shared memory. Intel is now offering Cluster OpenMP*, which extends the OpenMP programming model to clusters. This article describes the extension to OpenMP that makes this possible, how the system simulates a shared memory across a cluster, and how a user ports an OpenMP program to Cluster OpenMP, ending with a discussion of the effort such a port requires.
In the 1980s and 90s, multiprocessor computer manufacturers tried to address the difficulties of programming their computers by supplying directives that could be placed in serial programs that would instruct the compiler to produce parallel code. Each manufacturer produced its own unique set of directives. As programmers moved their programs from machine to machine, they found that they had to recode the directives. To remedy this situation, major players in the industry formed a working group in the mid-90s to unify the directives. The result was the 1997 OpenMP specification for Fortran and the 1998 specification for C/C++. This effort has been successful, and the OpenMP directive language has been adopted across the industry.
The OpenMP paradigm for parallel programming differs significantly from the earlier message passing solutions (such as PVM and MPI) and from explicit threading (POSIX* threads or Windows* threads). The first difference people notice is that OpenMP consists mostly of directives, whereas PVM, MPI and the threading methods consist solely of library routines. As a result, OpenMP parallelism can be switched off with a compiler option, leaving the original serial program, whereas programs using library-based parallelism are permanently changed into parallel programs.
Another obvious difference from message passing is that data movement in a message passing program must be coded explicitly by the programmer, while data movement in an OpenMP program happens automatically when threads read and write variables. This means that, in addition to the code for the problem being solved, the message passing programmer must write a program layer to move data between processors.
Both of these differences translate directly into lower programming costs and lower maintenance costs for OpenMP programs. With OpenMP, you program "what" to do, while with message passing and explicit threading you program "how" to do it.
In this sense, OpenMP could be called a high-level language of parallel programming, while explicit threading and message passing are more akin to an "assembly language" of parallel programming.
A drawback of OpenMP is that it requires a shared memory, which until now has limited its use to a single multiprocessor machine. Intel's Cluster OpenMP removes that limitation, making it possible to run an OpenMP program across a cluster of multiprocessors. The shared memory is simulated by a software layer implementing a distributed shared memory (DSM).
Cluster OpenMP Extension to OpenMP
Cluster OpenMP extends OpenMP with a single directive: the sharable directive. It indicates that the named variables have the "sharable" attribute; that is, their values are kept consistent between the threads of the program. All variables that are shared within an OpenMP parallel region, whether by appearing in a shared clause or through the default sharing rules of OpenMP, must be made sharable in Cluster OpenMP.
Under some circumstances the compiler can determine that variables must be sharable and makes them sharable automatically. Other variables must be made sharable by the programmer, either through compiler options or by placing sharable directives in the code explicitly.
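As a concrete illustration, the following minimal C sketch marks a file-scope array sharable so that it may legally be shared inside a parallel region. The variable names are invented for this example, and the exact spelling and placement of the sharable pragma should be checked against the Cluster OpenMP documentation for the compiler version in use; the OpenMP directives themselves are standard OpenMP.

    #include <stdio.h>

    #define N 1000

    /* File-scope array read and written inside the parallel region below.
       Under Cluster OpenMP it must carry the sharable attribute so that the
       DSM layer keeps its pages consistent across the cluster processes.   */
    double a[N];
    #pragma intel omp sharable(a)

    int main(void)
    {
        /* "a" is shared by the default OpenMP rules inside this loop,
           so it must be sharable; the directive above ensures that.    */
        #pragma omp parallel for
        for (int i = 0; i < N; i++)
            a[i] = 2.0 * i;

        printf("a[%d] = %f\n", N - 1, a[N - 1]);
        return 0;
    }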
How it Works
Cluster OpenMP is based upon an exclusively licensed version of the TreadMarks* DSM system (originating at Rice University), specially enhanced to handle larger amounts of sharable data, larger numbers of processors, multiple threads per process, and to run on modern cluster interconnects.
The basic idea of the consistency mechanism is this: all sharable data is placed together on sharable pages, and the Linux mprotect() system call is used to protect any such page whenever it is not fully up to date in a given process. When the program references data on a protected page, a segmentation fault signal is delivered to the program and caught by the Cluster OpenMP runtime library. The library then sends requests for changes to all of the processes that have modified that page. When those changes arrive, they are applied to the page, the page protection is removed, and the faulting instruction is restarted. This time the data access succeeds, yielding the correct value of the variable, and the program proceeds.
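The following C sketch illustrates the general page-protection idea. It is purely conceptual (a stand-in for what a DSM layer does, not the Cluster OpenMP or TreadMarks implementation), and the helper fetch_and_apply_diffs() is a hypothetical placeholder for the real diff-exchange protocol: protect a page with mprotect(), catch the resulting segmentation fault, bring the page up to date, unprotect it, and let the faulting instruction restart.

    #define _GNU_SOURCE
    #include <signal.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    static char  *shared_page;    /* one page of "sharable" data */
    static size_t page_size;

    /* Hypothetical stand-in: a real DSM would request diffs from every
       process that modified the page and merge them into the local copy. */
    static void fetch_and_apply_diffs(void *page)
    {
        (void)page;
    }

    static void segv_handler(int sig, siginfo_t *info, void *ctx)
    {
        (void)sig; (void)ctx;
        void *page = (void *)((uintptr_t)info->si_addr & ~((uintptr_t)page_size - 1));
        fetch_and_apply_diffs(page);                        /* bring page up to date */
        mprotect(page, page_size, PROT_READ | PROT_WRITE);  /* drop the protection   */
        /* Returning restarts the faulting instruction, which now succeeds. */
    }

    int main(void)
    {
        page_size   = (size_t)sysconf(_SC_PAGESIZE);
        shared_page = mmap(NULL, page_size, PROT_READ | PROT_WRITE,
                           MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        struct sigaction sa;
        memset(&sa, 0, sizeof(sa));
        sa.sa_sigaction = segv_handler;
        sa.sa_flags     = SA_SIGINFO;
        sigaction(SIGSEGV, &sa, NULL);

        /* Mark the page "not up to date": any touch now faults into the handler. */
        mprotect(shared_page, page_size, PROT_NONE);

        shared_page[0] = 42;            /* faults, handler repairs page, write retried */
        printf("value: %d\n", shared_page[0]);
        return 0;
    }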
Obviously, any memory access that triggers the consistency mechanism is much more expensive than an ordinary access to a processor's memory. In fact, a memory access requiring the consistency mechanism can be hundreds to thousands of times slower than access to any level of cache or hardware memory. So, how can we expect any program to get reasonable performance with such expensive memory operations? There are a number of ways that programs can get good performance despite such a heavy memory access cost.
First, the relaxed memory consistency model of OpenMP makes it possible to hide the latency of memory operations. According to the OpenMP memory model, memory seen by multiple threads in an OpenMP program need not be made consistent except at the points where threads synchronize. So, in an OpenMP program, the consistency mechanism is only necessary at barriers and around the use of locks; between synchronization points, consistency work can be deferred and overlapped with computation.
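For example, in the OpenMP fragment below (which illustrates the OpenMP memory model itself, not any Cluster OpenMP internals), the value each thread writes into the shared array only has to become visible to the other threads at the barrier, so the runtime is free to defer and batch page updates until that synchronization point.

    #include <omp.h>
    #include <stdio.h>

    #define NTHREADS 4

    int flags[NTHREADS];   /* shared data; sharable under Cluster OpenMP */

    int main(void)
    {
        #pragma omp parallel num_threads(NTHREADS)
        {
            int me = omp_get_thread_num();
            flags[me] = 1;       /* need not be visible to other threads yet */

            #pragma omp barrier  /* consistency is only required here */

            /* After the barrier, every thread must see all NTHREADS writes. */
            int total = 0;
            for (int i = 0; i < NTHREADS; i++)
                total += flags[i];

            #pragma omp single
            printf("total = %d (expected %d)\n", total, NTHREADS);
        }
        return 0;
    }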
Second, a single segmentation fault on a page causes the whole page to be brought up-to-date. This tends to amortize the accesses to that page. If the program is written in a style that maximizes memory locality, then the data on a page that is brought up-to-date might be accessed a large number of times at memory speeds before it needs to be refreshed.
Third, whole classes of applications exist that, by their very nature, are well suited to Cluster OpenMP. Any application that accesses a large amount of read-only sharable data and only a small amount of read/write sharable data, and uses synchronization sparingly, has the potential for good performance. For most of the sharable data, such an application pays one large latency penalty per page and thereafter accesses the read-only data at memory (or cache) speeds. We have seen many examples of programs with these characteristics that perform well with Cluster OpenMP, including applications doing rendering, data mining, all kinds of parallel search, speech and visual recognition, and genetic sequencing.
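A minimal sketch of such a pattern is shown below (the names and the workload are invented for illustration): a large table is written once in a serial region, scanned read-only in parallel, and only a tiny read/write result is updated, inside a critical section.

    #include <limits.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define N (1 << 20)

    /* Large, read-mostly table: each page crosses the interconnect at most
       once per process and is then reread at local memory (or cache) speed. */
    static int table[N];

    /* Small read/write shared result, updated only under synchronization. */
    static int best = INT_MAX;

    int main(void)
    {
        for (int i = 0; i < N; i++)          /* serial, one-time initialization */
            table[i] = rand();

        #pragma omp parallel
        {
            int local_best = INT_MAX;

            /* Read-only scan: little consistency traffic after first touch. */
            #pragma omp for nowait
            for (int i = 0; i < N; i++)
                if (table[i] < local_best)
                    local_best = table[i];

            /* The only read/write sharing, synchronized sparingly. */
            #pragma omp critical
            if (local_best < best)
                best = local_best;
        }

        printf("minimum = %d\n", best);
        return 0;
    }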
Porting a Code to Cluster OpenMP
Cluster OpenMP is an additional-cost component of the Intel 9.1 compilers. A special compiler option (-cluster-openmp) causes the compiler to generate code for Cluster OpenMP. Because the user may need to add sharable directives to the code, a pure OpenMP code will not necessarily run as-is with the -cluster-openmp option. Typically, a porting step is needed to ensure that every variable that needs to be sharable has been made sharable.
First, the programmer should simply try the -cluster-openmp option and check whether the program runs. It may run because, as stated above, the compiler can automatically make some variables sharable.
If it does not run, the user should re-build the application with the "-clomp-sharable-propagation -ipo" options. This triggers an interprocedural compilation that attempts to track down the allocation point of each variable used in a shared way inside a parallel region. The compiler reports each case in which a variable needs to be made sharable, along with the source file and line number, so the user can make it sharable.
If -clomp-sharable-propagation does not find all sharable variables, a run-time tool can find more of them. The user compiles with "-g", sets the environment variable KMP_DISJOINT_HEAPSIZE to a size large enough for each process's private heap, and then runs the program. At run time, the program reports uses of dynamically allocated variables that need to be made sharable.
If these two techniques still do not find all sharable variables, the remaining work depends on the language. For Fortran, special compiler options make all COMMON variables, MODULE variables, and SAVE variables sharable by default; that is almost always sufficient to complete the port of a Fortran program. For C/C++, the programmer should inspect any routines called within a parallel region (for instance, method calls in C++) and determine whether any static data is used in them. If so, that data needs to be made sharable.
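For example, a C routine like the following hypothetical one needs attention during the port: its static table is written at run time and then read by threads running on other nodes, so it must be given the sharable attribute (again, the exact pragma spelling is per the Cluster OpenMP documentation).

    #include <math.h>

    /* Static data filled in at run time (in a serial region) and then read by
       every thread inside a parallel region.  Because it is written after
       program start, the other cluster processes can only see the updated
       values if the table is made sharable.                                  */
    static double table[256];
    #pragma intel omp sharable(table)

    void init_table(void)     /* called once, before the parallel region */
    {
        for (int i = 0; i < 256; i++)
            table[i] = sin(i * 0.01);
    }

    double lookup(int i)      /* called from inside the parallel region */
    {
        return table[i & 255];
    }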
The amount of effort required to port an OpenMP application to Cluster OpenMP is entirely dependent on the application itself. Applications that make extensive use of pointers are more difficult because it is more difficult to determine where the actual data being used in parallel regions ultimately resides. In any case, identifying the sharable variables in a program is a useful exercise because it illuminates data access characteristics that are often hidden because of the procedural focus of OpenMP. In this sense, identifying sharable data adds a data focus that is a critical step in performance tuning for OpenMP programs.
The table below shows the number of files and lines that had to be modified to insert sharable directives in a set of OpenMP programs written in Fortran, C, and C++, in order to port them to Cluster OpenMP. Fortran programs are usually easier to port than C or C++ programs because the compiler options that make Fortran global variables sharable are sometimes enough by themselves to port a Fortran program. Notice that the largest code of the group, fma3d (60,000 lines of Fortran), required no modifications at all.
Overall, the table shows that about 2 percent of original source lines needed to be modified. For Fortran, only 1.5 percent of source lines needed modification, while for C and C++ about 3.5 percent of source lines needed modification.
When compared with the drastic modifications needed to convert an OpenMP program to a message passing code or an explicitly threaded code, the effort to port an OpenMP program to Cluster OpenMP is very small.
Code           Application area      Lang      Original files  Original lines  Modified files  Modified lines  % lines modified
-------------  --------------------  --------  --------------  --------------  --------------  --------------  ----------------
AMR renderer   Graphics              C++                  145           35100              12              36              0.10
332.ammp       Chemistry/Biology     C                     31           13500               5              29              0.21
316.applu      Fluid dynamics        Fortran               20            4000               0               0              0.00
324.apsi       Air pollution         Fortran                1            7500               1             101              1.35
330.art        Image recognition     C                      1            1300               1             138             10.62
328.fma3d      Crash simulation      Fortran              101           60000               0               0              0.00
326.gafort     Genetic algorithm     Fortran                1            1500               1               2              0.13
318.galgel     Fluid dynamics        Fortran               39           15300               8              74              0.48
320.equake     Earthquake modeling   C                      1            1500               1              44              2.93
314.mgrid      Multigrid solver      Fortran                1             500               1               3              0.60
312.swim       Shallow water         Fortran                1             400               1              24              6.00
310.wupwise    QCD                   Fortran               25            2200              12              78              3.55
-------------  --------------------  --------  --------------  --------------  --------------  --------------  ----------------
Average, all codes: 2.16% of lines modified
Average, Fortran codes: 1.51% of lines modified
Average, C/C++ codes: 3.47% of lines modified
Cluster OpenMP is currently the only commercial system available to extend OpenMP programs to run on clusters. The old thinking was that there were two ways to make use of large numbers of processors with a program: buy a large shared memory multiprocessor and use OpenMP, or buy a cluster and use message passing. The first requires expensive hardware, while the second requires expensive program development and maintenance. Now, Cluster OpenMP offers a new option: buy a cluster and use OpenMP. This offers the best of both worlds: a less expensive hardware purchase and a less expensive programming option.
* Other names and brands may be claimed as the property of others.