May 19, 2006
One of the biggest impediments to HPC application development today is the difficulty of writing software for cluster architectures. Unlike clusters, shared memory machines provide a globally accessible memory space, offering a more programmer-friendly environment for doing parallel processing with large datasets. But since clusters scale so economically, they have become the dominant high performance computing architecture today.
Unfortunately, writing applications for clusters means the programmer has to deal with the hard realities of distributed memory, where data has to be shuffled from one node to another so that processes can communicate and the data can be kept in a coherent state. Thus was born the Message Passing Interface (MPI), the de facto standard for parallel programming communications.
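To make that burden concrete, here is a minimal sketch of explicit message passing in C with MPI -- an illustrative two-process exchange, not code from any application discussed here:

    #include <stdio.h>
    #include <mpi.h>

    /* Minimal illustration of explicit message passing: rank 0 sends an
     * array to rank 1, which must post a matching receive. The programmer
     * manages every data movement between nodes by hand. */
    int main(int argc, char **argv)
    {
        int rank;
        double buf[100];

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            for (int i = 0; i < 100; i++) buf[i] = i;
            MPI_Send(buf, 100, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Recv(buf, 100, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("rank 1 received %g ... %g\n", buf[0], buf[99]);
        }

        MPI_Finalize();
        return 0;
    }

Every byte that moves between nodes needs a matching send and receive written by hand, and that bookkeeping is precisely what the alternatives discussed below try to hide.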
In the May issue of CTWatch Quarterly, Thom Dunning, Robert Harrison and Jeffrey Nichols write: "Without fear of contradiction, the MPI standard has been the most significant advancement in practical parallel programming in over a decade, and it is the foundation of the vast majority of modern parallel programs."
But it's hard to find a real fan of MPI today. Most either tolerate it or hate it. Although it provides a widely portable and standardized programming interface for parallel computing, its shortcomings are numerous: it is hard to learn, difficult to program, allows no incremental parallelization, does not scale easily, and so on. It's widely acknowledged that MPI's limitations must be overcome to make parallel programming more accessible.
Dunning, Harrison and Nichols continue: "A completely consistent (and deliberately provocative) viewpoint is that MPI is evil. The emergence of MPI coincided with an almost complete cessation of parallel programming tool paradigm research. This was due to many factors, but in particular to the very public and very expensive failure of HPF. The downsides of MPI are that it standardized (in order to be successful itself) only the primitive and already old communicating sequential process (CSP) programming model, and MPI's success further stifled adoption of advanced parallel programming techniques since any new method was by definition not going to be as portable."
For the NWChem quantum chemistry application that the authors are discussing in the CTWatch article, the solution to MPI's limitations was the use of the Global Arrays (GA) Toolkit. The Toolkit provides a shared memory style programming environment for use with distributed memory computers. The basic context consists of distributed array data structures -- global arrays -- used as if they are stored in shared memory. The needed functionality for data distribution and data access is transparent to the programmer. The GA model exposes to the programmer the non-uniform memory access (NUMA) characteristics of high performance computers and acknowledges that access to a remote portion of the shared data is slower than to the local portion.
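To give a rough sense of the GA style, here is a short sketch using the toolkit's C bindings. Treat the details as illustrative assumptions rather than a definitive recipe -- initialization calls and type constants can vary across GA releases:

    #include <mpi.h>
    #include "ga.h"
    #include "macdecls.h"

    /* Sketch of the Global Arrays style: a 1000x1000 double array is
     * physically distributed across the cluster, but any process can
     * read or write any patch of it with one-sided put/get calls. */
    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        GA_Initialize();
        MA_init(C_DBL, 1000000, 1000000);   /* local buffer space for GA */

        int dims[2]  = {1000, 1000};
        int chunk[2] = {-1, -1};            /* let GA pick the distribution */
        int g_a = NGA_Create(C_DBL, 2, dims, "A", chunk);
        GA_Zero(g_a);

        int me = GA_Nodeid();               /* this process's rank */
        int lo[2] = {0, 0}, hi[2] = {9, 9}, ld[1] = {10};
        double patch[100];

        if (me == 0) {
            for (int i = 0; i < 100; i++) patch[i] = (double)i;
            NGA_Put(g_a, lo, hi, patch, ld);   /* write a 10x10 corner */
        }
        GA_Sync();                             /* make the write visible */
        NGA_Get(g_a, lo, hi, patch, ld);       /* any process reads it back,
                                                  local or remote */
        GA_Destroy(g_a);
        GA_Terminate();
        MPI_Finalize();
        return 0;
    }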
But while the physical nature of distributed memory has been abstracted, the GA interface still requires that explicit calls be added to the code in order to manage the global data.
Which brings us to Cluster OpenMP, a distributed memory version of standard OpenMP developed by Intel. Standard OpenMP is a widely used programming interface for creating parallel applications on shared memory architectures. It's been around since 1997. Like OpenMP, Cluster OpenMP does not require that the programmer invoke explicit library calls to achieve parallelization; this is accomplished with in-line compiler directives. Like GA, it abstracts the physically distributed memory, but it avoids both MPI's and GA Toolkit's reliance on library calls to make things happen. So you have the ability to switch off the compiler directives in the source code to restore your original serial program. Nice.
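As a minimal sketch, the directive approach looks like this in standard OpenMP C code. Cluster OpenMP applies the same style across a cluster, with some additional bookkeeping for identifying data shared across nodes, which Hoeflinger and Meadows cover in their article:

    #include <stdio.h>

    /* A serial loop parallelized with a single OpenMP directive. Build
     * with OpenMP enabled and the iterations are split across threads;
     * build without it and the pragma is ignored, restoring serial code. */
    int main(void)
    {
        const int n = 1000000;
        static double a[1000000];
        double sum = 0.0;

        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < n; i++) {
            a[i] = (double)i * 0.5;
            sum += a[i];
        }

        printf("sum = %g\n", sum);
        return 0;
    }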
The Cluster OpenMP product was released on May 9 and is available with Intel's 9.1 Fortran and C++ compilers. Curiously, no press announcement was forthcoming from Intel about the release. But if you're wondering about Cluster OpenMP, you're in luck. In this issue, Intel's Jay Hoeflinger and Larry Meadows describe their new offering and how it can be used to turn an OpenMP program into a cluster-capable version.
This past week, Terracotta Inc., a vendor that provides scalable Java solutions for the enterprise, announced Terracotta 2.0, claimed to be the industry's first production-ready "clustered" Java Virtual Machine (JVM). In contrast with typical frameworks, Terracotta 2.0 clusters at the JVM level, instead of at the software application level, allowing application programmers to write normal Java code that will run transparently in clustered environments.
The Terracotta solution has some similarities to the Cluster OpenMP offering, inasmuch as it abstracts a cluster-wide shared memory. When shared Java objects are accessed by the application, Terracotta's cluster-aware software detects this at the intermediate byte-code level and reads/writes the data from/to the appropriate nodes to keep the objects coherent. Unlike Cluster OpenMP, the Terracotta solution requires no compiler directives; shared data is specified in the Java language itself.
In general, Java is not regarded as a conventional HPC language because of the run-time performance limitations related to its byte-code interpretive model. It's also a little weak in things such as floating-point/complex number support and control of low-level data layout. The Terracotta solution is geared towards high availability business applications that increasingly need to scale out to large cluster environments. According to Terracotta engineers, their solution would also be very suitable for cluster and grid management tools, at the meta-level above the HPC applications.
In one of the great paradoxes of high performance computing, the most popular high-level languages for supercomputing applications -- C and Fortran -- are used not because they're so advanced, but because they're so primitive. C and Fortran source code maps easily to conventional CPU hardware, so the generated assembly code is able to achieve good performance. The result is that we end up using 30-year-old software languages to develop code for state-of-the-art supercomputers. Oh the irony!
As always, comments about HPCwire are welcomed and encouraged. Write to me, Michael Feldman, at email@example.com.
Posted by Michael Feldman - May 18, 2006 @ 9:00 PM, Pacific Daylight Time
Michael Feldman is the editor of HPCwire.