September 11, 2008
Developers in HPC have long lamented that you can have either performance or portability, but not both. You can get middling performance on many architectures with a single, portable code, but to get great performance, you need to tune your application for the features of a particular machine. For example, machine A, with a giant cache and slow inter-processor communication, demands one approach to solving the problem, while machine B, with vector processors, a mid-sized cache, and a fast interconnect, demands another. This creates headaches for developers, who have to manage multiple versions of an application for various platforms, inserting complex target-dependent logic into the application build process to ensure that users can exploit the special features of their chosen platform.
For a while it looked like the need for platform-specific tuning might gradually fade away as the industry concentrated on the x86 architecture, but the rise of new architectures and re-introduction of hardware acceleration reminds us that in HPC the only constant is change. That's why development platforms like Gedae, pronounced "JEE-day," are getting renewed attention. Gedae is both the name of the product and the name of the company. The platform strikes a middle ground between performance and portability. Yes, you can separate implementation and specification, but you can also get into the implementation and tune to your heart's content. Gedae CEO Bill Lundgren is sure his company is on to something big, and he expects this approach to become the standard technique for creating efficient, portable, parallel applications on everything from desktop computers to Top 10 machines.
Gedae is a complete application development system. Development starts with the expression of application functionality at a very high level using a visual "widget-on-a-string" editor, which Lundgren describes as a "thinking tool" for scientists and engineers. This type of environment is most typically associated with data flow computing, and that's where Gedae's roots are. But the company has worked hard to expand the basic data flow approach, which originated in real-time signal processing, to include the expressiveness needed to specify a wide range of computational problems. As a result, today Gedae is used in applications ranging from image and radar processing to fluid flow simulation and distributed battle simulation.
The developer then moves on to specifying how the functionality should be implemented and creates a separate implementation specification to express the problem- or machine-specific details of the application. For example, while the functional specification might say "multiply matrix A and matrix B," the implementation file contains details of domain decomposition and architecture-specific tuning parameters (e.g., tile size). Because implementation and high-level behavior are separated, it is possible for scientists and engineers to work just on what their algorithm should do, while computer and computational scientists can be left to focus on how the final application should get those things done.
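The separation the article describes can be sketched in miniature. The following is purely illustrative and does not reflect Gedae's actual file formats: the "functional specification" states only what to compute (multiply A and B), while a separate "implementation specification" holds machine-specific tuning such as tile size. The machine names and tile values are invented for the example.

```python
import numpy as np

# --- Functional specification: "multiply matrix A and matrix B" ---
def matmul_spec(A, B):
    return A @ B

# --- Implementation specification: decomposition and tuning parameters ---
# (hypothetical values; a big-cache machine gets large tiles, a small-cache
# machine gets small ones)
impl_spec = {
    "machine_A": {"tile": 256},
    "machine_B": {"tile": 64},
}

def matmul_impl(A, B, machine):
    """Tiled multiply that honors the implementation spec for `machine`.
    The result is identical to matmul_spec; only the execution strategy
    (blocking for cache) differs."""
    t = impl_spec[machine]["tile"]
    C = np.zeros((A.shape[0], B.shape[1]))
    for i in range(0, A.shape[0], t):
        for k in range(0, A.shape[1], t):
            for j in range(0, B.shape[1], t):
                C[i:i+t, j:j+t] += A[i:i+t, k:k+t] @ B[k:k+t, j:j+t]
    return C
```

The point of the split is that retargeting the application means editing only `impl_spec`; the functional specification never changes.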
The Gedae compiler then takes both files as input and starts working on creating a parallel application. The compiler takes into account what machine it is compiling for, and uses information about the machine supplied to it in a hardware model to make very specific decisions about how to compile the application for best performance, lengthening vectors for machine A, and favoring fewer but larger messages on machine B.
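A toy version of such a hardware model might look like the sketch below. The field names and thresholds are invented for illustration; the idea is simply that a machine description, not the application source, drives decisions like vector length and message batching.

```python
# Hypothetical hardware models: wide vector units on machine A, and high
# per-message latency that rewards fewer, larger messages.
hardware_models = {
    "machine_A": {"vector_len": 512, "msg_latency_us": 50.0},
    "machine_B": {"vector_len": 128, "msg_latency_us": 1.0},
}

def plan(machine):
    """Derive two compile-time decisions from the hardware model alone."""
    hw = hardware_models[machine]
    return {
        # Lengthen vectors to match the machine's vector unit.
        "vector_chunk": hw["vector_len"],
        # Batch messages when per-message latency is high.
        "msgs_per_send": 64 if hw["msg_latency_us"] > 10 else 1,
    }
```

Because the model is an input to compilation, porting to a new machine means supplying a new description rather than rewriting the application.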
The compiler also handles thread definition, task scheduling, and decomposition (following the rules built into the implementation file). This level of automation allows the compiler to manage concurrency control and avoid deadlocks, in addition to optimizing memory sharing between threads or tasks. After the application is compiled, Gedae can monitor its performance as it runs, and developers can iteratively refine the hardware-specific details based on actual application behavior.
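The kind of bookkeeping being automated here can be illustrated with a small sketch (not Gedae-generated code): work is decomposed across threads through a single work queue, so the developer writes no locks by hand and cannot introduce a deadlock in the distribution logic.

```python
import queue
import threading

def parallel_map(fn, items, n_threads=4):
    """Apply fn to each item across n_threads worker threads,
    returning results in the original order."""
    work, results = queue.Queue(), queue.Queue()
    for idx, item in enumerate(items):
        work.put((idx, item))

    def worker():
        while True:
            try:
                idx, item = work.get_nowait()
            except queue.Empty:
                return  # queue drained; worker exits cleanly
            results.put((idx, fn(item)))

    threads = [threading.Thread(target=worker) for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    out = [None] * len(items)
    while not results.empty():
        idx, val = results.get()
        out[idx] = val
    return out
```

In Gedae's model, this scaffolding is the compiler's job; the developer only supplies the decomposition rules in the implementation specification.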
All of these features emphasize Gedae's belief that the compiler should do as much work as possible. "It is absolutely essential to have a new language for creating parallel programs," says Lundgren. "We also have to have a high degree of automation to produce reliable and accurate high performance codes, but to get there the compiler needs to have access to the same information the developer has."
Gedae got its start in 1987 at RCA as a project to support the DoD's need to program a set of newly acquired Connection Machines. The compiler was designed from the beginning to support multiple hardware architectures ("targets"), and as it was developed, it was also used to create applications for other computers of the day, including BBN's Butterfly. This was at a time when computer scientists were just starting to get excited about object-oriented computing, and the first versions of Gedae were built around an object engine that executed applications in a managed runtime. While the system worked and was well-received by users, the object engine made many runtime decisions and added a lot of execution overhead that made Gedae inappropriate in many scenarios.
In 1995 Gedae was re-engineered and adapted for better performance. Part of the new mandate was for Gedae to create code for embedded systems, which meant moving away from a managed runtime. Efficient execution became a driving force, and Gedae added a compiler. This is also when the original data flow model was expanded with the semantics needed to express a much wider range of problems than data flow graphs can typically express.
Developers today are using Gedae to build applications targeted for IBM's BlueGene and Cell processors, Intel's multicore processors, and a variety of other specialized hardware, for applications in medical, defense and scientific domains. Gedae offers the features that Lundgren believes are essential to performance-oriented programming in multi-processor and multicore environments.
"Gedae separates functionality from implementation, but still exposes the knobs that developers want to tune to get the best performance," he says. "Those knobs have good defaults on each platform, ensuring good performance for developers who lack the time or knowledge to do detailed performance tuning. Gedae's compile approach keeps the code lean, without the time-consuming conditionals and library calls that conventional approaches need, and provides the observability tools developers require to understand what the compiler did and how to tweak the implementation in response."
As Gedae plans out the near term for its namesake product, the company continues to focus on reducing complexity for developers and improving performance. An important upcoming change is the addition of more and better automation, including a rules-based engine for generating code that will adapt algorithms and implementation details to a specific target platform, eliminating the need for developers to bring this knowledge to the process. This engine will also allow information gathered during profiling runs to be fed back into compilation, further supporting performance tuning by the compiler. Another significant planned feature is the addition of a high-level language, similar to MATLAB, to provide users with another path for specifying functionality.
"We are seeing an increasing number of cases from customers, especially in the last year, in which a Gedae-compiled application is outperforming hand-tuned code in real applications," says Lundgren. He also believes that the success of Gedae thus far, and more importantly the success of the application development model, are strong indicators that this approach will be an important part of the application developer's toolkit in the future.
As Lundgren puts it, "I expect this technique will become pervasive in the software development community. Whether Gedae itself becomes pervasive is up for debate, but we have proven that the strong separation of implementation and functionality works in practice and offers significant advantages for performance and productivity."