September 11, 2008
Developers in HPC have long lamented that you can have either performance or portability, but not both. A single portable code can achieve middling performance on many architectures, but great performance requires tuning the application to the features of a particular machine. For example, machine A, with a giant cache and slow inter-processor communication, demands one approach to solving the problem, while machine B, with vector processors, a mid-sized cache, and a fast interconnect, demands another. This creates headaches for developers, who must manage multiple versions of an application for various platforms and insert complex target-dependent logic into the build process to ensure that users can exploit the special features of their chosen platform.
For a while it looked like the need for platform-specific tuning might gradually fade away as the industry concentrated on the x86 architecture, but the rise of new architectures and re-introduction of hardware acceleration reminds us that in HPC the only constant is change. That's why development platforms like Gedae, pronounced "JEE-day," are getting renewed attention. Gedae is both the name of the product and the name of the company. The platform strikes a middle ground between performance and portability. Yes, you can separate implementation and specification, but you can also get into the implementation and tune to your heart's content. Gedae CEO Bill Lundgren is sure his company is on to something big, and he expects this approach to become the standard technique for creating efficient, portable, parallel applications on everything from desktop computers to Top 10 machines.
Gedae is a complete application development system. Development starts with the expression of application functionality at a very high level using a visual "widget-on-a-string" editor, which Lundgren describes as a "thinking tool" for scientists and engineers. This type of environment is most typically associated with data flow computing, and that's where Gedae's roots are. But the company has worked hard to expand the basic data flow approach, which originated in real-time signal processing, to include the expressiveness needed to specify a wide range of computational problems. As a result, today Gedae is used in applications ranging from image and radar processing to fluid flow simulation and distributed battle simulation.
The developer then moves on to specifying how the functionality should be implemented and creates a separate implementation specification to express the problem- or machine-specific details of the application. For example, while the functional specification might say "multiply matrix A and matrix B," the implementation file contains details of domain decomposition and architecture-specific tuning parameters (e.g., tile size). Because implementation and high-level behavior are separated, it is possible for scientists and engineers to work just on what their algorithm should do, while computer and computational scientists can be left to focus on how the final application should get those things done.
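Gedae's actual file formats are proprietary, but the division of labor the article describes can be sketched in plain Python: a functional specification that states only *what* to compute, and a separate implementation specification carrying machine-specific tuning such as tile size. The `IMPL_SPEC` dictionary and both function names here are hypothetical illustrations, not Gedae constructs.

```python
# Functional specification: multiply matrix A and matrix B.
# Says nothing about how the work is mapped onto the machine.
def matmul_spec(A, B):
    n, k, m = len(A), len(A[0]), len(B[0])
    return [[sum(A[i][p] * B[p][j] for p in range(k)) for j in range(m)]
            for i in range(n)]

# Implementation specification: architecture-specific parameters,
# kept apart from the functional description above.
IMPL_SPEC = {"tile": 2}  # hypothetical knob, e.g. sized to fit cache

# Tiled implementation derived from the two specs: same result as
# matmul_spec, but the loop structure follows the tuning parameters.
def matmul_tiled(A, B, impl=IMPL_SPEC):
    t = impl["tile"]
    n, k, m = len(A), len(A[0]), len(B[0])
    C = [[0] * m for _ in range(n)]
    for ii in range(0, n, t):
        for jj in range(0, m, t):
            for pp in range(0, k, t):
                for i in range(ii, min(ii + t, n)):
                    for j in range(jj, min(jj + t, m)):
                        for p in range(pp, min(pp + t, k)):
                            C[i][j] += A[i][p] * B[p][j]
    return C
```

Swapping in a different `IMPL_SPEC` retargets the implementation without touching the functional specification, which is the separation the article attributes to Gedae.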
The Gedae compiler then takes both files as input and begins generating a parallel application. The compiler takes into account what machine it is compiling for, using a hardware model supplied to it to make very specific decisions about how to compile the application for best performance: lengthening vectors for machine A, and favoring fewer but larger messages on machine B.
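How a hardware model might steer such decisions can be illustrated with a toy heuristic. Everything here is an assumption for illustration: the model entries, the 1 GB/s nominal link speed, and the 10x overhead margin are invented numbers, not Gedae's actual model or policy. The sketch shows the principle the article names: a machine with high per-message overhead gets fewer, larger messages.

```python
# Hypothetical hardware models (invented parameters, not Gedae's format).
HARDWARE_MODELS = {
    "machine_a": {"vector_len": 256, "msg_overhead_us": 50.0},
    "machine_b": {"vector_len": 4,   "msg_overhead_us": 1.0},
}

def plan_communication(model, total_bytes):
    """Return (message_count, bytes_per_message) for a transfer.

    Aggregates messages so per-message overhead stays small relative to
    transfer time, assuming a nominal 1 GB/s link (illustrative only).
    """
    # Minimum message size: overhead should cost <= ~10% of wire time.
    min_msg_bytes = int(model["msg_overhead_us"] * 1e-6 * 1e9 * 10)
    n_msgs = max(1, total_bytes // max(min_msg_bytes, 1))
    return n_msgs, total_bytes // n_msgs
```

With these numbers, a 1 MB transfer becomes 2 large messages on machine A (high overhead) but 100 small ones on machine B (cheap messages), mirroring the compile-time trade-off described above.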
The compiler also handles thread definition, task scheduling, and decomposition (following the rules built into the implementation file). This level of automation lets the compiler manage concurrency control and avoid deadlocks, in addition to optimizing memory sharing between threads or tasks. After the application is compiled, Gedae can monitor it as it runs, and developers can iteratively refine the hardware-specific details based on actual measured performance.
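One reason a data-flow compiler can rule out deadlocks is that the dependence graph is known at compile time, so tasks can be ordered statically rather than synchronized at runtime. The sketch below is a generic topological scheduler (Kahn's algorithm), not Gedae's scheduler; the task graph in the usage note is invented.

```python
from collections import deque

def schedule(graph):
    """graph maps each task to the list of tasks it depends on.

    Returns an execution order that respects all dependencies, or
    raises if the graph has a cycle (no static schedule exists).
    """
    indeg = {t: len(deps) for t, deps in graph.items()}
    users = {t: [] for t in graph}          # task -> tasks that consume it
    for t, deps in graph.items():
        for d in deps:
            users[d].append(t)
    ready = deque(t for t, d in indeg.items() if d == 0)
    order = []
    while ready:
        t = ready.popleft()
        order.append(t)
        for u in users[t]:
            indeg[u] -= 1
            if indeg[u] == 0:
                ready.append(u)
    if len(order) != len(graph):
        raise ValueError("cycle detected: no static schedule possible")
    return order
```

For a pipeline like `{"read": [], "fft": ["read"], "filter": ["fft"], "write": ["filter"]}` this yields the order read, fft, filter, write; because every task runs only after its producers, no task ever blocks waiting on an unscheduled dependency.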
All of these features emphasize Gedae's belief that the compiler should do as much work as possible. "It is absolutely essential to have a new language for creating parallel programs," says Lundgren. "We also have to have a high degree of automation to produce reliable and accurate high performance codes, but to get there the compiler needs to have access to the same information the developer has."
Gedae got its start in 1987 at RCA as a project to support the DoD's need to program a set of newly acquired Connection Machines. The compiler was designed from the beginning to support multiple hardware architectures ("targets"), and as it was developed, it was also used to create applications for other computers of the day, including BBN's Butterfly. This was at a time when computer scientists were just starting to get excited about object-oriented computing, and the first versions of Gedae were built around an object engine that executed applications in a managed runtime. While the system worked and was well-received by users, the object engine made many runtime decisions and added a lot of execution overhead that made Gedae inappropriate in many scenarios.
In 1995 Gedae was re-engineered and adapted for better performance. Part of the new mandate was for Gedae to create code for embedded systems, which meant moving away from a managed runtime. Efficient execution became a driving force, and Gedae added a compiler. This is also when the original data flow model was expanded with the semantics needed to express a much wider range of problems than data flow graphs can typically express.
Developers today are using Gedae to build applications targeted for IBM's BlueGene and Cell processors, Intel's multicore processors, and a variety of other specialized hardware, for applications in medical, defense and scientific domains. Gedae offers the features that Lundgren believes are essential to performance-oriented programming in multi-processor and multicore environments.
"Gedae separates functionality from implementation, but still exposes the knobs that developers want to tune to get the best performance," he says. "Those knobs have good defaults on each platform, ensuring good performance for developers that lack the time or knowledge to do detailed performance tuning. Gedae's compile approach keeps the code lean, without time consuming conditionals and library calls that conventional approaches need, and provides the observability tools developers require to understand what the compiler did and how to tweak the implementation in response."
As Gedae plans out the near term for its namesake product, the company continues to focus on reducing complexity for developers and improving performance. An important upcoming change is the addition of more and better automation, including a rules-based engine for generating code that will adapt algorithms and implementation details based on a specific target platform, eliminating the need for developers to bring this knowledge to the process. This engine will also allow for the feedback of information gathered during profiling runs, enabling further support for performance tuning by the compiler. Another significant new feature in Gedae is the planned addition of a high level language, similar to MATLAB, to provide users with another path for specifying functionality.
"We are seeing an increasing number of cases from customers, especially in the last year, in which a Gedae-compiled application is outperforming hand-tuned code in real applications," says Lundgren. He also believes that the success of Gedae thus far, and more importantly the success of the application development model, are strong indicators that this approach will be an important part of the application developer's toolkit in the future.
As Lundgren puts it, "I expect this technique will become pervasive in the software development community. Whether Gedae itself becomes pervasive is up for debate, but we have proven that the strong separation of implementation and functionality works in practice and offers significant advantages for performance and productivity."