September 28, 2007
Do you know where your algorithms will be running two years from now? Five? Ten? Are you investing in code today that you will need to throw away? What language should you choose today for your algorithms, to protect your investment for the future?
Nearly every industry faces a growing need for high performance computing. From automotive simulation to financial risk modeling, to systems biology and communication systems design, the demand for raw computing power has increased dramatically, and it will continue to do so. A rapidly expanding range of high performance hardware now promises to provide the platform for this work.
Some fear that the explosion of diversity in hardware architectures means that the hardware available today will be replaced by something faster and better just beyond the horizon. Hardware has always evolved this way, but in the past the "C" single-processor programming model endured while the processor architecture changed beneath it. In today's world of FPGAs, GPGPUs, many-core processors, accelerators, multicores, clusters, grids, Cell processors and reconfigurable hardware, that model no longer works. How do you choose a strategy that insulates you from these changes?
Many organizations have algorithm intellectual property locked into a particular language or environment that makes it virtually impossible to migrate to new technology. Often the experts who understand the subtleties of these codes and the particular optimizations made to get the "best performance" are not around anymore. With an uncertain future, prematurely selecting your architecture, language, and algorithm will require you to, at best, invest heavily in migrating the code, or at worst, live with legacy systems beyond their useful life.
So, what should software developers and domain experts be demanding from language providers to reduce the risk of algorithm obsolescence?
A best practice in software engineering is, where possible, to write a program in the simplest way that is easiest to understand and maintain. Don't try to predict where the performance bottlenecks will be on the first pass. Once the algorithm is working correctly, run it to find the actual bottlenecks. Optimizing for performance before you have the right algorithm leads to speculative enhancements that make the code less readable and maintainable, and that often fail to address the underlying performance issues because you guessed incorrectly. This article applies the same logic to language design for the future of high performance systems.
Languages should allow domain experts to develop the right algorithm as quickly as possible, without worrying initially about architectural nuances. To optimize your long-term investment in algorithms, you need to be able to express the algorithm in the highest level of abstraction possible, without prematurely adding architecture- or system-specific constructs.
In this two-pass model, domain experts, like the scientists and engineers who will be major consumers of high performance computing systems, should be able to express their ideas in a natural way, allowing them to explore their solution space rapidly. To maximize their productivity, these experts should be able to focus on their core competencies. For example, image processing experts should have at hand a language whose semantics, syntax and functions match the domain's normal expression of ideas. Allowing image processing experts to remain focused on the core algorithm concepts, rather than the mundane issues of memory allocation, threading or data handling, empowers them to rapidly create appropriate algorithms.
The second pass of this two-pass model is the ability for users to annotate the algorithm with additional information that will act as guides and input to the underlying execution engine in order to achieve optimal performance for a particular architecture. This might include annotations to describe parallelism in the algorithm. Clearly, there are situations where architecture drives algorithms, and a distinct two-pass model is infeasible.
A better approach still would be for the language to require no annotation at all to make optimal use of a particular architecture. Such a "fully implicit" system would need only a single pass, performing operations such as automatic parallelization on its own. This is an active research area with, as yet, no general solution. For the foreseeable future, then, some annotation will be needed to give the execution engine the clues it needs to perform optimally. Such a system can be described as "minimally explicit": the minimum amount of explicit information is supplied to help the execution engine produce optimal performance.
An example of such an annotation is the PARFOR construct available in MATLAB. By changing a FOR to a PARFOR, the user declares that the iterations of the loop may be executed in any order. When executed on a single processor, PARFOR behaves like a traditional FOR loop; when executed on a multicore machine or a cluster, the underlying execution engine can use the additional computational resources to evaluate iterations in parallel for faster results. With this annotated approach, the same algorithm can be rendered to run, with optimal performance, on a single-core CPU, a multicore shared-memory system, a cluster or a specialized accelerator.
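As a minimal sketch of the idea (the computation inside the loop is invented purely for illustration), consider a MATLAB loop whose iterations are independent of one another:

    n = 200;
    results = zeros(n, 1);              % preallocate the output
    parfor i = 1:n
        A = magic(i + 1);               % some per-iteration computation
        results(i) = max(abs(eig(A)));  % depends on no other iteration
    end

Because each iteration writes only its own slice of results, the single keyword change from FOR to PARFOR is all the information the execution engine needs to distribute the work across whatever resources are present.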
Other examples of such annotations are the SPAWN and SYNC constructs provided by Cilk (http://supertech.csail.mit.edu/cilk/), an algorithmic multithreaded language. The philosophy behind Cilk is that the programmer should concentrate on annotating the program to expose parallelism and exploit locality. Traditional serial C code can be annotated and, when coupled with the Cilk runtime system, efficiently scaled for large-scale threaded operation.
The advantage of such an approach is that the user makes a simple language substitution to provide additional information to the underlying execution engine. There is minimal mental load on the user to take full advantage of the hardware available.
How does this relate back to the two-pass model? Consider a financial quantitative analyst attempting a Monte Carlo risk analysis of a portfolio. Working on a standard PC, the analyst would build a model in MATLAB, using specific financial modeling algorithms and components, with traditional FOR loops to iterate over an extensive set of scenarios. Performance issues might arise from the computational complexity of the problem. Having recently acquired a multicore machine, the analyst could run the same program after changing the FOR to a PARFOR, gaining a speedup proportional to the new hardware. Needing to cut execution time further still, the analyst could run the same PARFOR code on a departmental cluster and achieve the desired speedup.
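A hypothetical sketch of such a model follows; the portfolio size, return parameters, and variable names are invented for illustration, and the loss model is deliberately simplistic:

    nScenarios = 100000;
    nAssets    = 50;
    weights    = ones(nAssets, 1) / nAssets;  % equally weighted portfolio
    mu         = 0.0005;                      % assumed daily mean return
    sigma      = 0.02;                        % assumed daily volatility
    losses     = zeros(nScenarios, 1);

    parfor s = 1:nScenarios                   % was: for s = 1:nScenarios
        r = mu + sigma * randn(nAssets, 1);   % one simulated return scenario
        losses(s) = -weights' * r;            % portfolio loss in that scenario
    end

    sortedLosses = sort(losses);
    VaR95 = sortedLosses(ceil(0.95 * nScenarios));  % 95% value-at-risk estimate

On a single processor the PARFOR runs serially; pointed at a multicore machine or a departmental cluster, the identical code spreads the scenarios across the available workers.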
With minimal effort and without knowledge of the underlying system, this domain expert was able to achieve high performance by minimally annotating his initial code. This is the kind of result high performance computing users should demand from all languages in the future.
About the Author
Dr. Roy Lurie is vice president of engineering at The MathWorks, Inc. He is responsible for the MATLAB family of products, which includes dedicated teams in the areas of language execution, parallel and distributed computing, image processing, control design, financial modeling and analysis, test and measurement, and computational biology. He received his Ph.D. in electrical engineering from the University of the Witwatersrand in South Africa in 1994. Prior to joining The MathWorks in 1994, he founded and operated OptiNum Solutions, selling MathWorks tools into the South African market.