December 09, 2005
Expanding role of high-performance computing
As Moore's law has stalled on the desktop, scientists and engineers in virtually every field are turning to high-performance computing to solve some of today's most important and complex problems. With simulation increasingly replacing physical testing, more complex phenomena being modeled, and whole products or systems being simulated, technical computing -- whether in the life sciences, manufacturing, energy, intelligence, defense, or earth sciences -- has become both more prominent and more challenging.
In some ways, the industry is at a crossroads. An increasing number of problems demand parallel computing power, and high-performance computers are becoming remarkably more powerful and affordable. But the "software gap" -- the gap between hardware capabilities and the benefits we can practically extract through programming -- is wide and growing. There is a dearth of applications available for parallel computers, and the custom development of parallel applications is fundamentally flawed. This article will summarize the state of the industry today, predict how the industry may evolve over the next couple of years, and present some choices available to engineers, scientists, and analysts trying to maximize the return on their HPC investments.
Two distinct computing environments
Today, there are two distinct environments for technical computing: the desktop, and the high-performance computer. Both environments have much to offer, but the growing disconnect between them presents a substantial challenge, one that must be overcome if the power of today's and tomorrow's parallel computers is to be harnessed.
The launch of the Windows 95 operating system ushered in the era of the desktop computer as the primary science and engineering computing vehicle, particularly during the early stages of new product or system modeling, simulation, and optimization. The interactivity these tools offer lends itself well to the iterative process of research and discovery.
Today, millions of engineers and scientists have access to a rich set of interactive high-level software applications that, broadly speaking, fall into one of two categories: 1) very high level languages (VHLLs) for custom application development, such as MATLAB, Mathematica, Maple, Python, or IDL, and 2) vertical applications developed by commercial independent software vendors, such as SolidWorks for computer-aided design or Ansys for finite-element analysis.
The desktop tools offer an easy way to manipulate high-level objects (e.g., matrices with MATLAB, or parameter-driven geometric features in SolidWorks), hiding many of the underlying low-level programming complexities from the user. Furthermore, the desktop tools provide an interactive development and execution environment, the usage mode needed for productivity in science and engineering.
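To make that abstraction concrete, here is a minimal sketch in Python (one of the VHLLs named above) using the NumPy library; the particular computation and sizes are invented for illustration:

    # A least-squares fit in a few lines -- the VHLL hides memory layout,
    # loop ordering, and the underlying LAPACK calls from the user.
    import numpy as np

    A = np.random.rand(1000, 50)   # design matrix (sizes are arbitrary)
    b = np.random.rand(1000)       # observations
    x, residuals, rank, sv = np.linalg.lstsq(A, b, rcond=None)
    print(x[:5])

The equivalent hand-coded C would run to dozens of lines of explicit memory management and numerical-library calls -- exactly the complexity these tools hide.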
Things are very different in the HPC world. To begin with, there are relatively few commercially available software applications for high-performance computers -- fewer than 5 percent of desktop science and engineering applications run on the parallel architectures of high-performance computers. Given the difficulty of parallel programming, and the migration of technical computing to the engineer's desktop over the last decade, the business model for commercial ISVs to develop HPC applications is tenuous at best.
As a result of this limited application availability, and compounded by the specialized nature of the models and algorithms, a great many technical applications for parallel computers are custom-developed by the end users. The specification is typically some combination of a prototype program written in a desktop-based high-level application (MATLAB, etc.) and "prose" that attempts to capture the particular model, system, or algorithm. A parallel programming specialist then writes the application in C or Fortran with MPI (Message Passing Interface) for inter-processor communication and synchronization -- relatively complex, low-level programming. Only after the application has been developed for the high-performance computer can it be executed, allowing testing and scaling with the real data.
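To convey the flavor of that low-level work, consider the explicit decomposition and communication MPI requires even for something as simple as a global mean. The article's specialists would write this in C or Fortran; Python's mpi4py bindings are used in the sketch below only for brevity, and the data sizes are invented:

    # Even a global mean requires explicit data decomposition, partial
    # sums on each rank, and a collective reduction to combine them.
    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    size = comm.Get_size()

    n_local = 1000000                   # elements owned by this rank
    local = np.random.rand(n_local)     # this rank's slice of the data

    local_sum = np.array([local.sum()])
    global_sum = np.zeros(1)
    comm.Reduce(local_sum, global_sum, op=MPI.SUM, root=0)

    if rank == 0:
        print("mean =", global_sum[0] / (n_local * size))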
This process is fundamentally flawed: it is slow, expensive, inflexible, and remarkably error-prone. First, the specialist -- a scarce and expensive resource -- must correctly interpret the end user's specification. Second, the end user must then interpret the test results to determine whether any problems encountered are due to his specification or to the specialist's code.
Because each of these steps can take several months, scientists and engineers are limited in how much they can iterate on their algorithms and models. And remember -- this all happens before they ever get to the actual use of their models, solving the problems they set out to solve. More than 75 percent of the "time to solution" is spent programming the models for use on high-performance computers, rather than developing and refining them up front, or using them in production to make decisions and discoveries.
The need for interactivity
Even after the algorithm or model has been reprogrammed for a parallel computer, we are not out of the woods, because the resulting usage mode today is strictly batch-oriented rather than interactive. Given how long typical runs take -- hours, days, or even weeks -- iteration and refinement of the algorithm or model is severely limited. In many cases, the correct algorithm, approach, or key to the problem may not be known up front, and is typically discovered only by running the code on the high-performance computer with the actual input data. And it is highly likely that a change in the user's specification -- due to errors, changed algorithms, changed application requirements, or changed hardware -- will force a major rewrite by the specialist.
When reducing time-to-solution is the goal, it is the engineers' and scientists' time that is typically the precious resource, not computing cycles. During the model/algorithm development phase, interactivity is critical. Yet although interactive use can be taken for granted with desktop science and engineering tools, to date it has simply not been available in high-performance computing, which remains firmly in the batch world.
Solving the application problem
What does the ideal solution for custom HPC application development look like? It must enable scientists and engineers to write applications in their favorite VHLLs on their desktops, and have those applications automatically parallelized and able to run interactively on high-performance computers. In other words: let the end users continue to work in their preferred environments ("no change in religion"), hide the parallel programming challenges from them, and give them easier access to the parallel computing power of high-performance computers. They can then prototype and scale in a tightly coupled process, in real time, with fine-grained control of both algorithms and data, transparently harnessing the HPC's computing resources.
With this approach, you could write just enough of the application in a VHLL to start testing with real data, and then refine it incrementally. In other words, with an interactive workflow, the time to "first calculation" (or, in a sense, "partial satisfaction") can be minutes, rather than the months or years required to first program the parallel application. Interactive usage would allow you to observe the application run, stop it, change parameters, change what is observed, and (possibly) decide to refine the application further. Such a process of incremental refinement and continuous testing is key to modern high-productivity software development methodology -- learning happens only through feedback.
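What might such a session look like in practice? The sketch below is purely hypothetical -- the module, connection call, and array constructors are all invented for illustration -- but it captures the idea of a VHLL session whose heavy operations are transparently dispatched to a parallel back end:

    # Hypothetical 'no change in religion' session: an ordinary-looking
    # interactive script in which large arrays live on the cluster and
    # operator overloading routes the heavy work there. All names below
    # (hypothetical_hpc, connect, rand) are invented for illustration.
    from hypothetical_hpc import connect

    hpc = connect("cluster.example.org")   # attach to the HPC resource

    A = hpc.rand(100000, 100000)           # distributed array on the cluster
    x = hpc.rand(100000)

    y = A @ x          # the multiply runs in parallel on the back end
    print(y[:10])      # only the ten requested values travel back

The user iterates in real time exactly as on the desktop; only the location of the computation has changed.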
There is some early evidence that the industry is gravitating toward such an approach. In recent months, several VHLL software vendors have entered the market with parallel solutions that bridge desktops and high-performance computers, and the growing choice and competition is ultimately good news for end users. Needless to say, there is no single silver bullet, no one right choice for every situation, and ultimately users will vote with their wallets. Following are four predictions as to what the landscape will look like three to four years out:
1. There will be much less C, Fortran, and MPI coding by parallel programming specialists;
2. VHLL-specific workgroup environments -- such as Distributed Computing Toolbox from The MathWorks -- will do well for algorithms that naturally lend themselves to being partitioned into independent tasks (sometimes known as coarse-grained or "embarrassingly parallel"; a minimal sketch of this style follows this list), and for teams where a single software tool (e.g., MATLAB) is the preferred technical computing environment;
3. There will emerge the notion of an interactive parallel computing platform, software that automatically and transparently links high-performance computers with many popular desktop tools. Such a platform will likely get the most traction in particularly large and complex problems requiring both fine- and coarse-grained parallelism, and in heterogeneous environments where multiple VHLLs are deployed and flexibility is key. There are several research projects underway in this area, and Star-P from Interactive Supercomputing is the first commercial product in this category.
4. Some efforts -- at Sun, IBM, and elsewhere -- are currently underway to develop new computer languages that may simplify the programming of parallel computers. These efforts focus on programming petascale-class computers, in roughly the 2010 time frame. How broadly the new languages are adopted will likely be inversely proportional to how "new" their usage constructs are.
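As a concrete illustration of the coarse-grained style referenced in prediction 2, here is a minimal Python sketch using the standard multiprocessing module; the model function and parameter grid are invented:

    # 'Embarrassingly parallel' parameter sweep: each task is independent,
    # so no inter-task communication is needed -- the tasks are simply
    # farmed out to a pool of workers.
    from multiprocessing import Pool

    def run_model(params):
        """One independent simulation run (a stand-in for a real model)."""
        a, b = params
        return sum((a * i + b) ** 0.5 for i in range(100000))

    if __name__ == "__main__":
        grid = [(a, b) for a in range(1, 11) for b in range(1, 11)]
        with Pool(processes=8) as pool:
            results = pool.map(run_model, grid)   # one task per parameter set
        print(len(results), "runs completed")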
Once the programming barriers are lowered, many more scientists and engineers can be expected to experience parallel computing for the first time. Development of custom HPC codes will take weeks or months instead of months or years, and high-performance computers will be used just as interactively as our desktop PCs are today.
About the Author
Ilya Mirman is Vice President of Marketing at Interactive Supercomputing (ISC). Prior to joining ISC, Ilya was Vice President of Marketing at SolidWorks, a provider of mechanical design software. In this role, Ilya helped establish SolidWorks as the standard in 3D mechanical design software, used by hundreds of thousands of engineers worldwide. Prior to that, he led the product development team at Corning-Lasertron to introduce a new line of high-speed laser transmitters for the telecom industry. Ilya holds a BSME from the University of Massachusetts, an MSME from Stanford University, and an MBA from MIT's Sloan School. For more information about Interactive Supercomputing visit www.interactivesupercomputing.com.