All sorts of interesting software used to be created by individuals and small teams. My college roommate, for example, was a successful video game developer in the 1980s. He coded all aspects of the games himself, from the graphics to the game logic. But today, game development requires a team of programmers and many other specialized talents, including artists, musicians and lawyers to license the most popular characters.
HPC is no different. Developing code often requires substantial effort. Even relatively small applications benefit from experts in areas that are rarely mastered by a single individual. Let’s imagine that a biologist has invented a way to analyze images of cells that provides a more accurate assessment of the functioning of the cell’s membrane. The algorithm is computationally intensive and most projected uses will study large numbers of cells.
What mix of people and skills is needed to develop this code?
Initially, the idea is likely to come directly from a domain expert. These are people who understand the problem being solved. In our image processing example, the biologist understands the workings of cells and more specifically the science behind the processing to be done. At some point, the software will produce a result and somebody has to know if that result is correct and has value.
Some domain experts are quite knowledgeable about the math used in their fields, and some may have experience running these algorithms on small to medium-sized HPC systems. But typically the domain experts will want or need help in creating scalable, parallel implementations of their code.
Enter numerical algorithms specialists, who typically have math, applied math or perhaps computer science backgrounds. Their in-depth understanding of numerical methods enables them to invent new ways to do equivalent computations that meet targets for accuracy and parallelism within the constraints of specific hardware. They may have an interest in and understanding of the science motivating the work … or they may not.
The relative importance of development time vs. delivered performance needs to be weighed carefully, as this will affect basic development strategy. If the code is needed quickly and is expected to change often, it may be appropriate to develop in an environment designed for speed of implementation, such as MATLAB or Python, and use a tool like Star-P to obtain scalability.
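To make the trade-off concrete, here is a minimal sketch of what a "speed of implementation" prototype might look like in plain Python with NumPy. The membrane_score function and its simple thresholding are hypothetical stand-ins invented for illustration, not the biologist's actual algorithm.

    import numpy as np

    def membrane_score(image, threshold=0.5):
        # Toy stand-in for the real membrane analysis: score one cell
        # image as the fraction of pixels above a brightness threshold.
        mask = image > threshold        # crude "membrane" pixels
        return mask.mean()              # fraction of bright pixels

    # Prototype run over a batch of (here, synthetic) cell images.
    images = [np.random.rand(512, 512) for _ in range(100)]
    scores = [membrane_score(img) for img in images]
    print("mean score over %d cells: %.3f" % (len(scores), np.mean(scores)))

A scientist can have something like this working in an afternoon, and because each image is processed independently, the loop is exactly the kind of structure that a tool like Star-P, or any other parallelization layer, is designed to scale up.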
If the project is just a collaboration between two brilliant people, then life is easy. But today's codes usually require a team of people working over an extended period of time. This raises the need for more skills in the development group.
Oftentimes, our domain expert and numerical specialist will lack expertise in the more general field of software engineering. How will the code be structured to facilitate maintenance and growth in functionality over time? What are the potential hardware platforms to be supported, and how does this affect code architecture? Which tools, libraries and middleware will be adopted as critical elements in the project, and which will be avoided? What is the build process? Code repository? How will a user install the code? A successful project needs some attention to architecture and the development process.
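On the installation question, for instance, a Python-based project might answer it with a standard setuptools script. The sketch below is minimal, and the package name, version and dependencies are hypothetical placeholders:

    # setup.py -- hypothetical packaging script for the cell-analysis code
    from setuptools import setup, find_packages

    setup(
        name="cellmembrane",           # hypothetical package name
        version="0.1.0",
        packages=find_packages(),      # find all Python packages in the tree
        install_requires=["numpy"],    # declared third-party dependencies
        description="Membrane analysis of cell images",
    )

With a script like this in the repository, "how will a user install the code?" has a one-line answer: python setup.py install.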
And of course, any software engineer worth his keyboard will quickly raise the issue of quality assurance. A project will need bug tracking, automated regression testing and metrics to ensure quality. Some software engineers are quite good at enforcing a thorough quality discipline, but doing it well often requires a dedicated person focused solely on maintaining the test infrastructure, implementing tests, and managing and assessing the overall quality of the code. The art and science of quality assurance have evolved to the point where the person most expert at building a product is unlikely to be the one most expert at building and running a quality assurance operation.
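To give a flavor of what automated regression testing means in practice, here is a minimal sketch using Python's standard unittest module. It reuses the hypothetical membrane_score stand-in from the earlier sketch, so the expected values are illustrative rather than real:

    import unittest
    import numpy as np

    def membrane_score(image, threshold=0.5):
        # Same toy stand-in for the real membrane analysis as above.
        return (image > threshold).mean()

    class TestMembraneScore(unittest.TestCase):
        def test_bright_image_scores_one(self):
            # Regression check: an all-bright image must score 1.0.
            self.assertAlmostEqual(membrane_score(np.ones((64, 64))), 1.0)

        def test_dark_image_scores_zero(self):
            # Regression check: an all-dark image must score 0.0.
            self.assertAlmostEqual(membrane_score(np.zeros((64, 64))), 0.0)

    if __name__ == "__main__":
        unittest.main()

Run automatically after every change, a growing suite of tests like these is what keeps a multi-person code base honest.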
One area of assessment that often needs special attention — especially in HPC — is performance tuning. At first look, performance tuning may seem as easy as running a benchmark and working on the areas that appear to be performing badly. But if you look at successful organizations or projects, you will usually find one or more people who have made performance analysis, benchmarking and tuning their life's work, and who make a tremendous difference in delivered performance. Most software is simply too complicated, and the hardware it runs on too variable, to predict and optimize performance in the abstract. Only the very largest and longest-lived software products succeed in building a useful and accurate performance simulation of their operation. In most cases, an organization develops a set of benchmarks and then adds to and refines the set over time to keep it representative of customer uses.
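A first benchmark really is only a few lines; what the specialists add is the judgment to grow such timings into a suite that stays representative. A minimal sketch, again using the hypothetical membrane_score stand-in and Python's standard timeit module:

    import timeit
    import numpy as np

    def membrane_score(image, threshold=0.5):
        # Same toy stand-in for the real membrane analysis as above.
        return (image > threshold).mean()

    # One benchmark case: a single large synthetic cell image.
    image = np.random.rand(2048, 2048)

    # Time several repetitions and report the best, the usual way to
    # reduce noise from other processes sharing the machine.
    best = min(timeit.repeat(lambda: membrane_score(image), repeat=5, number=10))
    print("best of 5 runs, 10 calls each: %.4f s" % best)

A real suite would vary image sizes, cover every performance-critical kernel and record results over time, so that regressions show up as soon as they are introduced.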
So continuing the example above, our biologist had his idea, found somebody to express it in useful mathematics, and had help in structuring the software, testing and tuning the performance. What is left?
Assuming this new code will be used outside of the organization that developed it — and even if it isn't — the biologist should not underestimate the work required, and the benefit provided, by accurate and thorough documentation. Depending on the nature of the code, this may involve more than just writing down the features. The more innovative the code is, the more attention will be needed to design an effective set of documentation (perhaps including training materials). Again, in most cases, finding people with experience and skill in building documentation sets will be of great benefit.
With a group of people developing code, running tests and benchmarking, hardware requirements will certainly have grown to the point where system administration tasks begin to take a significant amount of time. Oftentimes, the biggest problem is that one or more of the senior and brilliant software engineers will take on system administration as a "hobby." It's one way to get started, but not a long-term solution. This area is also changing rapidly with the availability of hosting centers and cloud-based resources, but even a small investigation into using these resources quickly shows that nothing comes for free (in time or money).
Licensing can be a painful issue if there is no strategy from the very beginning of a project. Will the code be open source? What license will be used? Most codes today incorporate, as subsystems, codes developed by others, often under different license terms. Legal advice, and a solid strategy that is understood by your whole team, is a necessity.
To foster teamwork, time and effort must be spent on managing individuals and the group — as the old saying goes, "an army marches on its stomach." In other words, don't forget the beer and pizza. This raises the question of how universal the custom of team building through shared eating and drinking really is (and, specifically, when there is eating and drinking, how often is it pizza and beer?). That prompted a small, informal, international survey to see what others do to bring teams together and build camaraderie (see Figure 1).
Figure 1: Results of an informal, international survey of team-building customs.
One of the worst things that can happen to a project is for development to stop before completion due to a lack of funds — or, if it is an open source group project, for it to remain incomplete because of a loss of contributors. Diligent tracking of progress against resources is required. Maintaining funding and contribution levels usually requires continuous effort on the part of the project leader.
In conclusion
Success depends on contributions from a wide variety of skill sets. Those skill sets are found in people who each have their own work habits, communication styles and expectations. At least in the US, they are also likely to literally come from a surprisingly large number of nations. So for optimum team results, have your empanadas, wine and some karaoke at the ready and watch your development project take off.
About the Author
David Rich is the Vice President of Marketing at Interactive Supercomputing. David brings to ISC more than 25 years of marketing, sales and support experience in both large and entrepreneurial high-tech companies. At AMD he directed the company's entry into HPC, initiating the transition to 64-bit x86 as the HPC processor of choice. While at AMD, he also served as president of the HyperTransport Consortium, a standards organization for high-speed interconnect technology. David's earlier experience includes being the founding manager of the TotalView product line, which has become the de facto standard for parallel and distributed debugging. He served as vice president of Fujitsu System Technologies, which developed high-speed networking technology that was a precursor to InfiniBand. His parallel processing experience started at BBN Technologies, where he worked on the Butterfly series of computers. David received a bachelor's degree in computer science from Brown University.