Last week, HPC veteran John Gustafson was named CEO of Massively Parallel Technologies (MPT), a developer of HPC acceleration technology. Using funding from about 300 private shareholders, the Colorado-based company is in the process of commercializing technology that aims to dramatically enhance the performance and utility of high performance computing clusters. As in his previous engagements at Sun Microsystems and ClearSpeed Technologies, where he worked on cutting-edge technology programs, Gustafson joins his new company with the goal of introducing a game-changing product into the HPC market.
“I don’t tend to change companies unless I truly have got something I believe in that I think is going to change the world,” declared Gustafson.
This is his first time in the role of CEO. Gustafson comes to MPT from ClearSpeed, where he had served as the chief technology officer since September 2005. Prior to that, he was the principal investigator for Sun’s High Productivity Computing Systems (HPCS) project. He spent his early career on the technical side, gathering numerous accolades, including the Gordon Bell Award in 1988. That same year he devised the now-famous Gustafson’s Law, which states that any sufficiently large problem can be efficiently parallelized.
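For readers who want the math behind that informal statement, the law is usually written as a scaled-speedup formula, where N is the processor count and s is the fraction of execution time spent in serial code on the parallel system:

```latex
% Gustafson's Law (scaled speedup): the serial fraction s is measured
% on the parallel machine, so the problem size grows with N.
S(N) = N + (1 - N)\,s = N - (N - 1)\,s
```

Because the workload scales with the machine, the speedup approaches N as s shrinks, which is why sufficiently large problems can keep all the processors busy -- in contrast to Amdahl's Law, where a fixed problem size caps the achievable speedup.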
Scott Smith, the previous MPT chief and now chairman of the board, is an entrepreneur who was tasked with raising funds for the company as it prepared to bring its technology to market. Smith previously made money in a number of domains, including citrus farming and publishing children’s books. Kevin Howard, the MPT chief technology officer and co-founder, is a self-taught engineer. Howard and computational physicist Dr. James Lupo are the driving forces behind the company’s HPC acceleration technology.
Notably lacking at the company was someone with an HPC industry background who could relate to the community. While Gustafson is not your typical MBA type, over the last several years at Sun and ClearSpeed, he has split his time between technical and marketing/product development roles. “What the shareholders were looking for was somebody who was actually from the high performance computing industry,” said Gustafson, “who would be likely to know exactly what works and what doesn’t.”
The HPC acceleration technology under development is based on some of the work accomplished under the DARPA HPCS program. MPT was one of a handful of smaller companies that obtained DARPA funding in the first phase of the HPCS program, alongside the original big five HPC system manufacturers (Sun Microsystems, HP, SGI, IBM, Cray). Of these smaller firms, only MPT made it to the second phase of HPCS. It was at this point that Gustafson, who was working at Sun at the time, met up with the Massively Parallel team and became aware of the HPC acceleration work they were developing.
MPT’s HPCS work centered on a software technology that was able to improve inter-processor communication substantially. The new model utilized parallel processors much more efficiently than standard MPI and allowed for improved application scaling. The downside was that it required changes to software that necessitated application recoding.
The company’s larger goal was getting the HPC community to adopt its software. To seed adoption, the MPT developers ported a number of applications, including a BLAST implementation, which they commercialized. According to Gustafson, the people at MPT “were under the misimpression that they could get a large part of the community to do the same,” unaware of the amount of effort this would require for the large repository of legacy HPC codes. At that point, MPT went back to the drawing board and began developing a new technology that did not require users to change their MPI code, while still providing a substantial inter-processor communication speed-up.
Although Gustafson was not willing to offer many details about the new technology, he said the product under development will provide a comprehensive solution that turns Linux clusters into monolithic supercomputers. He hinted that it will be implemented in both hardware and firmware and will transparently accelerate MPI codes. According to him, customers have been asking for something like this for a while, as they look for ways to improve parallel processing on these commodity systems. “People are going to see much higher fractions of peak speed on single applications than they have with typical Linux clusters running MPI,” explained Gustafson, “even though they’ll still be running MPI.”
The goal is not only to make MPI programs run faster, but also to make better use of the computing resources at hand. For a lot of large HPC clusters that have been installed over the past few years — especially the current crop of super-clusters that are working their way up the TOP500 list — the only time all the processors are working in concert is when the Linpack benchmark is run. After the real applications move in, these “supercomputers” become capacity machines. While capacity HPC has its uses, for supercomputing aficionados, job parallelism is boring.
“To me that’s the dirty little secret of HPC right now,” said Gustafson. “We’ve lost sight of true supercomputing, which is capability computing.”
The company has surveyed the landscape and believes it will have a unique product when it comes to market. There are other MPI accelerators or offload engines around, but Gustafson thinks the feature set they are intending to offer will be distinctive, and the underlying technology won’t be easy to duplicate. The company expects to start shipping products in less than a year.