There were a couple of stories floating around the Intertubes in the past week or so that reminded me of how little we know about large classes of HPC applications. That’s not a good thing.
The first story is about the arrest of former Goldman Sachs computer programmer Sergey Aleynikov, who allegedly made off with proprietary trading software the firm used to execute high-volume, low-latency automated trading. The news was covered in Bloomberg and elsewhere, and is a compelling tale of espionage in the financial services industry. Here’s the money quote, so to speak, from the Bloomberg report:
The proprietary code lets the firm do “sophisticated, high-speed and high-volume trades on various stock and commodities markets,” prosecutors said in court papers. The trades generate “many millions of dollars” each year.
The second story concerns espionage too — in this case, the more old-fashioned kind. Apparently the National Security Agency (NSA) is planning to construct a $2 billion datacenter in Utah that will eventually consume 65 megawatts to power new supercomputers. The agency’s current facility at Fort Meade, Maryland, is already power-constrained, preventing the NSA from installing any more supers at that site. The Salt Lake Tribune reported that the new Utah datacenter will be used to support the NSA’s intelligence-gathering mission. The Tribune used congressional budget documents to squeeze a bit more detail from the story:
The supercomputers in the center will be part of the NSA’s signal intelligence program, which seeks to “gain a decisive information advantage for the nation and our allies under all circumstances” according to the documents.
Of course, the speculation is that the NSA is using these supers to perform domestic spying — a program begun under the Bush administration and now being continued under the Obama regime. The Fort Meade machine is a Cray “Black Widow” super, which is reportedly sifting through emails and phone conversations to find out who’s been naughty and nice.
The nexus of these two stories is that secrecy prevents the public from knowing the full breadth of HPC applications. In the case of government, it’s for national security reasons; for financial institutions, it’s to maintain competitive advantage. But it’s not just the financial industry and government intelligence domains (although these two areas are probably the most discreet when it comes to technology transparency). You’ll notice, for example, that the energy industry, commercial biotech companies, and automobile/aerospace manufacturers aren’t giving public tours of their HPC datacenters either.
Even the TOP500 list reflects this secretive nature. The list is quickly becoming an anonymous record of supercomputing, where many of the users are only listed generically (for example, Financial Institution, Government, IT Provider, Semiconductor Company, and so on). This is especially true as you move toward the end of the list, where there are fewer public institutions. On the June 2009 TOP500 list, 78 of the bottom 100 supercomputers are listed anonymously.
By contrast, HPC being performed by national labs, supercomputing centers, and academic institutions is highly publicized, and tends to be very visible on the TOP500 list. These organizations are constantly on the prowl for grants, funding, and other sorts of collaboration, so it pays for them to advertise what they’re up to. Plus researchers tend to be a talkative bunch anyway. I suspect this is why the public generally associates supercomputing only with applications like climate modeling and the search for galactic black holes.
What’s the result of all this secrecy? Besides giving the public a skewed view of the industry, it also makes the technology invisible to the broader developer community. Consider that most HPC apps are still implemented in legacy languages like Fortran and C, while “public” applications for personal computers or the Web use more modern software frameworks like Java, .NET, and Python. Even though HPC is not a volume industry in terms of software licenses, if more codes were public, you’d probably see much more rapid development of libraries and tools (which is one reason why CUDA software has developed so quickly). Keeping software in silos makes for a lousy ecosystem.
The other aspect of secrecy is that it encourages the kind of bad behavior that Aleynikov and the US government are being accused of. There’s nothing inherently wrong with protecting state secrets and proprietary IP, but eventually the whole model can become self-defeating. Consider this: arguably, two of the biggest catastrophes of this young 21st century are the US intelligence failures surrounding 9/11 and Iraq, and the collapse of the financial industry. Both sets of institutions relied on keeping information siloed to such an extent that even they couldn’t explain their own data. And when the secrets are lies, nobody will know until it’s too late.