June 22, 2007
... or is it the other way around?
If you want to know where high performance computing is headed, just follow the money. In particular, look at how aggressively Wall Street is applying advanced computing infrastructure in its quest to expand profits. A Microsoft-sponsored capital markets survey released this week showed that nearly 80 percent of the industry respondents said they would be expanding their HPC capacity in the next 12 to 18 months. Microsoft cites the demand for faster financial data analysis as a driver, but the rapid increase in the volume of market data is also fueling Wall Street's interest in high performance infrastructure.
The relentless downward spiral in the cost of computing hardware is helping to make this build-out possible. But almost half of the survey respondents reported that performance was more important to them than price. This explains why the financial industry always seems to be the first one in line when products based on newer technologies like the Cell processor, GPUs, or stream computing are announced. That said, most Wall Street firms are still using vanilla clusters to drive their financial analytics and modeling codes. And 24 percent of the respondents said they plan to increase their HPC capacity by 1,000 nodes or more by the end of 2009.
"This research confirms what we've been witnessing in the market -- that capital markets firms remain on the cutting edge, and that the dot-com bust of the early 2000s has now turned into a period of reinvestment for firms seeking technologies to help them grow," said Craig Saint-Amour, director of capital markets solutions in the U.S. Financial Services Group at Microsoft.
But it's not just hardware. Software is at the center of the revolution going on in the financial services industry. Algorithmic trading is all the rage on Wall Street. Using low-latency exchange data feeds that can deliver up to a million messages per second, algorithmic trading platforms are proliferating. As a result, the number of human traders is plummeting at the same time the number of actual trades is skyrocketing. According to Aite Group LLC, a consulting group for the financial services industry, algorithmic trading accounted for about one third of total equities trading volume at the end of 2006. By the end of 2010, the firm estimates that approximately half of all equities trading will be done algorithmically.
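To make the idea concrete, here is a minimal sketch of the kind of signal logic an algorithmic trading platform runs against a tick stream. The moving-average crossover rule and all names here are illustrative assumptions, not any firm's actual strategy; production systems process feeds orders of magnitude faster and with far more sophisticated models.

```python
from collections import deque

class CrossoverStrategy:
    """Toy trading signal: BUY when a short moving average of prices
    crosses above a long one, SELL on the reverse crossing."""

    def __init__(self, short=5, long=20):
        self.short_win = deque(maxlen=short)
        self.long_win = deque(maxlen=long)
        self.position = 0  # -1 short, 0 flat, +1 long

    def on_tick(self, price):
        """Process one price tick; return 'BUY', 'SELL', or None."""
        self.short_win.append(price)
        self.long_win.append(price)
        if len(self.long_win) < self.long_win.maxlen:
            return None  # not enough history yet
        short_ma = sum(self.short_win) / len(self.short_win)
        long_ma = sum(self.long_win) / len(self.long_win)
        if short_ma > long_ma and self.position <= 0:
            self.position = 1
            return "BUY"
        if short_ma < long_ma and self.position >= 0:
            self.position = -1
            return "SELL"
        return None
```

Feeding a rising price series through `on_tick` eventually produces a BUY signal once the short window pulls above the long one; the per-tick work is deliberately constant-time, since at a million messages per second there is no budget for anything else.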
And as more firms automate their trade processes, the competition to deploy the smartest, fastest trading software is escalating. Financial firms are scrambling to hire quantitative analysts, or quants, the computer science geeks who devise killer algorithms for financial analysis applications. The idea is not just to outperform traders, but to outperform the competition's software as well. For some classes of transactions, even a millisecond interval can mean the difference between profit and loss. In this type of environment, mere mortals don't have a chance.
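When a millisecond separates profit from loss, firms obsess over per-message handler latency. The sketch below shows one simple way to characterize it: time each message and report percentiles rather than an average, since it is the tail that kills you. The function and message format are hypothetical illustrations.

```python
import time

def time_handler(handler, messages):
    """Time a message handler per-message and report latency
    percentiles in microseconds (tail latency matters most)."""
    latencies = []
    for msg in messages:
        t0 = time.perf_counter()
        handler(msg)
        latencies.append((time.perf_counter() - t0) * 1e6)
    latencies.sort()
    return {
        "p50_us": latencies[len(latencies) // 2],
        "p99_us": latencies[int(len(latencies) * 0.99)],
    }
```

In practice shops measuring at this granularity use hardware timestamps and kernel-bypass networking; wall-clock timing like this only bounds the software side of the budget.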
"I don't think traders are going to disappear, but if you look at salaries on Wall Street as one indicator of the future, there are quants there now that are being paid multi-million dollar bonuses," observes Kevin Pleiter, Director, Global Financial Services Sector at IBM. "These are the ones generating the millions and millions of dollars of profit for these firms."
As trading algorithms evolve, higher levels of intelligence are being built into them. In a recent article at Bloomberg.com, Jason Kelly writes about how the next generation of quants are working toward incorporating artificial intelligence into their codes. For example, the quants are looking into using natural language processing to extract information from news reports and correlate that information with its effect on financial markets. The idea is to mimic human-like intuition, but do so at the speed of the microprocessor. Some of the challenges are immense, but the motivation to take the human out of the loop is just as large.
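As a crude stand-in for the natural language processing the quants are pursuing, consider scoring a headline by counting sentiment-bearing keywords. The word lists and scoring scheme here are invented for illustration; real systems use trained statistical models, not lookup tables.

```python
# Hypothetical word lists -- a real NLP pipeline would use trained models.
POSITIVE = {"beat", "growth", "upgrade", "record", "surge"}
NEGATIVE = {"miss", "loss", "downgrade", "lawsuit", "plunge"}

def headline_sentiment(headline):
    """Score a news headline in [-1, 1] by keyword counts:
    +1 all positive, -1 all negative, 0 neutral or unknown."""
    words = [w.strip(".,!?").lower() for w in headline.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total
```

Even this toy makes the hard part visible: the score says nothing about which instruments move, in which direction, or how fast, and that correlation step is exactly where the "human-like intuition" the article describes has to come from.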
If you think Kelly's article describes a world of the distant future, you may be surprised to learn that commercial solutions are already emerging. This week at the SIFMA Conference, IBM previewed a software framework that could enable these next generation trading applications. The framework, called System S, provides an enterprise stream processing environment that encapsulates and manages real-time analytics applications. Although IBM steers clear of the AI nomenclature, System S is clearly meant to appeal to those looking for more human-like analysis in their software. Not surprisingly, IBM's first target for this technology is Wall Street. (Our feature article in this week's issue takes a look at how System S works.)
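The stream-processing model underneath frameworks like this can be sketched in a few lines: data flows through a chain of operators, each consuming one stream and producing another. This toy pipeline is a conceptual analogue only, assuming nothing about IBM's actual System S API.

```python
def pipeline(source, *stages):
    """Chain generator stages over a stream -- a toy analogue of a
    stream-processing runtime, not IBM's System S API."""
    stream = source
    for stage in stages:
        stream = stage(stream)
    return stream

def parse(ticks):
    """Operator: parse 'SYMBOL,price' lines into typed tuples."""
    for line in ticks:
        symbol, price = line.split(",")
        yield symbol, float(price)

def only(symbol):
    """Operator factory: filter the stream to one symbol."""
    def stage(stream):
        for sym, price in stream:
            if sym == symbol:
                yield sym, price
    return stage

raw = ["IBM,105.2", "MSFT,30.1", "IBM,105.4"]
ibm_ticks = list(pipeline(raw, parse, only("IBM")))
```

Because each stage is a lazy generator, records flow through one at a time rather than in batches; an enterprise runtime adds what this sketch omits, namely distribution across nodes, fault tolerance, and management of the running applications.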
While these super-intelligent financial analytics codes may be battling one another in the not-too-distant future, apparently you don't need software of that caliber to beat the traders. Even with today's technology, the quants' algorithms have done a decent job of humbling the humans. As Kelly notes in the Bloomberg article:
"The computers have done well. A November 2005 study by Darien, Connecticut–based Casey, Quirk & Associates, an investment management consulting firm, says that from 2001 to '05, big-cap U.S. stock funds run by quants beat those run by nonquants. The quants posted a median annualized return of 5.6 percent, while nonquants returned an annualized 4.5 percent. Both groups beat the Standard & Poor's 500 Index, which returned an annualized negative 0.5 percent during that period."
The shape of things to come?
As always, comments about HPCwire are welcomed and encouraged. Write to me, Michael Feldman, at email@example.com.
Posted by Michael Feldman - June 21, 2007 @ 9:00 PM, Pacific Daylight Time
Michael Feldman is the editor of HPCwire.