September 23, 2008
The confluence of the U.S. financial meltdown and this week's High Performance on Wall Street conference in New York might be one of those coincidences that's trying to tell us something. To be honest, I'm not a big believer in cosmic happenstance, but in this case it made me wonder if the financial software models had anything to do with our current economic chaos. I didn't have to look very hard to find some correlation.
A great post by Saul Hansell at the New York Times explained why many of the risk models developed by quants didn't see the brick wall at the end of the tunnel (see How Wall Street Lied to Its Computers). According to Hansell, there were multiple points of failure at these firms, but in many cases the quantitative models themselves hid the risks they were supposed to be revealing. Writes Hansell:
Ultimately, the people who ran the firms must take responsibility, but it wasn’t quite that simple. In fact, most Wall Street computer models radically underestimated the risk of the complex mortgage securities, they said. That is partly because the level of financial distress is “the equivalent of the 100-year flood,” in the words of Leslie Rahl, the president of Capital Market Risk Advisors, a consulting firm. But she and others say there is more to it: The people who ran the financial firms chose to program their risk-management systems with overly optimistic assumptions and to feed them oversimplified data. This kept them from sounding the alarm early enough.
That sentiment reflects a recent conversation I had with Jerry Hanweck of Hanweck Associates, a firm that develops quantitative finance products. He told me some of the high-profile hedge funds that lost a lot of money last year were also relying on limited historical data to drive their models. Especially in high-frequency trading and arbitrage situations, Hanweck thinks traders often misapply their statistics. According to him, when you gather all this random data together and run regression analysis on it, some of the results are going to look reasonable just by chance. "If you try to extract too much from the limited amount of data that we have available to us, you really can overfit the data," he explains.
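Hanweck's overfitting point is easy to demonstrate with a toy regression. The sketch below is purely illustrative (the sample sizes, predictors, and noise series are all invented, not anything from an actual trading model): fitting 25 candidate predictors to 30 observations of pure noise produces a fit that looks impressive in-sample and collapses on data the model has never seen.

```python
import numpy as np

rng = np.random.default_rng(0)

# 60 observations of a "return" series that is pure noise,
# plus 25 candidate predictors that are also pure noise.
n_obs, n_pred = 60, 25
X = rng.standard_normal((n_obs, n_pred))
y = rng.standard_normal(n_obs)

# Fit ordinary least squares on the first 30 observations only.
X_train, y_train = X[:30], y[:30]
X_test, y_test = X[30:], y[30:]
beta, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)

def r_squared(X, y, beta):
    """Fraction of variance 'explained' by the fitted coefficients."""
    ss_res = np.sum((y - X @ beta) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

r2_train = r_squared(X_train, y_train, beta)
r2_test = r_squared(X_test, y_test, beta)

# In-sample the model appears to find real structure in noise;
# out-of-sample it typically does worse than predicting the mean.
print(f"in-sample  R^2: {r2_train:.2f}")
print(f"out-of-sample R^2: {r2_test:.2f}")
```

With enough free parameters relative to the data, "some of the results are going to look reasonable, just by chance," exactly as Hanweck says.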
In some cases, though, the inverse problem occurred. Hansell writes that some models were designed to dilute the risk by looking too far back -- into the last several years of trading history rather than just the last several months -- precisely when things were starting to get dicey. This hid short-term volatility behind a mask of long-term stability. But to keep profits flowing, Wall Street execs had a vested interest (literally) in keeping these less-than-stellar models humming along.
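The lookback-window effect can be sketched in a few lines. The numbers here are invented for illustration (two calm years at 0.5% daily volatility followed by a stressed quarter at 2%), not real market data: measuring volatility over the full history averages the recent spike away, while a short window shows it plainly.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two years of calm daily returns (0.5% vol) followed by
# three months in which daily volatility quadruples (2% vol).
calm = rng.normal(0.0, 0.005, 500)
stressed = rng.normal(0.0, 0.02, 60)
returns = np.concatenate([calm, stressed])

def trailing_vol(returns, window):
    """Annualized volatility over the last `window` trading days."""
    return returns[-window:].std() * np.sqrt(252)

long_vol = trailing_vol(returns, 560)   # the full history
short_vol = trailing_vol(returns, 60)   # just the stressed quarter

# The long lookback reports market conditions far tamer than
# what the most recent quarter actually delivered.
print(f"full-history vol: {long_vol:.1%}")
print(f"3-month vol:      {short_vol:.1%}")
```

A risk system calibrated on the long window would report roughly stable conditions even as short-term volatility had multiplied, which is the masking Hansell describes.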
Many economists think that the 2007 credit crunch that launched the current downward financial spiral was set in motion by the now-notorious collateralized debt obligations, or CDOs. These instruments had become infested with devalued subprime loans, and at some point it became clear to investors that the risk associated with CDOs was a lot larger than originally thought.
According to Hanweck, because of the complexity of CDOs, the risk of these instruments is based on simplified assumptions. In some cases, limits in computational power made these simplifications necessary so that the valuation models could be run. "That's what really started the problems last year and even back in 2005, when GM and Ford had their first batch of hiccups," he says. The nature of these CDOs suggests that the buyers -- investment banks, commercial banks, insurance companies, and other institutions -- were engaging in faith-based capitalism.
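One simplifying assumption commonly cited in CDO modeling is the treatment of default correlation. The Monte Carlo sketch below is a generic illustration, not any firm's actual valuation model, and all the parameters (100 loans, 5% default probability, a 15% loss attachment point, 0.3 correlation) are invented: treating defaults as independent makes a large pool loss look nearly impossible, while letting a common factor (say, falling house prices) drive defaults together makes the same loss routine.

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(2)

# A pool of 100 loans, each with a 5% marginal default probability,
# simulated with a one-factor Gaussian copula. rho = 0 treats
# defaults as independent; rho = 0.3 lets a shared market factor
# drag many loans into default at once.
n_loans, p_default, n_trials = 100, 0.05, 20_000
cutoff = NormalDist().inv_cdf(p_default)   # default threshold

def tail_prob(rho, attachment=0.15):
    """P(pool loss exceeds `attachment`), estimated by Monte Carlo."""
    market = rng.standard_normal((n_trials, 1))       # common factor
    idio = rng.standard_normal((n_trials, n_loans))   # loan-specific
    assets = np.sqrt(rho) * market + np.sqrt(1 - rho) * idio
    loss = (assets < cutoff).mean(axis=1)             # defaulted fraction
    return (loss > attachment).mean()

p_indep = tail_prob(rho=0.0)
p_corr = tail_prob(rho=0.3)
print(f"P(loss > 15%), independent defaults: {p_indep:.4f}")
print(f"P(loss > 15%), correlated defaults:  {p_corr:.4f}")
```

Under independence the 15% loss level essentially never occurs, so a senior tranche priced on that assumption looks nearly riskless; with modest correlation the same loss shows up in several percent of trials. That gap between assumption and reality is the kind of surprise that caught CDO buyers off guard.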
And what about the subprime mortgages that started it all? Well, devising and selling these packages didn't have much to do with computers or quantitative models. Says Hanweck: "That was just plain old greed."
Posted by Michael Feldman - September 22, 2008 @ 9:00 PM, Pacific Daylight Time
Michael Feldman is the editor of HPCwire.