Although it’s been a full week since the stock market’s 1,000-point plunge and rebound, the sharpest minds on Wall Street still haven’t figured out exactly what caused it. At this point, the best guess is that, yes indeed, high frequency trading was involved, but perhaps not in the way most people were led to believe.
Although no one is really sure what set off the stock market calamity, some analysts have started to piece together the chain of events that spread panic for 10 minutes last Thursday before order was restored. According to a New York Times piece, the main cause was the disparity in how technology is allowed to operate across different exchanges.
After a weekend of analysis, many specialists at the major exchanges no longer believe that a single large sell trade in one stock, like that of Procter & Gamble, was the trigger, according to people familiar with the investigation. Instead, they suspect that a mismatch in rules between the older New York Stock Exchange and the younger electronic exchanges set off a frightening sequence of events.
Here’s how it supposedly went down: When the market plunge began, the NYSE powers that be decided to curtail most computer-based trading. That flooded the market with sell orders, and since the computers were offline, those orders just backed up. This forced the sellers to look for buyers on the newer exchanges where electronic trading was still going on. By then, the sell pressure had built up too much, further accelerating the market dive on these secondary exchanges. When the feedback loop had run its course, the algorithms kicked in again and started buying up the bargain-priced shares, restoring most of the lost market value.
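The sequence described above can be sketched as a toy feedback loop. All the numbers, thresholds, and the `IMPACT` factor below are invented for illustration; this is not a model of the actual May 6 event, just a cartoon of backlog-then-spillover dynamics:

```python
price = 100.0
backlog = 0

# Phase 1: the primary exchange curtails computer trading; sell orders
# back up instead of executing.
for _ in range(3):
    backlog += 100

# Phase 2: sellers reroute to a thin electronic venue, where each share
# has outsized price impact, so the pent-up backlog accelerates the dive.
IMPACT = 0.05  # hypothetical price drop per share of sell pressure
while backlog > 0:
    take = min(backlog, 50)   # orders arrive in waves
    backlog -= take
    price -= take * IMPACT
trough = price

# Phase 3: algorithms spot bargain-priced shares and buy, restoring
# most (but not all) of the lost value.
while price < 98.0:
    price += 1.0

print(f"trough {trough:.2f}, recovered to {price:.2f}")
```

The point of the sketch is the asymmetry: the same volume of sell orders that a deep book would absorb quietly becomes a price spiral once it is concentrated onto thinner venues.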
In an article in the Wall Street Journal, Tomi Kilgore talks about how investors have become dependent on “super-fast execution” to do their trades. But, he warns, that emphasis on speed has its pitfalls. Kilgore writes:
One problem with machines is that some will trade on the next best available price, however erroneous that price might be. Ironically, how fast a trade can be made isn’t necessarily the best thing for a client during a “fast market.”
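Kilgore’s point can be made concrete with a toy order book. The prices and sizes below are invented; the bottom level stands in for the kind of one-cent “stub quote” placeholder that some market makers leave on the book, which a machine will happily trade against once the real bids are gone:

```python
# Toy order book: (price, shares available) at each bid level.
book = [
    (60.00, 500),    # best bid
    (59.50, 300),
    (0.01, 10_000),  # stub-quote placeholder, not a serious bid
]

def market_sell(book, shares):
    """Fill a market sell order against whatever bids remain,
    however erroneous the last available price may be."""
    fills = []
    for price, size in book:
        if shares == 0:
            break
        take = min(shares, size)
        fills.append((price, take))
        shares -= take
    return fills

# A 1,000-share order exhausts the two real bids, then executes the
# remainder at one cent.
print(market_sell(book, 1000))  # [(60.0, 500), (59.5, 300), (0.01, 200)]
```

Fast execution makes this worse, not better: the machine reaches the erroneous price and trades at it before any human can intervene.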
The market meltdown has managed to encourage some serious navel-gazing among the Wall Street set. For example, Michael Durbin, writing in the New York Times, thinks that high frequency trading is fine and dandy, but he does want to see tighter regulations to ensure the computer systems don’t run amok. Essentially, he’s lobbying the SEC for greater transparency. And then there’s the software itself:
Indeed, the rapid development of automated-trading software and the maddening complexity of even the most simple systems make the introduction of technological errors inevitable. While it’s true that electronic exchanges require trading software to be certified before it is used, there is no market-wide standard for testing the software and nothing to effectively stop a firm from trading with uncertified software.
Meanwhile, David Weidner at MarketWatch seems to want to break Wall Street of its supercomputing habit. He’s not blaming the hardware or the algorithms per se; he just doubts the ability of people to use them for the greater good. He writes:
That’s why the claims by the high-frequency crowd that their beloved computers are benign are disingenuous. Yes, the computers are benign, except when placed in the hands of traders under tremendous pressure to maximize returns for the investors they serve.
My take on this matches pretty closely with Weidner’s. Skimming pennies off millions of trades is probably not society’s optimal use of supercomputers and high-speed interconnects, especially considering the rather questionable role of these systems in improving market liquidity. Perhaps more sobering is the fact that a week after the market meltdown, there’s still no complete explanation of how it happened. And if no one understands the interactions between the computers and the humans that run the markets, how does the system get fixed?