August 05, 2010
High frequency trading (HFT) is back in the news, and as seems to be the trend lately, it has become the recipient of yet more criticism. For those of you who invest in the stock market in the old-fashioned way (i.e., via your broker or through your retirement plan), HFT is a type of algorithmic trading that uses high-end computers, low-latency networks, and cutting-edge analytics software to execute split-second trades. Unlike long-term investing, the strategy is to hold the position for extremely short periods of time, the idea being to make micro-profits from large volumes of trades. In the US, it is estimated that 70 percent of the trade volume is executed in the HFT arena.
The recent news relates back to the 1,000-point stock market bounce -- the so-called "Flash Crash" -- that occurred on May 6. At the time, there was plenty of speculation flying around (including some by me) that HFT was involved in one way or another. But some recent detective work by market analysis firm Nanex suggests that questionable behavior by these HFT systems may have had a more direct role in the market chaos back in May.
Alexis Madrigal's article in The Atlantic gives a layman's account of what the Nanex techies uncovered with some clever algorithmic sleuthing. The Atlantic piece, which I came across by way of a related article in Ars Technica, points out the HFT shenanigans were uncovered when Nanex computer engineer Jeffrey Donovan went beneath the covers to look at trading that day in millisecond-level timeframes, a level of granularity that never shows up on stock charts.
What Donovan found was evidence of "quote stuffing," a term that refers to the practice of sending large volumes of bids -- on the order of hundreds or thousands a second -- without the intent of executing a trade. For example, on May 6 there were hundreds of instances in which a single stock was receiving 1,000 bids per second. Since the originating algorithm knew these were false bids that would never be filled, the implication is that it would have an edge on its clueless competition. Madrigal writes:
Donovan thinks that the odd algorithms are just a way of introducing noise into the works. Other firms have to deal with that noise, but the originating entity can easily filter it out because they know what they did. Perhaps that gives them an advantage of some milliseconds. In the highly competitive and fast HFT world, where even one's physical proximity to a stock exchange matters, market players could be looking for any advantage.
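The asymmetry Donovan describes can be made concrete: the firm that generated the noise already knows its own quote IDs and can discard them with a cheap lookup, while every competitor must evaluate the full feed. A minimal sketch, with hypothetical (id, symbol) quote records:

```python
def messages_to_evaluate(feed, own_ids=frozenset()):
    """Return the quotes a market participant must actually evaluate.
    The noise originator passes its own quote IDs and skips them with a
    constant-time set lookup; a competitor, passing nothing, has no way
    to tell noise from signal and must process every message."""
    return [q for q in feed if q[0] not in own_ids]

# 1,000 noise quotes plus one genuine quote.
feed = [(i, "XYZ") for i in range(1000)] + [(9001, "ABC")]
noise_ids = frozenset(range(1000))

print(len(messages_to_evaluate(feed)))            # competitor: 1001
print(len(messages_to_evaluate(feed, noise_ids))) # originator: 1
```

In a regime where co-location with the exchange is worth paying for, that thousand-message gap in per-second workload is exactly the "advantage of some milliseconds" the quote suggests.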
The unfortunate side effect of quote stuffing is that it tends to destabilize the market, presumably as a result of so much "false" information being injected into the system. Worse yet, Nanex found this type of algorithmic spoofing was not just a one-time event that corresponded to the May 6 crash. Apparently, this behavior was (and is) going on systematically. They have uncovered dozens, and perhaps hundreds, of times on any given day when these unexplained quote bursts occur.
The HFT arena is certainly one area in which the technology has evolved so rapidly and with such little transparency and accountability that it seems to threaten the system it was designed to serve. In particular, legacy institutions like the SEC and the exchanges themselves seem to be having a hard time grappling with the consequences of supercomputing and low-latency data feeds. In the Ars Technica article I referred to above, author Jon Stokes notes that even the traders themselves recognize that they're no longer in control:
Informal conversations I've had with money managers and traders indicate that in the wake of the Flash Crash, even the insiders are scared of what the markets have become. Still, they have no choice but to keep trading—not only must they keep trading, but everyone is quietly stepping up their automated trading efforts to avoid getting eaten alive by their competitors' machines. It's a bit like The Sorcerer's Apprentice, except that there's no master magician who can step in and save us in the end.
Posted by Michael Feldman - August 05, 2010 @ 8:30 PM, Pacific Daylight Time
Michael Feldman is the editor of HPCwire.