July 27, 2009
A controversy is brewing regarding one of the most lucrative HPC applications ever invented: high frequency trading. Also known as algorithmic trading, high frequency trading (HFT) is the process by which computers are used to execute trading orders at extremely low latencies -- on the order of milliseconds. The speed of the HPC machines and the associated communication technologies make this all possible.
According to the TABB Group, HFT accounts for 73 percent of all equity trading volume in the US and those trades are executed by only about 2 percent of the trading firms operating in the country. At more than $21 billion annually, HFT-generated revenue is on par with that of commercial gambling.
In fact, in some ways HFT operates like a low-latency casino, and not a particularly honest one at that. A recent New York Times article pointed out that HFT players are using special access to data streams and high-powered technology as a way to game the system. Retail and institutional investors without this type of setup are at a disadvantage. From the article:
High-frequency traders often confound other investors by issuing and then canceling orders almost simultaneously. Loopholes in market rules give high-speed investors an early glance at how others are trading. And their computers can essentially bully slower investors into giving up profits — and then disappear before anyone even knows they were there.
The "loopholes" are something called flash orders, whereby bids that are not immediately filled are "flashed" to a proprietary data feed that participants can buy into. Once there, the computers can do their work out of the public eye, taking advantage of the private information to know when to buy and sell at the most opportune price points. For example, the algorithm can buy low as it sees demand building, and sell high as demand grows (which it has helped fuel). If mere mortals were doing this, it would be called front running, which happens to be illegal.
Algorithm-driven transactions that buy low and sell high often yield just pennies per share on individual trades, but over billions of trades this adds up (a rough calculation follows the quote below). Another New York Times piece pointed out the major problem with the flash scheme:
Although anyone can gain access to flash orders by paying a fee, they are useful only to traders who have computers powerful enough to act on the data within milliseconds. In recent years, some of the largest financial companies, including Goldman Sachs, have earned enormous profits with such computers, which are very expensive and often housed right next to the machines that power the marketplaces themselves.
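A back-of-the-envelope calculation shows how quickly those pennies compound. The volume and edge below are illustrative assumptions, not figures reported in either Times article:

    # Hypothetical numbers: a one-cent edge repeated at scale.
    edge_per_share = 0.01          # dollars captured per share
    shares_per_day = 100_000_000   # assumed daily volume for one large HFT firm
    trading_days = 250             # trading days per year

    daily_profit = edge_per_share * shares_per_day    # roughly $1,000,000 per day
    annual_profit = daily_profit * trading_days       # roughly $250,000,000 per year
    print(f"${daily_profit:,.0f}/day -> ${annual_profit:,.0f}/year")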
In a recent interview (video) on Bloomberg, former NASDAQ chairman Alfred Berkeley noted that HFT players are not looking for long-term investments, but rather for "temporary imbalances in supply and demand," which can be exploited in very short time frames. At one point, Berkeley characterized the HFT players as "scalpers," yet he maintains that they serve a real purpose in maintaining market liquidity. According to him, the real question is whether HFT is putting the average investor at a disadvantage. His take is that the current structure is probably skewed toward speculation rather than investing, and needs to be rebalanced.
For more of a point-counterpoint view of the topic, I've posted a recent CNBC segment with Joe Saluzzi of Themis Trading and Irene Aldridge, partner at Able Alpha Trading. It's a pretty interesting exchange.
Posted by Michael Feldman - July 27, 2009 @ 5:20 PM, Pacific Daylight Time
Michael Feldman is the editor of HPCwire.