August 23, 2010
High frequency trading (HFT), often called algorithmic or low latency trading, relies on fast computers and even faster networks to execute trades in sub-second or even sub-millisecond timeframes. It has generated massive profits for the firms skilled enough to handle the complexities of the software and hardware.
As such, it has become the dominant method for equity trading in the US, but its popularity is expanding worldwide, especially in Asia. HPCwire got the opportunity to ask Chuck Chon, chief technology officer of SBI Japannext, about the HFT business in Japan and about some of the technology behind it.
HPCwire: We've heard a lot about the prevalence of high frequency trading (HFT) in exchanges. Some analysts put the share of HFT trades at 70 percent. What's the HFT landscape like in Japan?
Chuck Chon: I am using 2009 data. The top 30 of the 107 TSE members account for around 84.2 percent of total market share. Among those 30 players are 14 foreign brokerage houses and 4 Japanese mega-brokerage houses. If we add up the market shares of these 14 foreign brokerages and 4 Japanese mega-brokerage houses, we get 65.1 percent of total market share. We can safely assume that these 18 firms in Japan are engaged in some form of HFT-type proprietary trading strategy or have clients who actively trade using HFT strategies. Therefore, based on the assumptions I have made, without going into further detail, I would say it is perhaps over 50 percent already. The official TSE number for HFT trades in Japan is 30 percent.
HPCwire: What's driving the high frequency trading market?
Chon: One of my former colleagues said to me, "If you do not have it -- HFT -- then you are the only one without it. So you need to have it just to compete." Are they all profitable? I do not think so. I believe competition for an extra edge in the market is fierce and spreads are razor thin. HFT guys need to augment their profits by constantly trying to find inefficiencies in the market. That includes alternative trading venues like SBI Japannext, which can offer an extra edge through significantly smaller tick sizes compared to the Tokyo Stock Exchange.
HPCwire: Can you describe what a typical HFT infrastructure looks like today?
Chon: I believe HFT infrastructures have gotten much simpler, for a few simple reasons. First, colocation rack space is extremely expensive, so the aim is to reduce the colocation footprint as much as possible. Second, firms will eliminate any unnecessary hardware and software applications that get in the way of latency. A good analogy would be building a racing car.
HPCwire: What elements of the infrastructure are HFT firms focusing on to get a technological edge over their competition?
Chon: In HFT trading, fill ratio is a critically important parameter. Whoever is first to discover inefficiencies in prices and first to capture them gains the edge in trading. HFT guys will go the extra mile to get there. Therefore, I believe their focus will revolve around latency-busting technologies.
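As a rough sketch of what "fill ratio" means here, it is simply the fraction of submitted order quantity that actually gets executed; the order structure and numbers below are invented for illustration.

```python
# Minimal sketch: computing fill ratio (filled quantity / submitted quantity).
# The Order structure and the sample orders are illustrative, not from any
# real trading system.
from dataclasses import dataclass

@dataclass
class Order:
    submitted_qty: int
    filled_qty: int = 0

def fill_ratio(orders):
    """Fraction of total submitted quantity that was actually executed."""
    submitted = sum(o.submitted_qty for o in orders)
    filled = sum(o.filled_qty for o in orders)
    return filled / submitted if submitted else 0.0

orders = [Order(100, 100), Order(200, 50), Order(100, 0)]
print(fill_ratio(orders))  # 150 filled of 400 submitted -> 0.375
```

A strategy that spots a price inefficiency but gets filled on only a small fraction of what it submits captures little of the edge, which is why latency and fill ratio are so tightly linked.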
As an example, most likely all critical applications will run in memory. If processes need to communicate across boxes, they will most likely opt for multicast instead of TCP. If they need to communicate over a WAN link, they will most likely opt for low latency carriers and invest in high-end network equipment to cut down on device latencies.
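The multicast-over-TCP preference can be sketched as follows: one UDP datagram sent to a multicast group reaches every joined receiver, so the publisher carries no per-subscriber connection state. The group address, port, and tick format below are arbitrary examples, not from any real feed.

```python
# Illustrative sketch of multicast market-data publishing. The group
# address, port, and tick encoding are invented for the example.
import socket
import struct

MCAST_GROUP = "239.1.1.1"   # example administratively-scoped group
MCAST_PORT = 5007

def encode_tick(symbol: str, price: float) -> bytes:
    """Pack one tick as a compact ASCII datagram payload."""
    return f"{symbol} {price:.2f}".encode("ascii")

def make_publisher(ttl: int = 1) -> socket.socket:
    """UDP socket configured for multicast publishing.
    TTL = 1 keeps datagrams on the local network segment."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL,
                    struct.pack("b", ttl))
    return sock

# Usage (a single datagram reaches every receiver joined to the group):
#   make_publisher().sendto(encode_tick("7203.T", 2450.5),
#                           (MCAST_GROUP, MCAST_PORT))
```

A receiver would join the group with the `IP_ADD_MEMBERSHIP` socket option; the appeal for latency-sensitive feeds is that there is no handshake, retransmission, or head-of-line blocking as with TCP, at the cost of reliability.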
HPCwire: What does it currently take, money-wise, to get into the high frequency trading business? Is it just for big financial institutions with deep pockets?
Chon: My area of involvement for the past 20 years was exclusively around index arbitrage HFT trading strategies. Without going into too much detail, the fund size was in the billions of dollars and funding cost was near the LIBOR rate. The point here is that size matters in this business.
HPCwire: How do you see HFT systems evolving over the next several years? Will it be all about reducing latency or are there other areas that you think will become more critical?
Chon: As long as technologies continue to improve and even a microsecond can be shaved, the hard-core HFT guys will most likely go after it. I believe HFT systems will evolve until they achieve theoretical zero latency. The other critical area of development going forward may be smart order routing (SOR) technologies, to deal effectively with the proliferation of liquidity pools and the fragmentation of the market.
SBI Japannext, the largest and best-established alternative trading venue, was responsible for triggering the SOR race among the global brokerage houses in Japan. SOR has now become, "If you don't have it -- SOR -- then you are the only one without it. So, you must have it in order to compete."
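At its core, SOR decides which venue (or mix of venues) should receive an order given the quotes each one is showing. A minimal sketch, with invented venue names, quote fields, and prices:

```python
# Minimal sketch of a smart order router: split a buy order across venues,
# cheapest displayed offer first. Venue names, quotes, and prices are
# invented for the example.
from dataclasses import dataclass

@dataclass
class Quote:
    venue: str
    ask: float   # best offer price shown at this venue
    size: int    # shares available at that price

def route_buy(quotes, qty):
    """Greedily allocate a buy order across venues by ascending ask price.
    Returns a list of (venue, shares, price) slices."""
    plan = []
    remaining = qty
    for q in sorted(quotes, key=lambda q: q.ask):
        if remaining <= 0:
            break
        take = min(remaining, q.size)
        plan.append((q.venue, take, q.ask))
        remaining -= take
    return plan

quotes = [
    Quote("VenueA", 100.5, 300),
    Quote("VenueB", 100.4, 200),  # finer tick size -> price improvement
]
print(route_buy(quotes, 400))
# -> [('VenueB', 200, 100.4), ('VenueA', 200, 100.5)]
```

A production router would also weigh venue latency, fees, and the probability of the displayed liquidity still being there on arrival, but the price-priority split above is the basic idea behind the SOR race described here.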
HPCwire: A lot of criticism has been leveled at HFT -- at least in the US. Some believe it can greatly magnify market volatility, creates unfair competition for traditional investors, and tips the balance of market activity from long-term investing to speculation. What's your perspective?
Chon: I am sure there are others more qualified than I am to comment on the effect HFT has had on the market. From my perspective, in the HFT business I was involved in, we got paid to provide liquidity when the market needed liquidity, and we got paid to take liquidity when the market needed us to take it. We did it well, and we did it abiding by the trading rules mandated by the exchanges and the compliance rules outlined by regulatory institutions. We took pride in helping the market become more efficient.
I believe HFT gets a bum rap because of the highly proprietary nature of the business and the lack of transparency about how the strategies impact the market. I am quite certain that if HFT had existed during the 1920s, HFT guys would have gotten a bum rap for triggering the Great Depression. Having said that, if the critics of HFT were able to look under the hood of a typical HFT strategy, they might be surprised to find that it is perhaps based on some simple concepts -- no different from what normal investors would do manually.
Chuck Chon, CTO at SBI Japannext, will be speaking at the Trading Architecture Asia 2010 conference, being held August 31 to September 2. HPCwire is a proud media partner of this event.