Nvidia Claims 6000x Speed-Up for Stock Trading Backtest Benchmark

By Doug Black

May 13, 2019

A stock trading backtesting algorithm used by hedge funds to simulate trading variants has received a massive, GPU-based performance boost, according to Nvidia, which has announced a 6,250x acceleration to the STAC-A3 “parameter sweep” benchmark.

Using an Nvidia DGX-2 system (in its standard configuration) to run accelerated Python libraries, Nvidia said in one case the system ran 20 million STAC-A3 simulations on a basket of 50 financial instruments in a 60-minute period, breaking the previous record of 3,200 simulations.

The results have been validated by the Securities Technology Analysis Center (STAC), whose international membership includes more than 390 banks, hedge funds and financial services technology companies. In a pre-announcement media briefing, STAC Director Peter Lankford said that in an exercise using 48 instruments, increasing the number of simulations from 1,000 to 10,000 only added 346 milliseconds, “suggesting that a quant can significantly expand the parameter space without significant [time] cost using this platform.”

Backtesting is a way to assess the viability of an algorithmic trading strategy by feeding a model historical data to see if it would have predicted the real-world results. “Exploring more combinations of parameters in an algorithm can lead to more optimized models and thus more profitable strategies,” said Michel Debiche, a former Wall Street quantitative analyst who is now STAC’s director of analytics research.
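For a concrete sense of what a parameter sweep over a backtest looks like, here is a minimal, CPU-only sketch in plain NumPy: it backtests a moving-average crossover strategy on synthetic prices for every pair of window lengths and keeps the most profitable pair. The strategy, window ranges, and data are invented for illustration; the STAC-A3 benchmark itself is vastly larger and, in Nvidia's result, runs on GPUs.

```python
import numpy as np

def backtest_ma_crossover(prices, fast, slow):
    """Return the cumulative return of a simple moving-average
    crossover strategy run over a historical price series."""
    fast_ma = np.convolve(prices, np.ones(fast) / fast, mode="valid")
    slow_ma = np.convolve(prices, np.ones(slow) / slow, mode="valid")
    n = min(len(fast_ma), len(slow_ma))  # align both series at the end
    # Long when the fast average is above the slow average, flat otherwise.
    position = (fast_ma[-n:] > slow_ma[-n:]).astype(float)[:-1]
    daily_returns = np.diff(prices[-n:]) / prices[-n:-1]
    return float(np.prod(1.0 + position * daily_returns) - 1.0)

# Synthetic price history standing in for real market data.
prices = 100 * np.cumprod(1 + np.random.default_rng(0).normal(0, 0.01, 2500))

# The parameter sweep: one backtest per (fast, slow) window pair.
results = {
    (fast, slow): backtest_ma_crossover(prices, fast, slow)
    for fast in range(5, 50, 5)
    for slow in range(60, 200, 20)
}
best = max(results, key=results.get)
print(f"best (fast, slow) = {best}, return = {results[best]:.2%}")
```

Every backtest in the sweep is independent of the others, which is what makes the workload embarrassingly parallel and such a natural fit for GPUs.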

Financial trading algorithms make up about 90 percent of public trading today, according to the Global Algorithmic Trading Market 2016–2020 report, and quants now control about a third of all trading on the U.S. stock markets, according to the Wall Street Journal.

“The workload in this case is a big data and big compute kind of workload,” Lankford said. “…a great deal of the trading…these days is automated, using robots, that’s true on the trading side and increasingly so on the investment side. A consequence of that competition is that there is a lot of pressure on firms to come up with clever algorithms for those robots, and the half-life of a given trading strategy gets shorter all the time. So a firm will come out with a strategy and make money with it for a while, and then the rest of the market catches on or counteracts it, and the firm has to go back to the drawing board. So this is about the drawing board.”

Beyond the throughput power of its GPUs, Nvidia attributed the benchmark record to advancements in its software, specifically around Python, that reduce the complexity of GPU programming. The benchmark results were achieved with 16 Nvidia V100 GPUs in a DGX-2 system (along with two Intel Xeon Platinum 8168 processors and 30TB of NVMe SSDs) and Python using Nvidia CUDA-X AI software and Nvidia RAPIDS, software libraries designed to simplify GPU acceleration of common Python data science tasks. Also included in the software stack: Numba, an open-source compiler that translates a subset of Python into machine code, allowing data scientists to write Python that is compiled to the GPU’s native CUDA and extending the capabilities of RAPIDS, according to Nvidia.
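To illustrate the role Numba plays, here is a small, hypothetical sketch of a GPU parameter sweep written entirely in Python: the @cuda.jit decorator compiles the function to a native CUDA kernel, and each GPU thread backtests one candidate parameter value. The momentum rule and data are invented for illustration and are not STAC's benchmark code; running it requires a CUDA-capable GPU with Numba installed.

```python
import numpy as np
from numba import cuda

@cuda.jit
def sweep_kernel(prices, thresholds, pnl):
    # Numba compiles this Python function to a native CUDA kernel;
    # each GPU thread evaluates one candidate threshold.
    i = cuda.grid(1)
    if i < thresholds.size:
        wealth = 1.0
        for t in range(1, prices.size - 1):
            ret = (prices[t] - prices[t - 1]) / prices[t - 1]
            if ret > thresholds[i]:          # toy momentum rule
                wealth *= prices[t + 1] / prices[t]
        pnl[i] = wealth - 1.0

rng = np.random.default_rng(1)
prices = (100 * np.cumprod(1 + rng.normal(0, 0.01, 10_000))).astype(np.float32)
thresholds = np.linspace(0.0, 0.03, 4_096).astype(np.float32)
pnl = np.zeros_like(thresholds)

# Numba handles host-to-device copies; forall picks a launch configuration.
sweep_kernel.forall(thresholds.size)(prices, thresholds, pnl)
print("best threshold:", thresholds[pnl.argmax()])
```

The point of the sketch is the workflow, not the strategy: a quant writes ordinary-looking Python, and the sweep over thousands of parameter values runs as thousands of parallel GPU threads.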

John Ashley, Nvidia’s director of global financial services strategy, said that while Nvidia has worked for several years with hedge funds on backtesting simulation in C/C++, its work in Python on the DGX-2 lets it use “our flagship deep learning server optimized for deep learning training, optimized for this kind of hyper-parameter tuning.”

Source: STAC (SUT ID: NVDA190425)

“The key point is we’re able to do this in Python,” said Ashley. “We could have done this at almost any time with CUDA, but Python makes this accessible to a huge community of data scientists who aren’t comfortable in C++, who don’t feel maximally productive writing their algorithms in C, but who are used to day-in day-out working in Python. And because of our investments in AI and under the RAPIDS umbrella in machine learning, and specifically in working with open source technologies like the Apache Arrow Project on the CUDA dataframe, that is an open source way to leverage this with the Python environment…

“That’s really the driver for now. We’re on a journey at Nvidia around accelerating data science in general and the open source libraries have gotten to the point where we can do the whole thing in Python.”
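As a rough illustration of what “the whole thing in Python” looks like in practice, the sketch below uses RAPIDS cuDF, the Arrow-backed CUDA dataframe mentioned above, to run a pandas-style computation on the GPU. The file and column names are hypothetical, and it assumes a machine with a CUDA GPU and RAPIDS installed.

```python
import cudf  # RAPIDS CUDA dataframe library, built on Apache Arrow

# Hypothetical tick data with "symbol", "price", and "volume" columns;
# read_csv parses the file straight into GPU memory.
ticks = cudf.read_csv("ticks.csv")

# Pandas-style column math and groupby, all executed on the GPU.
ticks["notional"] = ticks["price"] * ticks["volume"]
traded = ticks.groupby("symbol")["notional"].sum()
print(traded.sort_values(ascending=False).head())
```

The code reads like ordinary pandas, which is the accessibility argument Ashley is making: the GPU acceleration comes from the library, not from the data scientist writing CUDA.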
