Accelerating Financial Computations on Multicore and Manycore Processors

By Michael D. McCool and Stefanus Du Toit

September 22, 2008

High-performance computation is a necessity in modern finance. In general, the current value of a financial instrument, such as a stock option, can only be estimated through a complex mathematical simulation that weighs the probability of a range of future possible scenarios. Computing the value at risk in a portfolio of such instruments requires running a large number of such simulations, and optimizing a portfolio to maximize return or minimize risk requires even more computation. Finally, these computations need to be run continuously to keep up with constantly changing market data.

Although a large amount of computation is a necessity, doing it efficiently is crucial since financial datacenters are under severe power and cooling constraints. Multicore processors promise improved computational efficiency within a fixed power and cooling budget. However, achieving high efficiency execution on these processors is non-trivial. In the case of finance, new algorithms are constantly being developed by application specialists called quantitative analysts (or “quants”). Time is literally money in finance, and so high-productivity software development is just as important as efficient execution.

In this article, we will discuss high-productivity strategies for developing efficient financial algorithms that can take advantage of multicore processors, including standard x86 processors as well as manycore processors such as GPUs and the Cell BE processor. These strategies can yield one or even two orders of magnitude improvement in performance per processor.

Multicore processors allow for higher performance at the same power level by supporting multiple lightweight processing elements or “cores” per processor chip. Scaling performance by increasing the clock speed of a single processor is inefficient since the power consumed is proportional to (at least) the square of the clock rate. At some point, it is not practical to increase the clock rate further, as the power consumption and cooling requirements would be excessive. The air-cooling limit in particular was reached several years ago, and clock rates are now on a plateau. In fact, clock rates on individual cores have been decreasing slightly as processor vendors have backed away from the ragged edge in order to improve power efficiency. However, achievable transistor density is still increasing exponentially, following Moore’s Law. This is now translating into an exponentially growing number of cores on each processor chip.

Processors from Intel and AMD supporting the x86 instruction set are now available with four cores, and six- and eight-core processors are expected soon. Manycore processors such as GPUs and the Cell BE can support significantly more cores, from eight to more than sixteen. In addition, each core in a modern multicore processor also supports vector processing, where one instruction can operate on a short array (vector) of data. This is another efficient way to increase performance via parallelism. Vector lengths vary significantly: current x86 processors and the Cell BE support four-way vectors, while GPUs support anywhere from five-way to thirty-two-way vectors. Vector lengths are also set to increase significantly on x86 processors, with the upcoming Intel AVX instruction set supporting eight-way vectors and the Intel Larrabee architecture supporting sixteen-way vectors.
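
To make vector processing concrete, the sketch below uses x86 SSE intrinsics, the four-way vector instructions mentioned above, to add two arrays of floats four elements at a time. It is a minimal illustration constructed for this article (the function name and the choice of unaligned loads are ours), not code from any particular platform.

    // Minimal sketch: four-way SIMD addition using x86 SSE intrinsics.
    // A single _mm_add_ps instruction adds four floats in parallel.
    #include <xmmintrin.h>  // SSE intrinsics

    void add_arrays(const float* a, const float* b, float* out, int n) {
        int i = 0;
        for (; i + 4 <= n; i += 4) {                   // four elements per step
            __m128 va = _mm_loadu_ps(a + i);           // unaligned load of lanes 0-3
            __m128 vb = _mm_loadu_ps(b + i);
            _mm_storeu_ps(out + i, _mm_add_ps(va, vb));
        }
        for (; i < n; ++i)                             // scalar cleanup for the tail
            out[i] = a[i] + b[i];
    }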

Developing software for multicore vectorized processors requires fine-grained parallel programming. A fine-grained approach is needed because the product of the number of cores and the vector length in each core, which defines the number of numerical operations that can be performed in each clock cycle, can easily reach into the hundreds: a sixteen-core processor with sixteen-way vectors, for example, already provides 256 parallel lanes. The other difference between modern multicore processors and past multi-processor parallel computers is that all the cores on a multicore processor must share a finite off-chip memory bandwidth. Optimizing the use of this limited resource is absolutely necessary to achieve significant scalability on multicore processors. In fact, in order to hide the latency of memory access it may be necessary to expose and exploit even more algorithmic parallelism, so that one part of a computation can proceed while another is waiting for data.

The financial community has significant experience with parallel computing in the form of MPI and other cluster workload distribution frameworks. However, MPI in particular is too heavyweight for the lightweight processing elements in multicore processors (not to mention manycore processors) and cannot, by itself, optimize memory usage or take advantage of the performance opportunities made available through vectorization. Some alternative strategies are needed to get the maximum performance out of multicore processors.

Consider now the structure of financial workloads. Option pricing is one of the most fundamental operations in financial analytics. More generally, the current value of an “instrument,” of which an option is one example, needs to be evaluated through probabilistic forecasting.

Monte Carlo methods are often used to estimate the current value of such instruments in the face of uncertainty. In a Monte Carlo simulation, random numbers are used to generate a large set of future scenarios. Each instrument can then be priced under each given future scenario, the value discounted back to the current time using an interest calculation (made complicated by the fact that interest rates can also vary with time), and the results averaged (weighted by the probability of the scenario) to estimate the current value.
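
As a minimal sketch of the process just described, the following prices a European call option by Monte Carlo under the standard geometric Brownian motion model, assuming (for simplicity, and unlike real workloads) a constant interest rate. The function and parameter names are illustrative, not drawn from any production library.

    // Minimal sketch: Monte Carlo pricing of a European call option under
    // geometric Brownian motion with a constant interest rate. All names
    // are illustrative; real systems use time-varying rates and curves.
    #include <algorithm>
    #include <cmath>
    #include <random>

    double price_european_call(double spot, double strike, double rate,
                               double vol, double maturity,
                               int num_paths, unsigned seed) {
        std::mt19937 rng(seed);                        // deterministic, repeatable
        std::normal_distribution<double> gauss(0.0, 1.0);
        const double drift     = (rate - 0.5 * vol * vol) * maturity;
        const double diffusion = vol * std::sqrt(maturity);
        double sum = 0.0;
        for (int i = 0; i < num_paths; ++i) {
            // One future scenario: the terminal asset price on this path.
            double terminal = spot * std::exp(drift + diffusion * gauss(rng));
            // Payoff of the call option under this scenario.
            sum += std::max(terminal - strike, 0.0);
        }
        // Average over scenarios and discount back to the present.
        return std::exp(-rate * maturity) * (sum / num_paths);
    }

Each iteration is independent of the others, which is why such “simple” Monte Carlo looks trivially parallel; the complications discussed next arise in the random number generation and in the final averaging.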

Simple versions of Monte Carlo seem trivially parallelizable, since each simulation can run independently of any other. However, even “simple” Monte Carlo simulations have complications. First, high-quality random numbers need to be generated, and each batch of parallel work must get a unique set of independent, high-quality random numbers. This is harder than it sounds: the currently accepted pseudo-random number generators, such as the Mersenne Twister, are intrinsically sequential algorithms and may involve hundreds of bytes of state.

Typically a lookup table of starting states needs to be generated so that the random number sequence can be restarted at different points in a parallel computation. Since restarting the state of a random number generator is significantly more expensive than stepping serially to the next value, in practice the parallelism is done over “batches” of Monte Carlo experiments, with each batch using a serial subsequence of the random number generator’s output. The size of the batch should be tuned to match the amount of local memory and number of cores in the processor. Also, despite the name, random number generators need to be deterministic and repeatable. For various reasons (including validation, legal and institutional), pricing algorithms need to give the same answer every time they run. Given these issues, some infrastructure that supports parallel random number generation in a consistent way is essential.
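
A minimal sketch of this batching scheme, using the C++11 std::mt19937 Mersenne Twister: a master generator is stepped serially, and a snapshot of its state is saved at the start of each batch, producing the lookup table of starting states. Each batch can then draw from its own copy on its own core, reproducing exactly the stream a serial run would produce. (The names and the use of std::mt19937 are our choices for illustration; note also that building the table is itself serial, which is one reason batch sizes must be tuned.)

    // Minimal sketch: a lookup table of generator starting states, so that
    // batches of Monte Carlo experiments can run in parallel yet reproduce
    // the exact stream of a single serial generator.
    #include <random>
    #include <vector>

    std::vector<std::mt19937> make_batch_states(unsigned seed,
                                                int num_batches,
                                                unsigned long long outputs_per_batch) {
        std::vector<std::mt19937> states;
        states.reserve(num_batches);
        std::mt19937 master(seed);
        for (int b = 0; b < num_batches; ++b) {
            states.push_back(master);           // snapshot: batch b starts here
            master.discard(outputs_per_batch);  // step serially to the next batch
        }
        return states;
    }
    // Batch b then runs on its own core, drawing from states[b]. Note that
    // outputs_per_batch counts raw generator outputs; distributions such as
    // std::normal_distribution may consume more than one output per draw.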

The last step in Monte Carlo algorithms can also be troublesome: averaging. First, high precision is often needed here. In practice, the results of millions of Monte Carlo experiments need to be combined. Unfortunately, a sum over more than a million numbers cannot easily be computed reliably in single precision alone, since single-precision numbers carry only about six to seven significant decimal digits. Fortunately, manycore accelerators have recently added double-precision capabilities. Second, different strategies for performing the summation, a form of what is often called “reduction,” are possible by exploiting the associativity of addition. No single parallel reduction strategy is optimal for all processors. As with random number generation, it is therefore useful for portability if reduction operations are abstracted and handled by a parallel runtime platform or framework.
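
One such strategy, sketched below under our own naming, is pairwise (tree) reduction: summing recursive halves keeps the partial sums at similar magnitudes, which loses far less precision than a naive running sum, and the two halves are independent, so they can be assigned to different cores.

    // Minimal sketch: pairwise (tree) reduction. Recursive halves keep
    // partial sums of similar magnitude, improving accuracy over a naive
    // left-to-right sum; the halves are independent and can run in parallel.
    #include <cstddef>

    double pairwise_sum(const double* x, std::size_t n) {
        if (n <= 8) {                       // small base case: sum serially
            double s = 0.0;
            for (std::size_t i = 0; i < n; ++i)
                s += x[i];
            return s;
        }
        std::size_t half = n / 2;           // the two recursive calls below are
        return pairwise_sum(x, half)        // independent, so a parallel runtime
             + pairwise_sum(x + half, n - half);  // may execute them concurrently
    }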

Not all Monte Carlo simulations are “simple.” More sophisticated examples manipulate data structures to allow the reuse of results, or use “particle filters” to iteratively focus computation on more important parts of the search space in order to improve accuracy. Simple Monte Carlo simulations often scale very well because they use relatively little memory bandwidth. More sophisticated versions that reuse results via data structures may not scale as well unless care is taken to ensure that memory access does not become a bottleneck. Reuse of results and theoretical improvements in convergence rates need to be weighed against the reduced efficiency of more complex algorithms. However, with some care taken to ensure that the data locality present in a complex algorithm is properly exploited, good scalability is possible even for algorithms with a lot of data reuse and communication.

In order to achieve significant performance improvement on multicore processors, two things are needed: first, efficient use of low-level operations such as vector instructions, and second, an appropriate choice of parallelization and data decomposition strategy. The latter is obviously important, but how can it be achieved without interfering with the former, or vice versa? The solution is a meta-strategy based on code generation. The dataflow pattern determines the decomposition strategy, and this is managed by one level of abstraction. After the computation has been laid out, it can be optimized for a particular set of low-level operations in a second stage of compilation.

Fortunately, good decomposition strategies can be designed for a relatively small number of recurring patterns. The goal is to implement each of these patterns once, encapsulate it, and then reuse it for every occurrence of the pattern. The trick is to abstract the strategies for dealing with these patterns without introducing additional runtime overhead. Staged code generation accomplishes this. First, a high-level program serves as scaffolding for describing the dataflow of the computation, but is not involved in the actual execution. Instead, the scaffolding serves only to collect the computation into components and organize it for vectorization. Once each component is collected, a second stage of code generation performs low-level optimizations. This strategy is simpler to implement than it sounds, given the support of a suitable software development platform.
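
The “implement the pattern once” idea can be suggested with an ordinary C++ template, written here purely for illustration: a map-reduce skeleton reused for any computation with that dataflow shape. A staged platform such as the one described would additionally generate vectorized, processor-specific code for the per-element and combining functions at runtime; that machinery is omitted from this sketch.

    // Minimal sketch: the map-reduce pattern captured once as a reusable
    // skeleton. A staged code generator would split the loop across cores
    // and vector lanes and emit processor-specific code; this serial
    // version only illustrates the encapsulation of the pattern.
    #include <cstddef>

    template <typename T, typename Map, typename Combine>
    T map_reduce(const T* in, std::size_t n, Map map, Combine combine, T init) {
        T acc = init;
        for (std::size_t i = 0; i < n; ++i)
            acc = combine(acc, map(in[i]));
        return acc;
    }

    // Example use: average payoff over n scenarios.
    //   double mean = map_reduce(payoffs, n,
    //       [](double p) { return p; },
    //       [](double a, double b) { return a + b; }, 0.0) / n;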

Multicore and manycore processors provide many opportunities for increased performance and greater efficiency. However, actually obtaining good scalability on any multicore processor requires both a fine-grained parallelization strategy and a dataflow design that optimizes memory usage. Memory bandwidth in particular is a limiting resource in multicore processors. Using a high-level framework, it is possible to abstract patterns of dataflow and strategies for dealing with them so they can be used efficiently, while still maintaining processor independence.

About the Authors

Dr. Michael McCool is chief scientist and co-founder of RapidMind and an associate professor at the University of Waterloo, where he continues to perform research within the Computer Graphics Lab. He has published widely; his research interests include high-quality real-time rendering, global and local illumination, hardware algorithms, parallel computing, reconfigurable computing, interval and Monte Carlo methods and applications, end-user programming and metaprogramming, image and signal processing, and sampling. He has degrees in Computer Engineering and Computer Science.

Stefanus Du Toit is chief architect and co-founder of RapidMind, and has led the development and evolution of the RapidMind platform since 2003. Stefanus has extensive experience in the areas of graphics, GPGPU, systems programming, and compilers. He holds a Bachelor of Mathematics degree in Computer Science.
