The post NVIDIA Tesla Matchoff: K40 Versus the K20X appeared first on HPCwire.
Compared to its previous high-end Kepler, the K20X, the NVIDIA Tesla K40 touts more memory, higher clock rates, and more CUDA cores. But how do these specs pay off in terms of actual performance improvements for real-world financial applications? This is what the Xcelerit team wanted to know, so they arranged a face-off between the K40 and the K20X using the Monte-Carlo LIBOR swaption portfolio pricer as the yardstick.
The hardware comparison breakdown is illustrated with this table:
| | Tesla K20X | Tesla K40 |
|---|---|---|
| SMX | 14 | 15 |
| CUDA Cores | 2,688 | 2,880 |
| Memory | 6 GB | 12 GB |
| Core Frequency | 732 MHz | 745 MHz |
| Max. Frequency | 784 MHz | 875 MHz |
| Memory Bandwidth | 250 GB/s | 288 GB/s |
Jörg Lotze, technical lead and co-founder at Xcelerit, explains that aside from the obvious differences in clock speeds, core count, and memory, the most significant enhancement in the K40 is a GPU Boost mode that turns up the frequency of the CUDA cores. Up to 17 percent higher frequency is possible as long as the device stays within its specified thermal envelope; exceeding that limit causes the clock to be automatically throttled. The K20X only allows a small clock boost of 7 percent.
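The boost percentages follow directly from the base and maximum clocks in the spec table above. A quick sanity check (frequencies taken from the table, results rounded to whole percent):

```python
# GPU Boost headroom: maximum clock relative to base clock.
# Frequencies (MHz) are taken from the published spec table.
clocks = {
    "Tesla K20X": (732, 784),  # (base, max)
    "Tesla K40": (745, 875),
}

for gpu, (base_mhz, max_mhz) in clocks.items():
    headroom = (max_mhz / base_mhz - 1) * 100
    print(f"{gpu}: {headroom:.0f}% boost headroom")
```

This prints roughly 7% for the K20X and 17% for the K40, matching the figures quoted above.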
The benchmark employs Monte-Carlo LIBOR swaption portfolio pricing. This is a common financial algorithm used to price a portfolio of LIBOR swaptions. It involves the simulation of thousands of possible future development paths for the LIBOR interest rate. For each of these paths, the value of the swaption portfolio is computed by applying a portfolio payoff function. Both the final portfolio value and an interest rate sensitivity value are obtained by computing the mean of all per-path values.
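The structure described above, simulating many rate paths, applying a payoff per path, and averaging, can be sketched as follows. This is a minimal toy illustration with a simplified lognormal rate step and a placeholder payoff, not Xcelerit's actual LIBOR market model pricer:

```python
import numpy as np

def price_portfolio(n_paths, n_steps=80, seed=42):
    """Toy Monte Carlo pricer: simulate rate paths, apply a payoff
    per path, then average. Dynamics and payoff are placeholders."""
    rng = np.random.default_rng(seed)
    dt = 0.25                       # time step size (assumption)
    vol = 0.2                       # flat volatility (assumption)
    rate = np.full(n_paths, 0.05)   # initial rate for every path

    for _ in range(n_steps):
        # Simple lognormal step standing in for full LIBOR dynamics.
        z = rng.standard_normal(n_paths)
        rate *= np.exp(-0.5 * vol**2 * dt + vol * np.sqrt(dt) * z)

    # Placeholder portfolio payoff: call-style payoff on the final rate.
    payoff = np.maximum(rate - 0.05, 0.0)
    return payoff.mean()            # portfolio value = mean over paths

print(price_portfolio(16 * 1024))
```

In the real pricer, the inner loop is where the per-path work dominates, which is why the workload parallelizes so well across GPU cores.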
For a high number of paths, the algorithm becomes compute bound, creating a scenario where the additional cores and higher clock speeds should create a significant performance boost.
The application was implemented with the Xcelerit software on two systems, each outfitted with dual Intel Xeon E5s and the target GPU.
From the blog:
> We measured the computation times for the Monte-Carlo LIBOR swaption portfolio pricer on one GPU of each system, pricing a portfolio of 15 swaptions over 80 time steps and using varying numbers of Monte-Carlo paths. The run time of the full algorithm – including random number generation, data transfers, core computation, and reduction – is compared for single and double precision in the graph below. All these computation steps are running on the GPU, so the difference in the used CPUs does not affect the benchmark results.
With the default clock frequency settings, the K40 returned a speedup of between 1.15 and 1.18 times. When the team tested the application with the frequency dialed up all the way, the K40's advantage was even more pronounced, between 1.21 and 1.28 times.
The Xcelerit team created this chart with several notable points of comparison:
| Paths | Speedup (def. clock, single) | Speedup (def. clock, double) | Speedup (max. clock, single) | Speedup (max. clock, double) |
|---|---|---|---|---|
| 16K | 1.15x | 1.17x | 1.21x | 1.21x |
| 256K | 1.15x | 1.17x | 1.21x | 1.26x |
| 1024K | 1.15x | 1.18x | 1.22x | 1.28x |
The benchmarking results show that the K40 provides a significant performance improvement for this real-world financial application, up to 1.28x with the higher clock speed enabled. The Xcelerit rep notes that the speedup is fairly constant across path counts, too, indicating that even small workloads benefit from the new GPU. “Together with the doubled memory capacity, this makes a strong case for the Tesla K40 GPU,” he writes.
The post GPUs Show Big Potential to Speed Pricing Routines at Banks appeared first on HPCwire.
In April, Xcelerit reported on the promising experiment conducted by the Quantitative Risk and Valuation Group (QRVG) at HSBC, which reported more than $2.6 trillion in assets in 2012. The QRVG is responsible for running Credit Value Adjustment (CVA) processes every night over HSBC’s entire portfolio to compute its risk exposure, per Basel III requirements.
Currently, it takes several hours to run the CVA processes on a grid of Intel Xeon processors. Eurico Covas, Head of QRVG Development and Hedge Accounting Systems at HSBC, wanted to see whether it was possible to use GPUs to run this calculation on an intra-day rather than an overnight basis, according to Xcelerit’s blog post.
HSBC has major investments in the code that drives the CVA workload on traditional Intel processors, but its developers lack the CUDA expertise needed to program NVIDIA’s GPUs. “We had heard that Xcelerit offered an easy way to get our existing code to drive GPUs at their maximum speeds,” Covas said in the blog post.
Within several days, a single developer had identified a promising section of the application that drives the CVA processes to try out on the GPUs. Transitioning the code to CUDA involved little more than inserting Xcelerit’s API calls into the existing code, according to Xcelerit.
Next, the QRVG group set out to test some pricing calculations on the GPUs. In one example, Xcelerit reports that the QRVG took a set of 10,000 swap instruments and priced it for a set of 1,000 Monte-Carlo scenarios at 26 time steps, for a total of 260 million individual calculations.
When the pricing routine was run on a single Tesla K20 GPU, it ran 19 times faster than on a 4-core Intel Xeon E5620 CPU, according to a performance analysis. When run on a system with 3 GPUs, it scaled almost linearly, and completed the work 57 times faster than the Intel CPU.
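A quick sanity check of the figures reported above (the workload size and the speedup ratios come from the article; the efficiency calculation is ours, and the reported speedups are presumably rounded):

```python
# Workload: 10,000 swaps x 1,000 Monte-Carlo scenarios x 26 time steps.
n_calcs = 10_000 * 1_000 * 26
print(n_calcs)  # 260,000,000 individual calculations

# Reported speedups vs. a 4-core Intel Xeon E5620.
single_gpu_speedup = 19   # one Tesla K20
three_gpu_speedup = 57    # system with 3 GPUs
n_gpus = 3

# Scaling efficiency: actual multi-GPU speedup vs. perfect linear scaling.
efficiency = three_gpu_speedup / (n_gpus * single_gpu_speedup)
print(f"Scaling efficiency: {efficiency:.0%}")
```

With the rounded figures as published, the efficiency works out to 100%, consistent with the "almost linear" scaling the article describes.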
While it was just a small experiment that involved a small and relatively simple component of the complex CVA processes, it demonstrated a potential for big savings for HSBC. A full set of GPU-enabled systems would cost about $22,000, compared to the $1 million for a 600-CPU grid, according to a Forbes article on the HSBC experiment.
“We are just dipping our toes in the water on this,” Covas says in the Xcelerit blog post. “Our quant teams have a voracious appetite for computing power and obviously using GPUs offers a cost-effective solution to that problem.”
As regulators continue to strengthen banking laws and tighten capital requirements, banks will demand faster and more efficient hardware to run CVA and risk calculations across their holdings. GPU computing, with its ability to cut both hardware expenditures and electricity costs, may provide part of the solution.