Credit Modeling with Supercomputing

By Bill Blake

September 14, 2007

The creation and deployment of new numerical methods for economic and financial modeling is becoming a critical competitive weapon for banks, hedge funds and other investment firms. From a high performance computing (HPC) perspective, when quantitative analysts are asked how fast these new computations need to be processed, their answer is usually “at least fifteen minutes faster than our competitors with a lot of extra credit for finishing before the close of daily trading.” Consequently, computing requirements on Wall Street are growing exponentially as algorithms and models become more complex to support new investment opportunities, while incorporating ever larger data sets.

But the desktop computers used to develop these ever-growing financial codes are inadequate for full-scale production deployment, prompting investment firms to turn to HPC systems such as parallel servers, clusters or grids. Fortunately, these systems now employ cost-effective multi-core processors from Intel and AMD, and as a result parallel supercomputing is finally accessible to Wall Street firms.

The problem is that the new parallel hardware is like a fast highway leading to a software wall. Parallel HPC systems are unfamiliar platforms to most financial analysts, who are accustomed to producing their models with popular mathematical tools such as MATLAB, Python and R. Running these models on parallel systems typically requires a team of highly trained programmers to rewrite hundreds of lines of analyst-written VHLL code into thousands of lines of complex C, C++ or FORTRAN using the Message Passing Interface (MPI) or equivalent manual parallelization techniques. This redesign can take months, and it prevents the interactive experimentation and refinement of models that financial analysts require. Programmers versed in these parallel coding techniques are not only expensive; they are in short supply in the financial services sector.
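To make that gap concrete, consider a deliberately tiny sketch in Python (one of the analyst languages named above): the same Monte Carlo estimate written once in plain NumPy, as an analyst might prototype it on the desktop, and once with explicit MPI parallelization via mpi4py. The toy lognormal payoff, function names and file name are illustrative assumptions; a production model would be orders of magnitude larger, which is exactly the point.

```python
# A minimal, hypothetical sketch: the same estimate written twice.
import numpy as np

def serial_estimate(n_paths, seed=0):
    """Desktop prototype: average call payoff over simulated prices."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_paths)
    s_t = 100.0 * np.exp(-0.5 * 0.2**2 + 0.2 * z)        # one-step lognormal move
    return float(np.maximum(s_t - 100.0, 0.0).mean())    # payoff of a 100-strike call

# The manually parallelized port (run with: mpiexec -n 8 python mc.py).
# Even this toy case needs rank bookkeeping, per-rank seeding and a
# reduction step; at production scale this is the months-long rewrite.
from mpi4py import MPI

def mpi_estimate(n_paths):
    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()
    local = serial_estimate(n_paths // size, seed=rank)  # independent stream per rank
    return comm.allreduce(local, op=MPI.SUM) / size      # combine partial means
```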

Consider the case of Julius Finance, a Wall Street research company that specializes in credit modeling analysis. The company focuses on credit derivative products, analyzing the relative valuation of synthetic collateralized debt obligations (CDOs). The computationally challenging analysis of credit factors such as spread, credit rating, foreign exchange and interest rates across a wide variety of corporate investments has made pricing in the credit derivative market a black art at best.

Until now, investment firms have priced these products using copula models, a popular approach for modeling dependencies between random variables thanks to their relative mathematical simplicity. But the trade-off for this simplicity has been inconsistent, unconvincing results. “Existing mathematical frameworks for CDO valuation are far from compelling…to put it mildly,” says Peter Cotton, CEO of Julius Finance. “This is not surprising, as rigorous evaluation of credit models is prohibitively time consuming in any conventional research setup.”
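The article does not say which copula family firms favored, but the one-factor Gaussian copula was the de facto standard for synthetic CDO pricing in this era, so a brief Python sketch of it may help; the default probability and correlation below are illustrative assumptions only.

```python
# Hedged sketch of a one-factor Gaussian copula (an assumed, era-typical
# choice, not any particular firm's model). Defaults correlate through a
# shared market factor M; each name also gets an idiosyncratic shock Z.
import numpy as np
from scipy.stats import norm

def simulate_defaults(n_names, n_sims, p_default, rho, seed=0):
    rng = np.random.default_rng(seed)
    m = rng.standard_normal((n_sims, 1))             # common market factor
    z = rng.standard_normal((n_sims, n_names))       # idiosyncratic shocks
    x = np.sqrt(rho) * m + np.sqrt(1.0 - rho) * z    # latent creditworthiness
    return x < norm.ppf(p_default)                   # True where a name defaults

# Loss distribution for a 125-name portfolio, 5% default probability, rho = 0.3:
losses = simulate_defaults(125, 100_000, 0.05, 0.3).mean(axis=1)
```

Rigorously evaluating variants of such a model across many correlation and spread assumptions is precisely the kind of computation Cotton describes as prohibitive in a conventional research setup.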

Cotton knew that employing new, more sophisticated algorithmic models on massive amounts of variable financial data would give the company a tremendous competitive advantage when it came to making more accurate predictions about a portfolio’s potential.

The company installed a Linux-based cluster to provide the necessary processing power and memory capacity. But rather than employ computer scientists to parallelize the models, Julius Finance took a different approach, using Star-P software to transparently bridge analysts’ desktops with the Linux cluster. This way, the company’s analysts could continue working in their familiar MATLAB environment while their applications ran on the parallel cluster without reprogramming.

This interactive supercomputing approach allows continual feedback and refinement from prototype to production, resulting in higher quality models and algorithms and, ultimately, much more accurate portfolio predictions. The company gained a quantum leap in computational performance to handle massive data sets and model complexities without losing the interactivity and ease of use of the desktop environment. “We took this approach to reduce prototyping time and facilitate memory intensive experiments…looking under more rocks, as it were, and finding very interesting things,” says Cotton.
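Star-P’s own syntax is not shown in the article, so as a rough stand-in, here is what the same “write desktop array code, run it on the cluster” pattern looks like in a modern Python analogue, the Dask library. This is my substitution for illustration, not the tooling Julius Finance used, and the scheduler address is a placeholder.

```python
# Rough modern analogue of the desktop-to-cluster bridge (not Star-P itself):
# NumPy-style code written as if local, partitioned across workers by Dask.
import dask.array as da
from dask.distributed import Client

client = Client("scheduler-host:8786")  # placeholder cluster scheduler address

# A million simulated return observations for 500 names, chunked across workers.
returns = da.random.standard_normal(size=(1_000_000, 500), chunks=(100_000, 500))
corr = da.corrcoef(returns.T)   # lazily builds a task graph; nothing runs yet
result = corr.compute()         # executes on the cluster; the 500x500 answer
                                # comes back to the desktop as a NumPy array
```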

Julius Finance is part of a growing trend on Wall Street that is establishing HPC systems as a critical resource in the IT data center. The reason: as financial applications become more complex and compute-intensive, the ability to deliver real-time results diminishes with desktop-bound computing. And the big challenge on Wall Street is providing actionable financial analysis before the window of opportunity closes. Shrinking the “time to solution at full scale” can offer tremendous competitive differentiation to investment firms.

Beyond the specific area of credit modeling, speeding up computations and scenario analyses is critical across financial services, from trading desks to risk management desks, because each component, while a relatively small part of the overall environment, is potentially computationally expensive. Whether decisions are made in near real time, on an hourly basis, or at the end of the day, they could often be improved by including more trajectories and more scenarios. The models are also dynamic, with frequent updates and new parameters, so flexibility in both algorithm development and production deployment is key.

This new interactive supercomputing model generalizes to a variety of financial analytical applications, ranging from numerically intensive workloads in simulation, optimization and valuation to data-intensive workloads that detect patterns for fraud prevention and trading opportunities. Examples include:

Monte Carlo Simulation — These simulations have many advantages, including ease of implementation and applicability to the multi-dimensional problems commonly encountered in finance. However, Monte Carlo calculation is very time consuming, since it requires simulating many trajectories under multiple parameter settings (a minimal sketch of this pattern appears after this list).

Portfolio Optimization — Taking an interactive supercomputing approach, analysts can run their models on parallel systems to optimize thousands of individual portfolios overnight based on the previous day’s trading results. Commercial optimization libraries such as Axioma or CPLEX, or open source alternatives, can typically be plugged in and executed in parallel, all from within the analyst’s desktop application (this fan-out is also sketched after the list).

Valuation of Financial Derivatives — Valuing financial derivatives is computationally intensive, requiring large amounts of computer time. A re-insurance firm, for example, may need to value and compute hedge strategies for hundreds of thousands of policyholders in its portfolio on a regular and timely basis. Analysts need to be able to explore new valuation methodologies from their desktops, using high performance computers to run billions of complex scenarios.

Detection of Credit Card Fraud — The rise of identity theft together with the popularity of online shopping has resulted in a huge increase in credit card fraud. As thieves become increasingly shrewd in exploiting security weaknesses, banks and credit card companies need to be extremely agile to stay ahead of them. Parallel HPCs enable a bank to easily run more sophisticated fraud detection algorithms against tens of millions of credit card accounts.

Hedge Fund Trading — In balancing a large portfolio of stocks, analysts need to search for short- and long-term patterns, identify correlations between securities, and develop forecasts. Intense computations are required against terabyte-sized “tick store” databases — potentially a decade or more of trading data for thousands of securities. HPCs allow for faster reaction time to market conditions, enabling analysts to evaluate more sophisticated algorithms that take into account larger data sets.
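As promised above, here is a minimal Python sketch of the Monte Carlo and valuation pattern: one pricing kernel swept across a grid of market scenarios, each scenario an independent simulation. The geometric Brownian motion model, vanilla call payoff and parameter grid are illustrative assumptions, not any firm’s methodology.

```python
# Minimal sketch: many trajectories under multiple parameters. Each
# (volatility, rate) scenario is an independent Monte Carlo run, so the
# grid parallelizes trivially across cores (or, by extension, a cluster).
from concurrent.futures import ProcessPoolExecutor
from itertools import product
import numpy as np

def price_call(args):
    """Monte Carlo price of a European call under GBM for one scenario."""
    spot, vol, rate, strike, t, n_paths, seed = args
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_paths)
    s_t = spot * np.exp((rate - 0.5 * vol**2) * t + vol * np.sqrt(t) * z)
    return float(np.exp(-rate * t) * np.maximum(s_t - strike, 0.0).mean())

if __name__ == "__main__":
    vols = np.linspace(0.1, 0.5, 20)
    rates = np.linspace(0.01, 0.08, 20)
    scenarios = [(100.0, v, r, 100.0, 1.0, 200_000, i)
                 for i, (v, r) in enumerate(product(vols, rates))]
    with ProcessPoolExecutor() as pool:
        prices = list(pool.map(price_call, scenarios))  # 400 independent runs
```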
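The portfolio optimization item follows the same fan-out shape. In this sketch the closed-form minimum-variance solve is a stand-in for a call into a commercial optimizer such as Axioma or CPLEX; the portfolio sizes and counts are illustrative.

```python
# Thousands of independent portfolio problems dispatched in parallel.
# The closed-form minimum-variance weights (w proportional to inv(Cov) @ 1)
# stand in for a commercial optimizer call.
from concurrent.futures import ProcessPoolExecutor
import numpy as np

def optimize_one(cov):
    """Minimum-variance weights for one portfolio's covariance matrix."""
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)
    return w / w.sum()                            # normalize weights to sum to 1

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    covs = []
    for _ in range(1_000):                        # one covariance per portfolio
        a = rng.standard_normal((50, 50))
        covs.append(a @ a.T + 50.0 * np.eye(50))  # keep it well-conditioned
    with ProcessPoolExecutor() as pool:           # overnight batch, one task each
        weights = list(pool.map(optimize_one, covs))
```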

Until now, investment firms faced an “either-or” dilemma: live with the performance limitations of their desktop systems, or engage a team of programmers to re-code their algorithms for powerful parallel servers or clusters. That situation changes with new interactive supercomputing models that offer a “both-and” opportunity, combining the productivity of easy-to-use desktop development with a seamless transition to deploying large, complex financial simulations on parallel servers. Analysts can focus on rapidly delivering the most accurate, comprehensive and actionable intelligence by leveraging abundant parallel system resources without the need for scarce human ones.

-----

About the Author

Bill Blake is the Chief Executive Officer of Interactive Supercomputing Inc. (ISC). He brings more than two decades of senior executive experience in developing high performance computing systems. He joins ISC from Netezza, where he was senior vice president of product development for the high-performance data warehouse appliance company. Bill previously was vice president of high performance technical computing at Compaq, where he led development and marketing efforts. He received undergraduate and graduate degrees in Electrical Engineering at the Lowell Technological Institute, and is a member of the Institute of Electrical and Electronics Engineers (IEEE), the Association for Computing Machinery (ACM), and the American Association for Artificial Intelligence. Bill is a member of the board of directors of supercomputing pioneer Cray Inc., as well as Etnus Inc., a provider of analytical software for developing complex computer code.
