STAC Floats ML Benchmark for Financial Services Workloads

By John Russell

January 16, 2019

STAC (Securities Technology Analysis Center) recently released an ‘exploratory’ benchmark for machine learning, which it hopes will evolve into a firm benchmark or suite of benchmarking tools for comparing the performance of machine learning and deep learning workflows for financial applications across systems. The new report, Toward business-driven ML benchmarks: An NLP example, examined performance on different Google Cloud instances.

“This study was designed to illustrate how STAC Benchmarks for machine learning (ML) can be constructed and used. It is also intended to help data scientists and data engineers know what to expect when using the data science tools and cloud products of this project and how to avoid common pitfalls. The workload is topic modeling of SEC Form 10-K filings using Latent Dirichlet Allocation (LDA), a form of natural language processing (NLP),” according to the report.

STAC used the workload to explore the question of scale-up versus scale-out in a cloud environment on three systems under test (SUTs):

  • A single Google Cloud Platform (GCP) n1-standard-16 instance with Skylake and RHEL 7.6
  • A single GCP n1-standard-96 instance with Skylake and RHEL 7.6
  • A Google Cloud Dataproc (Spark as a service) cluster containing 13 x n1-standard-16 Skylake instances (1 master and 12 worker nodes) and Debian Linux 8

STAC’s foray into ML/DL benchmarking was presented with both caution and ambition: “While we hope these results are informative, it is important to understand what they are not. They are not competitive benchmark results of the sort readers are accustomed to finding in STAC Reports. No vendors contributed to optimization of the SUTs, so we can be fairly certain that they don’t represent the best possible results. As soon as the [STAC] Council adopts these or other benchmark specifications for ML, the competitive benchmark numbers will begin to flow.”

Extracting useful information from various sources – regulatory filings, company reports, news, etc. – has a long history in financial services, and in recent years various AI approaches have increasingly been pressed into service. The latest report notes the challenge ML presents:

 “…There are dozens upon dozens of ML algorithms; at least ten ML frameworks or libraries with implementations of those algorithms; nearly two dozen processor architectures vying for ML workloads (yes, you read that right); infrastructure-as-a-service and machine-learning-as-a-service offerings from all the major cloud providers; and countless software and software-as-a-service providers promising to simplify, accelerate, or otherwise enhance machine learning workflows. Data scientists and the technologists that support them face a tyranny of choice.

“The mission of the STAC Benchmark Council is to fight such tyranny. The Council develops benchmark standards that are based on real world use cases and that measure things that matter to a business (in the case of machine learning, those are primarily time to market, cost, and model quality, as discussed later in this report). This enables customers, vendors, and STAC to make apples-to-apples comparisons of techniques and technologies, thus making architectural and product choices easier for customers. It also gives the vendor community use cases developed by multiple customers (like a multi-customer POC) on which they can focus product development.”

The full study is available to STAC members; however, the STAC Study – Excerpts document is freely available for download after registering and is fascinating. Issues around measuring performance, cost, and quality are all tackled. Google (cloud resources) and Intel (funding) helped support the project. Presented below are snippets of the material contained in the excerpts report.

STAC compared performance on three instances (details below). “We defined three dataset sizes, as shown in Table 2. The first, 1/3 of a year, represents the sort of small subset that a quant might use for quick and dirty modeling before initiating a search on the full dataset of interest. The largest dataset size in this project was 3 years. This is a realistic size with manageable costs and time requirements for a benchmark project. In practice a firm may want to use substantially more, perhaps 10 or even 20 years, or perhaps compute models for a rolling 3-year window over a 10- or 20-year interval. Most firms will run this kind of workload many times, which raises the stakes.”
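As a rough illustration of that last point, the number of model builds grows quickly once rolling windows enter the picture. The sketch below is illustrative only; the year range and one-year step are assumptions, not part of the STAC study:

```python
# Illustrative only: count the 3-year modeling windows implied by a rolling
# analysis over a hypothetical 20-year interval with a 1-year step.
years = list(range(1999, 2019))                                   # 20 years, hypothetical
windows = [(start, start + 2) for start in years if start + 2 <= years[-1]]
print(f"{len(windows)} three-year windows, e.g. {windows[:3]} ...")
```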

Figure 1. A benchmark of the complete business problem would extend from raw data all the way through to simulated P&L. We hope it is obvious why that would be too large a scope for an initial project (and probably too large for any useful benchmark.) So the question was which parts to focus on. Source: STAC Excerpts Report derived from STAC report Toward business-driven ML benchmarks: An NLP example

All three solutions used the same analytics software stack: Python 3.5, with the Python 3 libraries spaCy 2.0.12, scikit-learn 0.20.0, Intel MKL 2018.0.3, and joblib 0.12.3.
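For readers who want a concrete picture of the workload, here is a minimal sketch of a single LDA topic-modeling "experiment" using the same scikit-learn and joblib libraries listed above. The toy documents, parameter grid, and function names are stand-ins for illustration, not the STAC implementation (which also uses spaCy for preprocessing of the 10-K text):

```python
# A minimal sketch of one LDA topic-modeling experiment, parallelized across
# vCPUs with joblib as on the single-instance SUTs. Documents and parameters
# below are hypothetical placeholders, not the STAC workload itself.
from joblib import Parallel, delayed
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

documents = [
    "risk factors include interest rate volatility and credit exposure",
    "the company develops software for cloud infrastructure and analytics",
    "oil and gas exploration carries environmental and regulatory risk",
]  # in practice: preprocessed text of SEC Form 10-K filings

# Bag-of-words term counts, the usual input to LDA
vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(documents)

def run_experiment(n_topics):
    """Fit one LDA model (one 'experiment') and return its perplexity."""
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
    lda.fit(counts)
    return n_topics, lda.perplexity(counts)

# Each experiment uses a single core, so a work set of experiments is spread
# across the instance's vCPUs with joblib.
results = Parallel(n_jobs=-1)(delayed(run_experiment)(k) for k in (2, 3, 4))
print(results)
```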

To run this stack, two of the SUTs provided infrastructure as a service, and one provided Spark as a service. STAC described the Google instance configurations as follows:

“v16 – A single cloud instance representative of where a user might start when looking for something bigger than a laptop at a reasonable cost, in the absence of knowledge about how the workload scales. Configuration:

  • Google Compute Engine n1-standard-16 (16 vCPUs, 60 GB memory)
  • CPU platform Intel Skylake or better
  • 20 GB Google Persistent Disk as boot disk
  • 1 TB Google Persistent Disk mounted read-only as data disk
  • Red Hat Enterprise Linux 7.6

“v96 – A single cloud instance with the most vCPUs currently available. The point was to see how well the workload “scaled up” without the complexity of multiple nodes. Configuration:

  • Google Compute Engine n1-standard-96 (96 vCPUs, 360 GB memory)
  • CPU platform Intel Skylake or better
  • 20 GB Google Persistent Disk as boot disk
  • 1 TB Google Persistent Disk mounted read-only as data disk
  • Red Hat Enterprise Linux 7.6

“DP-v192 – Google Cloud Dataproc (Spark as a service), using multiple nodes to double the number of cores versus the v96, with autoscaling enabled in order to limit the cost of under-utilized cores. This SUT used Dataproc simply to get access to more cores on which to run a Python script. This way we only had to write a Spark wrapper around exactly the same code as we ran on the single instances. This is a common transition path for data scientists initially trying to scale out in the cloud, but since it is neither Spark- nor cloud-native, it probably doesn’t represent optimal use of the platform. Configuration:

  • Google Dataproc image 1.2.22 with autoscaling (alpha) and minimum CPU platform = Skylake (beta)
  • Debian 8
  • 13 x Google Compute Engine n1-standard-16 (16 vCPUs, 60 GB memory). One master node plus 12 worker nodes.
  • 60 GB Google Persistent Disk as boot disk for each node
  • Google Cloud Storage for the input datasets and persisted results”
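A minimal PySpark sketch of the "Spark wrapper" transition path described for DP-v192 might look like the following. The function body and work-set parameters are hypothetical placeholders for the unchanged single-instance code:

```python
# Hedged sketch: farm the same single-node experiment function out across a
# Dataproc cluster's cores via an RDD. Spark is used only to distribute
# independent Python tasks, which is why the report calls this approach
# neither Spark- nor cloud-native.
from pyspark.sql import SparkSession

def run_experiment(n_topics):
    """Stand-in for the unchanged single-instance LDA experiment code."""
    return n_topics, 0.0  # the real workload would return (n_topics, perplexity)

spark = SparkSession.builder.appName("lda-work-set").getOrCreate()
sc = spark.sparkContext

work_set = list(range(2, 110))                            # hypothetical experiment parameters
rdd = sc.parallelize(work_set, numSlices=len(work_set))   # one partition per experiment
results = rdd.map(run_experiment).collect()               # each Spark task runs one experiment
spark.stop()
```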

As the excerpts show, the results were interesting.

Table 3 shows the total elapsed time and the average cost per modeling experiment for each work set on each SUT. “For v16, we did not run the largest work set (216 experiments on 3 years of data) because the second largest (108 experiments on 3 years of data) took more than 15 hours, meaning the larger work set would take longer than a day. We arbitrarily considered the data scientist’s tolerance for elapsed time to be “overnight”, which is roughly 16 hours. At least that was our tolerance.” Source: STAC
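As a rough illustration of how an "average cost per modeling experiment" figure can be derived, consider the arithmetic below. The hourly rate is a made-up placeholder, not the study's number; only the 108-experiment, roughly 15-hour v16 run comes from the text above:

```python
# Illustrative arithmetic only: total instance cost for a work set divided by
# the number of experiments in it. The hourly price is a placeholder.
def cost_per_experiment(hourly_price_usd, n_instances, elapsed_hours, n_experiments):
    return hourly_price_usd * n_instances * elapsed_hours / n_experiments

# e.g., the 108-experiment work set that took roughly 15 hours on the single v16
print(round(cost_per_experiment(0.76, n_instances=1,
                                elapsed_hours=15.0, n_experiments=108), 3))
```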

The report also noted that the Google Cloud Dataproc SUT used its autoscaling feature; because that feature was still in alpha status, STAC policy precluded making those results public, so they appear only in the full study.

STAC offered these additional observations:

  • “While it’s easy to assume that one can accelerate a workload by throwing more cores at it, this isn’t always true. In fact, this study highlights a few cases where trying to exploit additional cores slowed a workload down.
  • “For a given code base and processor type, there is a lower bound of elapsed time that cannot be overcome by scaling up or out. Individual experiments in this implementation were not able to utilize more than one vCPU. Thus, even with a surplus of cores and no platform overhead, the elapsed time for each work set is gated by its longest-running experiment. The only way to shrink that time is to improve performance per vCPU through faster code or a faster processor.
  • “As documented in this study, v96 is preferable for some workloads while v16 is preferable for others, depending on the user firm’s priorities (operating cost vs data scientist cost vs time to market). Fortunately, the fact that Google Persistent Disk makes it possible to fire up any type of instance and access the same data as other instances makes it convenient to mix and match instance types according to the task at hand.”
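The second observation can be made concrete with a small scheduling model. This is a hedged sketch with made-up durations, not STAC's numbers: once idle vCPUs outnumber experiments, elapsed time collapses to the duration of the single longest experiment.

```python
# Model a work set's elapsed time with a simple greedy schedule and no overhead.
# Each experiment uses one vCPU, matching the implementation in the study.
def modeled_elapsed_time(experiment_durations, n_vcpus):
    loads = [0.0] * n_vcpus
    for d in sorted(experiment_durations, reverse=True):
        loads[loads.index(min(loads))] += d   # assign to the least-loaded vCPU
    return max(loads)

durations = [1.0, 2.0, 3.0, 4.0]                    # hours per experiment, hypothetical
print(modeled_elapsed_time(durations, n_vcpus=96))  # 4.0 -- gated by the longest experiment
print(modeled_elapsed_time(durations, n_vcpus=2))   # 5.0 -- fewer cores, scheduling dominates
```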

It will be interesting to monitor how the STAC community responds, how the exploratory benchmark evolves, and when vendors start using the STAC ML benchmark on their systems. There are, of course, many tests being used to assess the AI capabilities of systems. One new effort – the MLPerf benchmark suite for assessing training and inference performance, introduced last May – has attracted considerable support and recently released its first round of results (see the HPCwire article Nvidia Leads Alpha MLPerf Benchmarking Round). Another is aimed at large and leadership-class systems (see the HPCwire article The Deep500 – Researchers Tackle an HPC Benchmark for Deep Learning).

The STAC report offers the following assessment of its effort: “We think this [initial] implementation is good enough to yield technology comparisons that can be applied to the real world. While the implementation is constructed from mostly publicly available references and is perhaps not exactly what a firm would deploy (for example a firm might highly customize the preprocessing stage of the pipeline), we believe the algorithm is sufficiently representative of the real world with respect to performance and quality to make it a useful instrument to inform real algorithmic and architectural choices. We also think it is simple enough that STAC members (users and vendors) will be able to analyze and optimize its performance, as well as introduce new libraries and techniques, without a huge effort.”

Link to STAC report: https://stacresearch.com/topic_modeling_1
