STAC Floats ML Benchmark for Financial Services Workloads

By John Russell

January 16, 2019

STAC (Securities Technology Analysis Center) recently released an ‘exploratory’ benchmark for machine learning that it hopes will evolve into a formal benchmark or suite of benchmarking tools for comparing the performance of machine learning and deep learning workflows for financial applications across systems. The new report, Toward business-driven ML benchmarks: An NLP example, examined performance on different Google Cloud instances.

“This study was designed to illustrate how STAC Benchmarks for machine learning (ML) can be constructed and used. It is also intended to help data scientists and data engineers know what to expect when using the data science tools and cloud products of this project and how to avoid common pitfalls. The workload is topic modeling of SEC Form 10-K filings using Latent Dirichlet Allocation (LDA), a form of natural language processing (NLP),” according to the report.
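For readers who want to see the shape of that workload, here is a minimal, self-contained sketch of LDA topic modeling with scikit-learn, one of the libraries in the study's stack. The toy documents, vectorizer settings, and topic count are illustrative assumptions, not STAC's actual corpus, preprocessing, or hyperparameters.

```python
# Minimal LDA topic-modeling sketch with scikit-learn.
# Toy documents stand in for SEC Form 10-K text; all parameters
# here are illustrative, not the study's actual settings.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "risk factors include market volatility and interest rate exposure",
    "the company reported revenue growth driven by cloud services",
    "litigation and regulatory proceedings may affect operating results",
]

# Turn raw text into a bag-of-words document-term matrix.
vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(docs)

# Fit an LDA model; the number of topics is one of the hyperparameters
# a benchmark "experiment" would vary.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(X)
print(doc_topics)  # per-document topic mixtures
```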

STAC used the workload to explore the question of scale-up versus scale-out in a cloud environment on three systems under test (SUTs):

  • A single Google Cloud Platform (GCP) n1-standard-16 instance with Skylake and RHEL 7.6
  • A single GCP n1-standard-96 instance with Skylake and RHEL 7.6
  • A Google Cloud Dataproc (Spark as a service) cluster containing 13 x n1-standard-16 Skylake instances (1 master and 12 worker nodes) and Debian Linux 8

STAC’s foray into ML/DL benchmarking was presented with both caution and ambition: “While we hope these results are informative, it is important to understand what they are not. They are not competitive benchmark results of the sort readers are accustomed to finding in STAC Reports. No vendors contributed to optimization of the SUTs, so we can be fairly certain that they don’t represent the best possible results. As soon as the [STAC] Council adopts these or other benchmark specifications for ML, the competitive benchmark numbers will begin to flow.”

Extracting useful information from various sources – regulatory filings, company reports, news, and the like – has a long history in financial services, and various AI approaches have increasingly been pressed into service. The latest report notes the challenge ML presents:

 “…There are dozens upon dozens of ML algorithms; at least ten ML frameworks or libraries with implementations of those algorithms; nearly two dozen processor architectures vying for ML workloads (yes, you read that right); infrastructure-as-a-service and machine-learning-as-a-service offerings from all the major cloud providers; and countless software and software-as-a-service providers promising to simplify, accelerate, or otherwise enhance machine learning workflows. Data scientists and the technologists that support them face a tyranny of choice.

“The mission of the STAC Benchmark Council is to fight such tyranny. The Council develops benchmark standards that are based on real world use cases and that measure things that matter to a business (in the case of machine learning, those are primarily time to market, cost, and model quality, as discussed later in this report). This enables customers, vendors, and STAC to make apples-to-apples comparisons of techniques and technologies, thus making architectural and product choices easier for customers. It also gives the vendor community use cases developed by multiple customers (like a multi-customer POC) on which they can focus product development.”

The full study is available to STAC members; however, the STAC Study – Excerpts document is freely available for download after registering and is fascinating reading. It tackles issues around measuring performance, cost, and quality. Google (cloud resources) and Intel (funding) helped support the project. Presented below are snippets of the material contained in the excerpts report.

STAC compared performance on three instances (details below). “We defined three dataset sizes, as shown in Table 2. The first, 1/3 of a year, represents the sort of small subset that a quant might use for quick and dirty modeling before initiating a search on the full dataset of interest. The largest dataset size in this project was 3 years. This is a realistic size with manageable costs and time requirements for a benchmark project. In practice a firm may want to use substantially more, perhaps 10 or even 20 years, or perhaps compute models for a rolling 3-year window over a 10- or 20-year interval. Most firms will run this kind of workload many times, which raises the stakes.”
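As a quick illustration of that last scenario (our own arithmetic, not STAC's), sliding a 3-year window annually across a 10-year interval yields eight windows, each of which would be a full modeling run in its own right:

```python
# Rolling 3-year windows over a hypothetical 10-year interval.
years = list(range(2009, 2019))  # ten years of filings (illustrative)
window = 3
windows = [years[i:i + window] for i in range(len(years) - window + 1)]
print(len(windows))  # 8 rolling windows, each a separate modeling run
```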

Figure 1. A benchmark of the complete business problem would extend from raw data all the way through to simulated P&L. We hope it is obvious why that would be too large a scope for an initial project (and probably too large for any useful benchmark). So the question was which parts to focus on. Source: STAC Excerpts Report, derived from the STAC report Toward business-driven ML benchmarks: An NLP example

All three solutions used the same analytics software stack: Python 3.5; Python 3 library spaCy 2.0.12; Python 3 library Scikit-learn 0.20.0; Intel Python 3 library MKL 2018.0.3; Python 3 library Joblib 0.12.3.
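One plausible way these pieces fit together, sketched below under our own assumptions rather than from STAC's code, is joblib fanning independent, single-threaded LDA experiments out across vCPUs. The parameter grid and synthetic data are purely illustrative:

```python
# Hypothetical fan-out of independent LDA experiments with joblib.
# Synthetic counts stand in for the preprocessed 10-K corpus.
import numpy as np
from joblib import Parallel, delayed
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.RandomState(0)
X = rng.randint(0, 5, size=(200, 500))  # fake document-term counts

def run_experiment(n_topics):
    # One "experiment" = one hyperparameter setting on one vCPU.
    lda = LatentDirichletAllocation(n_components=n_topics,
                                    random_state=0, n_jobs=1)
    lda.fit(X)
    return n_topics, lda.perplexity(X)

# Fan the work set out across all available cores.
results = Parallel(n_jobs=-1)(
    delayed(run_experiment)(k) for k in (5, 10, 20, 40)
)
print(results)
```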

To run this stack, two of the SUTs provided infrastructure as a service and one provided Spark as a service. STAC described the Google instance configurations as follows:

“v16 – A single cloud instance representative of where a user might start when looking for something bigger than a laptop at a reasonable cost, in the absence of knowledge about how the workload scales. Configuration:

  • Google Compute Engine n1-standard-16 (16 vCPUs, 60 GB memory)
  • CPU platform Intel Skylake or better
  • 20 GB Google Persistent Disk as boot disk
  • 1 TB Google Persistent Disk mounted read-only as data disk
  • Red Hat Enterprise Linux 7.6

“v96 – A single cloud instance with the most vCPUs currently available. The point was to see how well the workload “scaled up” without the complexity of multiple nodes. Configuration:

  • Google Compute Engine n1-standard-96 (96 vCPUs, 360 GB memory)
  • CPU platform Intel Skylake or better
  • 20 GB Google Persistent Disk as boot disk
  • 1 TB Google Persistent Disk mounted read-only as data disk
  • Red Hat Enterprise Linux 7.6

“DP-v192 – Google Cloud Dataproc (Spark as a service), using multiple nodes to double the number of cores versus the v96, with autoscaling enabled in order to limit the cost of under-utilized cores. This SUT used Dataproc simply to get access to more cores on which to run a Python script. This way we only had to write a Spark wrapper around exactly the same code as we ran on the single instances. This is a common transition path for data scientists initially trying to scale out in the cloud, but since it is neither Spark- nor cloud-native, it probably doesn’t represent optimal use of the platform. Configuration:

  • Google Dataproc image 1.2.22 with autoscaling (alpha) and minimum CPU platform = Skylake (beta)
  • Debian 8
  • 13 x Google Compute Engine n1-standard-16 (16 vCPUs, 60 GB memory): one master node plus 12 worker nodes
  • 60 GB Google Persistent Disk as boot disk for each node
  • Google Cloud Storage for the input datasets and persisted results”
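A minimal sketch of that wrapper pattern might look like the following; the function body, parameter grid, and application name are our own illustrative assumptions, not STAC's code:

```python
# Hypothetical Spark "wrapper" that fans the same single-node Python
# code out across Dataproc cores, as the report describes.
from pyspark.sql import SparkSession

def run_experiment(params):
    # The unchanged single-instance code would run here: load the
    # preprocessed dataset, fit one LDA model, return its metrics.
    n_topics, seed = params
    return {"n_topics": n_topics, "seed": seed}  # placeholder result

spark = SparkSession.builder.appName("lda-work-sets").getOrCreate()
grid = [(k, s) for k in (5, 10, 20, 40) for s in range(27)]  # 108 runs

# One partition per experiment so each can occupy its own core.
results = (spark.sparkContext
           .parallelize(grid, len(grid))
           .map(run_experiment)
           .collect())
spark.stop()
```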

As you can see, the results were interesting.

Table 3 shows the total elapsed time and the average cost per modeling experiment for each work set on each SUT. “For v16, we did not run the largest work set (216 experiments on 3 years of data) because the second largest (108 experiments on 3 years of data) took more than 15 hours, meaning the larger work set would take longer than a day. We arbitrarily considered the data scientist’s tolerance for elapsed time to be “overnight”, which is roughly 16 hours. At least that was our tolerance.” Source: STAC

The report also noted that the Dataproc SUT used Google Cloud Dataproc's autoscaling feature; because that feature was still in alpha status, STAC policy precluded making those results public, though they are included in the full study.

STAC offered these additional observations (a small scheduling sketch follows the list):

  • “While it’s easy to assume that one can accelerate a workload by throwing more cores at it, this isn’t always true. In fact, this study highlights a few cases where trying to exploit additional cores slowed a workload down.
  • “For a given code base and processor type, there is a lower bound of elapsed time that cannot be overcome by scaling up or out. Individual experiments in this implementation were not able to utilize more than one vCPU. Thus, even with a surplus of cores and no platform overhead, the elapsed time for each work set is gated by its longest-running experiment. The only way to shrink that time is to improve performance per vCPU through faster code or a faster processor.
  • “As documented in this study, v96 is preferable for some workloads while v16 is preferable for others, depending on the user firm’s priorities (operating cost vs data scientist cost vs time to market). Fortunately, the fact that Google Persistent Disk makes it possible to fire up any type of instance and access the same data as other instances makes it convenient to mix and match instance types according to the task at hand.”
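The second observation is easy to verify with a back-of-the-envelope scheduling bound; the durations below are invented for illustration:

```python
# Lower bound on a work set's elapsed time when each experiment
# uses one vCPU: total work spread over the cores, but never less
# than the single longest experiment. Durations are made up.
durations = [3.0, 5.0, 8.0, 2.0, 7.0]  # hours per experiment

def elapsed_lower_bound(durations, cores):
    return max(max(durations), sum(durations) / cores)

for cores in (4, 16, 96):
    print(cores, "cores ->", elapsed_lower_bound(durations, cores), "hours")
# Beyond a point, more cores cannot push below the 8-hour experiment.
```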

It will be interesting to monitor how the STAC community responds, how the exploratory benchmark evolves, and when vendors start using the STAC ML benchmark on their systems. There are, of course, many tests being used to assess the AI capabilities of systems. One new effort – the MLPerf benchmark suite for assessing training and inference performance, introduced last May – has attracted considerable support and recently released its first round of results (see the HPCwire article Nvidia Leads Alpha MLPerf Benchmarking Round). Another is aimed at large and leadership-class systems (see the HPCwire article The Deep500 – Researchers Tackle an HPC Benchmark for Deep Learning).

The STAC report offers the following assessment of its effort: “We think this [initial] implementation is good enough to yield technology comparisons that can be applied to the real world. While the implementation is constructed from mostly publicly available references and is perhaps not exactly what a firm would deploy (for example a firm might highly customize the preprocessing stage of the pipeline), we believe the algorithm is sufficiently representative of the real world with respect to performance and quality to make it a useful instrument to inform real algorithmic and architectural choices. We also think it is simple enough that STAC members (users and vendors) will be able to analyze and optimize its performance, as well as introduce new libraries and techniques, without a huge effort.”

Link to STAC report: https://stacresearch.com/topic_modeling_1
