Stampede1 Reborn as BigTex, a Supercomputer for the Federal Reserve

By Aaron Dubrow

June 21, 2020

Alex Richter, a research economist from the Federal Reserve Bank of Dallas, seeks to unravel the non-linear impacts of the business cycle and monetary policy.

His research requires advanced computing to solve complicated mathematical and statistical problems. For several years, he used the modest-sized high performance computing cluster operated by the Federal Reserve Bank of Kansas City, the Ganymede cluster at the University of Texas at Dallas, and Stampede1 and Stampede2 at the Texas Advanced Computing Center (TACC). But he found himself needing more compute time. Moreover, he suspected there were many other research economists in the Federal Reserve System who could benefit from additional computing resources.

Knowing that TACC, 200 miles south in Austin, operated several of the world's largest supercomputers for open science research, Richter traveled to the center in July 2017 to see if he and his colleagues could gain greater access to the supercomputers there.

“I originally went down there thinking, since we’re in Dallas, and TACC is in Austin, maybe there’s a way that we could have some sort of partnership where we could get dedicated access to use Stampede,” Richter said.

He met with Dan Stanzione, TACC’s executive director, and, during a tour of the data center, noticed that several racks that had previously been part of Stampede1 were unplugged.

Cold-aisle containment for BigTex. (Credit: TACC)

The system — an Intel/Dell supercomputer with a peak speed of nearly 10 petaflops that debuted in 2012 as the seventh most powerful machine in the world — had recently been decommissioned, he learned. Twenty racks were available for donation. Would Richter be interested in taking them?

Richter was intrigued, but the idea was not without its challenges. For one, the Federal Reserve Bank of Dallas did not have the IT support needed to set up and run the machine, nor a data center to host it.

However, Richter was undeterred. The Federal Reserve accepted the donated equipment, and he spearheaded an effort to get support for a proof-of-concept experiment. TACC would provide the hardware; Chris Simmons, UT Dallas' head of research computing, would provide user support and help stand up the machine; and the Dallas Fed would establish a hosting environment. The collaboration would allow Federal Reserve economists from across the country to use the system.

The Dallas Fed rented space in a local data center, purchased two new servers to act as a login node and a temporary storage system, and set about creating BigTex, a supercomputer for Federal Reserve economists.

Economists Compute

Previously, the Federal Reserve System had operated several high performance computing environments containing, in total, about 5,400 compute cores. BigTex, which came online in July 2019, added 15,000 cores, roughly three times the Federal Reserve's existing capacity.

The Federal Reserve is the largest employer of research economists in the U.S., with 400 PhD-level staff. In less than a year, eight of the 12 Federal Reserve banks have signed up to use BigTex, and 60 of the 400 economists have used the system to date.

Some researchers, including Richter, are using BigTex to model the non-linear impacts of policy decisions or shocks to the system. In the past, economic models assumed simplified, linear responses to changes in interest rates or the state of the economy. With BigTex, researchers can tackle more realistic scenarios.

A recent paper from Richter and his collaborators, forthcoming in the Journal of Monetary Economics, explores various algorithms for estimating non-linear models in cases where the short-term nominal interest rate falls to zero, creating a 'kink' in most models.

“The study asks how well nonlinear solution and estimation techniques compare to linear and quasi-linear methods,” Richter said. “It was our most numerically intensive project to date.”
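
To see why a rate stuck at zero is hard for linear methods, consider a Taylor-type policy rule truncated at the zero lower bound. The sketch below is our illustration, not the paper's model; the coefficients (r_star, pi_star, phi_pi, phi_y) are standard textbook values chosen for the example.

```python
# Minimal sketch (illustrative, not the paper's model): a Taylor-type
# policy rule truncated at the zero lower bound (ZLB). The max() is the
# 'kink' that linear and quasi-linear methods cannot fully capture.

def notional_rate(inflation, output_gap, r_star=2.0, pi_star=2.0,
                  phi_pi=1.5, phi_y=0.5):
    """Unconstrained nominal rate implied by a textbook Taylor rule."""
    return (r_star + inflation
            + phi_pi * (inflation - pi_star)
            + phi_y * output_gap)

def policy_rate(inflation, output_gap):
    """Actual rate: the notional rate truncated at zero (the ZLB kink)."""
    return max(0.0, notional_rate(inflation, output_gap))

# In a deep recession the notional rate turns negative, but the actual
# rate is stuck at zero, so the economy's response becomes non-linear.
for gap in (1.0, -2.0, -6.0):
    print(f"output gap {gap:+.1f}%: "
          f"notional {notional_rate(1.0, gap):+.2f}%, "
          f"actual {policy_rate(1.0, gap):.2f}%")
```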

A team from the Federal Reserve Bank of New York, led by Marco Del Negro, is developing algorithms that quickly update previously trained and tested estimators with new data. This allows economists to rapidly gauge what the outcome of a new economic decision could be based on the latest information, without having to re-analyze decades of data. These algorithms make heavy use of parallel computing, and hence of BigTex.

They described their results in the Federal Reserve Bank of New York Staff Reports, August 2019.

“BigTex was a game-changer for us,” Del Negro said. “Without it we could have never finished our project.”
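
One general idea behind this kind of rapid updating can be illustrated with importance re-weighting: draws from a posterior estimated on old data are re-weighted by the likelihood of newly arrived observations, instead of re-estimating from scratch. The sketch below is our simplified illustration of that idea using a toy Gaussian model, not the New York Fed team's actual algorithm; the per-draw likelihood evaluations are independent, which is what makes the approach so parallel-friendly.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_likelihood(theta, new_data):
    """Toy Gaussian log-likelihood of the new observations given theta."""
    return -0.5 * np.sum((new_data - theta) ** 2)

# Pretend these draws came from an earlier, already-completed estimation.
old_posterior_draws = rng.normal(loc=1.0, scale=0.5, size=10_000)
new_data = rng.normal(loc=1.2, scale=1.0, size=20)

# Re-weight each old draw by the likelihood of the new data. These
# evaluations are independent across draws, so they parallelize trivially.
log_w = np.array([log_likelihood(th, new_data)
                  for th in old_posterior_draws])
w = np.exp(log_w - log_w.max())   # stabilize before exponentiating
w /= w.sum()

updated_mean = float(np.sum(w * old_posterior_draws))
print(f"posterior mean after folding in the new data: {updated_mean:.3f}")
```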

Serdar Birinci, an economist with the Federal Reserve Bank of St. Louis, has been exploring the best possible design of unemployment insurance (UI) payments during recessions and expansions. A more generous UI system mitigates the negative effects of job loss, but at the same time incentivizes staying unemployed. Using BigTex, he found that the best design of the system features more generous payment amounts and much longer payment durations in recessions, as in European policies.

“Analyzing the best design of the UI system requires solving a complex economic model under different values of UI replacement rates and UI payment durations,” Birinci said. “A joint determination of these two policy instruments requires solving the model thousands of times. Without access to BigTex, this would be almost impossible.”
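
The kind of sweep Birinci describes is embarrassingly parallel: each (replacement rate, duration) combination requires an independent model solve. A minimal sketch of that pattern is below; solve_model is a hypothetical stand-in for the real, expensive economic model, and the grid sizes are invented for illustration.

```python
import itertools
from concurrent.futures import ProcessPoolExecutor

import numpy as np

def solve_model(policy):
    """Placeholder for an expensive model solve; returns a welfare value."""
    replacement_rate, duration_months = policy
    # Toy objective with an interior optimum, standing in for model output.
    return (-(replacement_rate - 0.55) ** 2
            - 0.001 * (duration_months - 9) ** 2)

replacement_rates = np.linspace(0.1, 0.9, 50)  # share of the prior wage
durations = range(3, 25)                       # months of payments
grid = list(itertools.product(replacement_rates, durations))

if __name__ == "__main__":
    # Each solve is independent, so the grid fans out across all cores.
    with ProcessPoolExecutor() as pool:
        welfare = list(pool.map(solve_model, grid))
    best = grid[int(np.argmax(welfare))]
    print(f"best policy in sweep: replacement rate {best[0]:.2f}, "
          f"{best[1]} months of payments")
```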

“My colleagues are really happy with BigTex,” Richter said. “It opens the door to research that previously couldn't be done. It's more efficient and people are benefiting.”

The burgeoning partnerships with UT Dallas, TACC, and data center operators are another success. “The fact that we built these partnerships is a very big deal,” he said. “We took an initiative at the local level and turned it into something that has benefited the organization at the national level.”

Not only does the research help economists at the Federal Reserve — it serves as a model for economics researchers in academia and industry.

“TACC produces pie charts of who’s using their resources. One thing that stands out… economists aren’t in them,” Richter said. “It’s important to get more economists and social scientists into the advanced computing fold. I think that’s something worth continuing.”

Header image: BigTex at Flexential Plano data center.

About the Author

Aaron Dubrow is a Science and Technology Writer with the Communications, Media & Design Group at the Texas Advanced Computing Center.
