SDSC’s ‘Trestles’ Supercomputer Still Going Strong Three+ Years Later

By Jan Zverina

December 21, 2018

Supercomputers typically have a useful life of about five years, as these high-performance systems, many running 24/7, slowly succumb to burn-out – of their nodes, that is – as well as steady advances in processing technologies.

Not so with Trestles, which was acquired more than three years ago by the Arkansas High Performance Computing Center (AHPCC) at the University of Arkansas after entering service at the San Diego Supercomputer Center (SDSC) at UC San Diego in mid-2011 under a $2.8 million National Science Foundation (NSF) grant.

(L to R) AHPCC Director David Chaffin; Director of Strategic Initiatives & User Services Jeff Pummill; and Senior Administrator/Program Director Pawel Wolinski, with the Trestles supercomputer. Image courtesy of AHPCC

Billed as a “high-productivity workhorse,” Trestles was based on the concept that by tailoring a system for the majority of modest-scale jobs rather than a handful of researchers who run jobs at thousands of core counts, users could achieve higher throughput and increased scientific productivity.

While at SDSC, Trestles users spanned a wide range of domain applications, including astronomy, biophysics, climate science, computational chemistry, materials science, and more. It was also recognized as a leading platform for science gateway applications; for example, the system served more than 650 users per month via the popular CIPRES phylogenetics portal alone.

“It’s terrific that University of Arkansas researchers have been able to use Trestles for several years beyond its decommissioning as a national NSF resource and to extend the scientific impact of NSF’s HPC investments,” said Richard Moore, the principal investigator for the Trestles award and SDSC’s now-retired deputy director.

Trestles continues to deliver on that strategy today, more than three years into its “next life” as a valuable research resource at the U of A. AHPCC’s latest estimates are that during that time, Trestles has provided more than 136 million CPU hours of service, with over 804,000 jobs run among almost 200 active users.

“Trestles came to us at a time when computational needs were peaking in the form of explosive growth and demand in the faculty researcher community,” said AHPCC Director of Strategic Initiatives & User Services Jeff Pummill, who is also a Trestles user in the area of multi-omics, primarily with the U of A’s Biological Sciences and Agricultural departments. “Queue wait times were getting unacceptably long and jobs were stacking up. So the arrival of 8000+ compute cores was a welcome sight for all of us.”

Pummill noted that, architecturally, Trestles has been ideal for work in bioinformatics and genomics, as the software in those fields is typically Shared Memory Parallel (SMP), in which multiple processor cores share a single memory space on one machine, as opposed to Distributed Memory Parallel (DMP), in which processes each have their own memory and coordinate by passing messages, whether on the same machine or across several. “Trestles’ nodes are configured with 32 compute cores and 64 gigs of memory, which is ideal for smaller bacterial genome work, but useful for many aspects of larger eukaryotic genome work,” he added.
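
To make the SMP/DMP distinction concrete, here is a minimal sketch in C with OpenMP, the standard shared-memory model: all threads run on one node and read the same array in a common address space, the way a 32-core Trestles node would be used. The GC-content task and the toy sequence are invented for illustration; this is not code from any actual Trestles workload.

```c
/* Shared-memory (SMP) illustration: threads on one node share the
 * sequence in a single address space. A DMP (e.g., MPI) version would
 * instead give each process its own chunk and exchange results via
 * messages. Compile with: gcc -fopenmp gc.c -o gc */
#include <omp.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    /* Toy "genome"; a real SMP genomics job would hold a far larger
     * sequence in the node's shared 64 GB of memory. */
    const char *seq = "ATGCGCGATTACAGGCCTTAAGGCGCGCAT";
    long n = (long)strlen(seq);
    long gc = 0;

    /* Each core counts part of the sequence; the reduction combines
     * the per-thread tallies into one shared total. */
    #pragma omp parallel for reduction(+:gc)
    for (long i = 0; i < n; i++)
        if (seq[i] == 'G' || seq[i] == 'C')
            gc++;

    printf("GC content: %.1f%% (up to %d threads)\n",
           100.0 * (double)gc / (double)n, omp_get_max_threads());
    return 0;
}
```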


What’s in a Name? Some Trestles Trivia

After the system was transferred to the Arkansas High Performance Computing Center, the center decided to keep the Trestles name. But why Trestles in the first place?

“I was taking up surfing at the time we proposed this system to the NSF, and thought that the Trestles Bridge in San Diego would be a nice way to acknowledge both the local aspect of the system, as well as the idea that it was a bridge to using high-performance computing,” according to Shawn Strande, SDSC’s deputy director.


Research Highlights

Some examples of research projects using Trestles at the U of A include:

  • Materials Engineering: A research team including Salvador Barraza-Lopez, associate professor of physics at the U of A, and Taneshwor Kaloni, a former post-doctoral researcher in Barraza-Lopez’s lab, shed light on the behavior of the ultrathin material tin telluride (SnTe). The study detailing their findings was published in the journal Advanced Materials.
  • Neurosciences: Vidit Agrawal, a graduate student in the U of A’s Physics Department, has been using Trestles to perform simulations of large neural networks and conduct statistical analyses of experimental results.
  • Supply chain analysis: Agrawal has also used Trestles to investigate the structural fragility of supply networks and explore its relationship with a firm’s equity risk. “AHPCC has been of great help to me as it has cut down my overall computation time from months to days,” he said.
  • Microbiome research: Jiangchao Zhao, an assistant professor with the U of A’s Department of Animal Science, used Trestles to identify gut microbiome signatures associated with longevity, a promising modulation target for healthy aging.

Additional research projects can be found here.

Re-use, Not Recycling

While many supercomputers still end up on the scrap heap, the continued operation of Trestles beyond its expected lifespan is one example of how these systems can keep delivering computational power and scientific productivity well past their planned retirement.

In early 2017, SDSC and the Simons Foundation’s Flatiron Institute in New York reached an agreement under which the majority of SDSC’s data-intensive Gordon supercomputer would be used by Simons for ongoing research once the system completed its tenure as an NSF resource on March 31 of that year, after five years of service. While Gordon is now primarily used by the Simons Foundation, the system remains housed in SDSC’s data center.

“It’s very gratifying to see SDSC’s HPC systems continue to serve a wide range of researchers following their NSF tenures,” said SDSC Director Michael Norman. “For us, it’s testimony to designing a robust architecture from the start, which contributes to their useful lives well beyond what’s typical for such systems.”

In early 2018, the NSF extended the use of SDSC’s current petascale system, Comet, for a sixth year of service, into March of 2021. Comet is now one of the most widely used supercomputers in the NSF’s XSEDE program. Under a separate NSF award valued at about $900,000, SDSC recently doubled the number of graphics processing units (GPUs) on Comet in direct response to growing demand for GPU computing among a wide range of research domains.

About AHPCC

Founded in 2008, the Arkansas High Performance Computing Center is a core research facility under the Office of Research and Innovation at the University of Arkansas. It supports research for about 260 users in about 30 academic areas across campus, including bioinformatics, condensed matter physics, integrated nanoscience, computational chemistry, computational biomagnetics, materials science, spatial science, and economics, among others.

About SDSC

As an Organized Research Unit of UC San Diego, SDSC is considered a leader in data-intensive computing and cyberinfrastructure, providing resources, services, and expertise to the national research community, including industry and academia. Cyberinfrastructure refers to an accessible, integrated network of computer-based resources and expertise, focused on accelerating scientific inquiry and discovery. SDSC supports hundreds of multidisciplinary programs spanning a wide variety of domains, from earth sciences and biology to astrophysics, bioinformatics, and health IT. SDSC’s petascale Comet supercomputer is a key resource within the National Science Foundation’s XSEDE (eXtreme Science and Engineering Discovery Environment) program.
