HPC in Life Sciences 2020 Part 1: Rise of AMD, Data Management’s Wild West, More 

By John Russell

May 20, 2020

Given the disruption caused by the COVID-19 pandemic and the massive enlistment of major HPC resources to fight it, it is especially appropriate to review the state of HPC use in life sciences. This is something HPCwire has been doing yearly with the Bioteam consultancy, whose “boots-on-the-street” perspective has a practical, insider feel. No surprise, AI figures more prominently in their practice this year, a change marked by Bioteam’s recent hiring of Fernanda Foertter, a former AI guru at Nvidia.

Ari Berman, BioTeam

This year’s conversation included Ari Berman, CEO of Bioteam, Chris Dagdigian, one of Bioteam’s founders, and Mike Steeves, senior scientific consultant. On the docket were processor diversity (AMD is winning while Arm hasn’t made much headway yet in LS); storage and data management (get ready to pay for what you store!); network needs and practices (perhaps not surprisingly there’s a split in practice here between academia and industry); and AI (the mashup of hype, tire-kicking, and real use continues). Part one presented here tackles processors and storage.

But first a brief prologue.

Life sciences has traditionally been a late adopter of HPC technology. The requisite HPC applications (large, tightly-coupled) weren’t there. Also, the healthcare community tends to be conservative (do no harm) preferring proven, cost-effective, and more easily supported IT. Data analytics was the early breakthrough, driven by DNA sequencing’s need for massive parallel processing. Predictive simulation remained more a work-in-progress, hobbled by gaps in basic biology understanding and the lack of sufficiently rigorous (or comprehensive) mathematical descriptions of intricate biological systems.

That picture has changed dramatically during recent years. Not only has the proliferation of instruments generating vast amounts of data mushroomed – recently led by cryo-EM and other imaging technologies – but also a steady deciphering of functional genomics and basic biology has produced more precise descriptions of biological processes that can be turned into improved simulations useful in research and the clinic. Of course, molecular modeling techniques have also advanced. Rather quickly the breadth of computational power used in life sciences expanded.

Using CANDLE deep learning to extract protein folding intermediate states. | National Cancer Institute

Now, AI has burst onto the scene, transforming how we think about HPC and becoming a formidable force in life sciences. Not only is AI critical for making sense of the biomedical data flood, but it has also become an important catalyst fusing data analytics and simulation into a blended approach that’s proving remarkably effective. It is possible, for example, to use AI techniques on large video datasets from ‘living’ experiments to derive some of the first-principles ODEs/PDEs that underpin mechanistic simulation. (See HPCwire coverage, ISC Keynote: The Algorithms of Life – Scientific Computing for Systems Biology)

Pretty clearly, bio-computational research has come a long way in a fairly short time. In 2015, Bioteam’s Berman estimated ~15-25 percent of biomedical researchers used HPC in one or another form. The next year it was up to ~30-50 percent.

“The last time we talked (2019), we thought it would be up to about 75 percent,” said Berman in this year’s HPC-in-LS review. “Today, I don’t think there is a single modern life sciences research or diagnostics protocol that doesn’t use advanced computing in some way. I would be willing to say somewhere between 95-100 percent of applications require advanced computing in some manner. Some of the older-style research [that relies on] common plate readers and minor statistical analytics probably doesn’t [require HPC], but I think those days are going by the wayside.

“I say all of this with a subtext that not everyone knows they’re using HPC. The applications, analytics stacks, etc. that front HPC systems make it look like researchers are just using another website or using an application that came with an instrument but it really is using sort of these back-end very scalable systems.”

It may be useful to note a language shift. It used to be that the HPC community and infrastructure were quite distinct from enterprise infrastructure and “non-science” users. Today those worlds are in collision and our ideas about what constitutes advanced computing are changing. AI and accelerated computing are the drivers shaping what’s become a more blended infrastructure. Very recently it’s become common to refer to the datacenter, at least conceptually, as the ‘computational unit’ able to handle a wide variety of previously distinct applications including HPC/AI. Today what constitutes advanced scale computing seems also to embrace HPC.

In one sense life science research embodies this trend as its computational needs have expanded alongside advances in computational technology itself. What follows is part one of our annual two-part look at HPC/AI in life sciences.

PROCESSOR WARS – NOT EXACTLY

The age of CPU dominance isn’t over, but the battle for mindshare seems diminished as bioresearch infrastructure consumers chase price/performance in CPUs, which now play a reduced role in heterogeneous architectures. Attention has shifted to GPUs – more numerous on a per-system basis and perhaps more impactful in the current scheme of things. Meanwhile, cutting-edge AI-focused accelerators are being aggressively piloted only at big DoE labs, and still need time to mature and settle into niches before gaining wide LS acceptance. To a significant degree these trends in processor use are continuations of last year’s trends.

“The biggest change that we’ve seen is for people buying on-premises equipment or the large HPC deals. All of the momentum right now is behind AMD; it has the roadmap, the benchmarking, and the pricing,” said Dagdigian. “Intel doesn’t really have the greatest answer for some of these things.”

This accords with AMD’s resurgence in high-end servers broadly and in supercomputers. That said, many are watching Intel’s realignment under CEO Bob Swan and waiting to see how the forthcoming processor (Sapphire Rapids) and Xe GPU line perform. The Aurora supercomputer, featuring both Intel GPUs and CPUs, will be the showcase.

Berman reports the success of DoE’s Summit supercomputer, including its ongoing work on COVID-19 research, has drawn positive attention for IBM in the life sciences community. That said, mainstream adoption of Power microprocessor-based systems has been slow, and IBM hasn’t said much about upgrading the Power9 chips or provided details for Power10. Also, the OpenPOWER Foundation has moved under the Linux Foundation’s authority. Time will tell. Berman said, “IBM is really pushing the quantum areas and their cloud architecture and services and software services as a company.” HPC, or at least Power, could wind up a stepchild.

Interestingly, Arm’s resurgence in HPC hasn’t yet spread to life sciences. “Life scientists tend to be a little timid when it comes to new architectures. Life sciences is going to wade into the Arm territory when it’s [more established]. The resurgence in HPC in general is real and you may hear some announcements around the time of SC2020,” said Berman.

FPGA adoption in life sciences has been slow according to Bioteam despite abstraction efforts around hardware description languages to make them easier to use and development of Python libraries that could use them. “People just aren’t really seeing the bang for the buck there or really understanding how to incorporate them,” said Berman.

The GPU market is suddenly most interesting. Intel’s plunge into GPUs and AMD’s wins in big HPC systems using both AMD processors and AMD GPUs (Radeon) bear watching, according to Bioteam. All agree Nvidia remains solidly ahead, and its introduction last week of the Ampere A100 GPU strengthens that position. But price-performance plays well in life sciences, and AMD has the edge there. So far AMD has been reluctant to compete with Nvidia in high-end GPU markets, but perhaps not for long. It is noteworthy that Nvidia chose an AMD CPU (64-core Epyc) for its DGX-A100 system. And CUDA 11 offers Arm64 support. Murky waters here.

Then there’s Intel’s much-watched GPU gambit.

“I’ll call it a strange surprise, Intel’s forging into the GPU space with Ponte Vecchio (the top SKU in its forthcoming GPU line). It looks like it can hold its own against the others, although, you know, Nvidia is still far ahead. Intel’s whole play is to create a unified platform out of CPU, GPU, storage, memory and software using oneAPI. The promise is that someone could essentially write one piece of software using oneAPI and have it equally processed, without any changes to your code, on a GPU or a system-level CPU. That’s very interesting in some aspects,” said Berman.

At the moment, use of exotic accelerators like the Cerebras wafer-scale chip is a priority only at big testing centers such as Argonne National Laboratory, which in fact is aggressively testing as many new AI accelerator chips as it can get its hands on, according to Rick Stevens, ANL’s associate laboratory director, life sciences, computing, and environment. The Cerebras chip is enormous – 1.2 trillion transistors, 400,000 AI cores. ANL has already put the Cerebras chip to work on COVID-19. More mainstream life science researchers will wait.

Cerebras AI Chip

Berman joked, “The Cerebras chip is like the size of my head, right? It is an amazing engineering feat and also a big stunt. Back to your question about these chips generally. Other than really bleeding-edge problems, like some of the cancer problems they’re trying to solve in Cancer Moonshot or real-time processing of diagnostic data against known data, those sorts of things that are being worked on, there’s not a lot of application for [these chips] yet in our space. Keep in mind, you know, it took life sciences 20 years to adopt GPUs massively.”

Steeves added, “Even with GPUs. We want to get new exciting and interesting [devices], but then you have to start rewriting codes to take advantage of it. Suddenly you see a lot less interest and demand for it. It’s probably going to take a few years for someone to put together that killer app for a particular hardware accelerator, or perhaps when there’s a paper that’s so interesting that I want to try it and the software’s available.”

Berman noted, “In life sciences, a major step forward in utilizing things like coprocessors and better algorithms happens when someone else does the hard work of developing them. That’s because NIH doesn’t fund things like that. You know, grants aren’t going to cover multi-year development arcs for optimizing algorithms for GPUs. The only thing they cover is the results coming out of work that could be published. So the incentive also isn’t there.”

STORAGE & DM – TAMING THE WILD WEST?

Storage and data management are perennial challenges in life sciences. Lattice light-sheet microscopes, for example, can generate on the order of 2 or 3 terabytes in a couple of hours, and they are just one of many imaging instruments generating vast datasets. Fill a room or floor with these kinds of instruments and pretty quickly you’ve generated a lot of data. Today, though, the problem isn’t so much selecting and deploying needed storage capacity – that’s mostly a solved problem according to Bioteam. It’s managing the data.
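To put those instrument numbers in perspective, a quick back-of-the-envelope calculation (assuming 3 TB over two hours, per the figures above) gives the sustained write rate a single microscope demands of the storage tier:

```python
# Back-of-the-envelope: sustained throughput for one lattice light-sheet run.
# Assumed figures: 3 TB generated over 2 hours, from the article's
# "2 or 3 terabytes in a couple of hours".

def sustained_rate_gb_per_s(terabytes: float, hours: float) -> float:
    """Return the average write rate in GB/s (decimal units)."""
    gigabytes = terabytes * 1000.0
    seconds = hours * 3600.0
    return gigabytes / seconds

rate = sustained_rate_gb_per_s(3.0, 2.0)
print(f"{rate:.2f} GB/s per instrument")          # ~0.42 GB/s
print(f"{rate * 10:.1f} GB/s for a room of ten")  # ~4.2 GB/s
```

Roughly 0.4 GB/s per instrument is modest in isolation; a floor of ten running concurrently is what starts to stress shared file systems.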

Think about the growing use of machine learning and deep learning to mine all of this data for meaningful models, alongside traditional analytics. The old garbage-in, garbage-out mantra applies. Beyond data quality, there’s all the meta-tagging that needs to be accomplished and tracked. Also, the data needs to be broadly accessible to collaborators and other researchers while maintaining security and confidentiality.
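A minimal sketch of what that meta-tagging can look like in practice: a JSON “sidecar” written next to each dataset, recording a checksum and caller-supplied tags so the file stays findable and verifiable as it moves between systems. The field names here are illustrative, not any particular metadata standard.

```python
import datetime
import hashlib
import json
import pathlib


def write_sidecar(data_path: str, tags: dict) -> pathlib.Path:
    """Write a JSON metadata sidecar next to a dataset file.

    Records a SHA-256 checksum and a timestamp alongside caller-supplied
    tags (project, instrument, etc.), so the dataset remains discoverable
    and its integrity checkable after transfers.
    """
    p = pathlib.Path(data_path)
    digest = hashlib.sha256(p.read_bytes()).hexdigest()
    record = {
        "file": p.name,
        "sha256": digest,
        "recorded_utc": datetime.datetime.utcnow().isoformat() + "Z",
        **tags,
    }
    sidecar = p.with_suffix(p.suffix + ".meta.json")
    sidecar.write_text(json.dumps(record, indent=2))
    return sidecar


# Hypothetical usage: tag a microscope capture with project and instrument.
# write_sidecar("scan_0042.tiff", {"project": "cryoem-pilot",
#                                  "instrument": "lattice-lightsheet-1"})
```

The point is less the specific schema than that tagging happens at write time, by machine, rather than being reconstructed (or not) months later.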

Chris Dagdigian, Bioteam

Focusing on storage policy, Dagdigian offered three observations and sounded almost like a revival tent preacher:

  • “There’s a practice I fully intend to steal from the DoE and the supercomputing sites. When NERSC rolled out its new all-flash 30 petabyte NVMe storage array, one of the striking things about the announcement is they are moving to no home directories at all, or no home directories of any considerable size for anybody. 100% of the new petabyte scale storage is being allocated. That’s something that I want to see pushed more in enterprise. One of the single biggest problems with the data mess we have is too many people are storing crap in their own directories, project-based, team-based stuff. It’s to the point where individual scientists might have 10-20 terabytes of stuff sitting under a home directory. That’s not findable. It’s not easily shareable. We are now at the point where personal storage is no longer on the table. If you want more than 500 gigs, we’re allocating it and it’s got to be from a project. It’s got to be in a particular area, and it’s going to follow a naming convention, a data standards convention, and you’re going to have to justify the allocation.
  • “The second thing is – I think I stole this from Amazon’s messaging around their shared responsibility model – is a phrase we’ve started to use in an assessment report that we wrote a couple months ago. [It’s] that storage is a consumable resource and it should be treated exactly the same way as an expensive laboratory consumable, something that’s no longer free or unlimited, it’s no longer on demand. Just like you’re budgeting for your reagents and your assay kits and other stuff you’re buying for your lab. That means scientists are budgeting for it, planning for it, and more importantly, they have to justify their consumption.
  • “The third and final thing is around data management, data organization, and data curation. I’ll repeat my standard buzz phrase: ‘If you’ve got a petabyte and you don’t have a full-time human being managing or curating the data, not only are you wasting more in hardware cost than the cost of that data curator, but you’re also setting yourself up for a lot of gnarly data management, data discovery, and data dissemination issues down the road.’ Bioteam has seen too many storage environments in scientific settings where it almost feels like the Wild West – no rules, no standards, no curation, very few SOPs. I feel like in 2020 the unmanaged, Wild West petascale storage environment should be the exception, not the rule – and it’s still the rule.”
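The “no personal storage over 500 gigs” policy in the first point is easy to enforce mechanically. A minimal sketch, assuming a conventional layout where each user owns one directory under a common home root (the paths and the 500 GB cap are illustrative, taken from Dagdigian’s figure):

```python
import pathlib

# Illustrative per-user cap from the policy described above: 500 GB.
CAP_BYTES = 500 * 1000**3


def directory_size(root: pathlib.Path) -> int:
    """Total size in bytes of all regular files under root."""
    return sum(f.stat().st_size for f in root.rglob("*") if f.is_file())


def over_cap_users(home_root: str, cap: int = CAP_BYTES) -> list:
    """Return (username, bytes_used) for every home directory over the cap."""
    offenders = []
    for home in sorted(pathlib.Path(home_root).iterdir()):
        if home.is_dir():
            used = directory_size(home)
            if used > cap:
                offenders.append((home.name, used))
    return offenders


# Hypothetical usage: nightly cron job flagging offenders under /home.
# for user, used in over_cap_users("/home"):
#     print(f"{user}: {used / 1000**4:.1f} TB over personal-storage policy")
```

In production this walk would be replaced by file-system quota accounting (walking a petabyte tree is slow), but the policy logic — scan, compare against a cap, report — is the same.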

The data management religion, it seems, has been preached for years. It will be interesting to see if major changes do indeed occur.

On the storage technology front Berman said, “Not a lot has changed in the last year except this very interesting push-and-pull war among next-generation file systems like WekaIO and Vast Data, which have made a surge into this space. It’s a different way of approaching data storage and scalability and, more importantly, IO availability, especially in computer architecture. The fascinating thing about those particular architectures for life sciences is they help deal with the high data diversity and IO requirements of various workflows and analytics that come from the wide diversity of data collection used throughout our domain.

“We’ve always said that Lustre is super hard for us to use because we often have millions of small files and Lustre doesn’t do that well. GPFS or Spectrum Scale (IBM) is slightly better if you know how to tune it for that, and you don’t have too much. Outside of that there wasn’t anything you could do until these two things (WekaIO and Vast Data) came up except for dealing with high performance local scratch NVMe in nodes which most people didn’t know how to use.

“So that’s been sort of an interesting shift, and now Optane (Intel) and 3D XPoint (Micron) have become more mainstream and possibly more affordable. That turns into yet another thing that can be wrangled in the data and IO space, especially as a scratch layer that is even faster than anything else out there. So, you know, slow memory but very fast local storage, and we’re testing some of that out now. It’s a very interesting space that I think is ripe for yet another innovation.”

Mainstay HPC storage vendors DDN (Lustre) and IBM (Spectrum Scale) still handle the lion’s share of the market. Cray, now HPE, acquired the ClusterStor line from Seagate in 2017 and debuted a new version, the ClusterStor E1000, last fall. Berman suggests the traditional storage field generally, and its vendors, are under pressure from emerging software-defined storage alternatives. He says solid state drives continue to displace platter-based technologies. Again, these trends are largely continuations from the past year.

An interesting relative newcomer is Intel’s distributed asynchronous object store (DAOS) which will be used in the Aurora supercomputer scheduled to be the first U.S. exascale system and based at ANL. It will feature Intel CPUs and GPUs (Ponte Vecchio). Intel describes DAOS as “an open source software-defined scale-out object store that provides high bandwidth, low latency, and high I/O operations per second (IOPS) storage containers to HPC applications.”

Said Berman, “It’s too new to say much about DAOS, but the concept of asynchronous IO is very interesting. It’s essentially a queue mechanism at the system write level, so system waits in the processors don’t have to happen while a confirmed write-back comes from the disks. So asynchronous IO allows jobs to keep running while you’re waiting on storage to happen, to a limit of course. That would really improve the data input-output pipelines in those systems. It’s a very interesting idea. I like asynchronous data writes and asynchronous storage access. I can see corruption very easily creeping into those types of writes without very careful sequencing. It will be interesting to watch. If it works it will be a big innovation.”
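Berman’s description — compute proceeding while writes complete in the background, “to a limit of course” — can be sketched with a bounded queue. This is a toy model of the general asynchronous-write idea, not of DAOS itself: once a fixed number of writes are in flight, the producer blocks until storage catches up, which is the back-pressure that limit implies.

```python
import asyncio


async def writer(queue: asyncio.Queue) -> None:
    """Drain pending writes in the background (simulated slow storage)."""
    while True:
        _block = await queue.get()
        await asyncio.sleep(0.01)  # stand-in for the confirmed write-back
        queue.task_done()


async def run_job(n_blocks: int, depth: int = 4) -> int:
    """Produce n_blocks results; compute overlaps with outstanding writes.

    The bounded queue is the "limit": with `depth` writes in flight the
    producer blocks on put() until storage catches up, mirroring the
    back-pressure an asynchronous IO layer must apply.
    """
    queue: asyncio.Queue = asyncio.Queue(maxsize=depth)
    drain_task = asyncio.create_task(writer(queue))
    produced = 0
    for i in range(n_blocks):
        await asyncio.sleep(0)  # stand-in for computing block i
        await queue.put(i)      # returns immediately unless the queue is full
        produced += 1
    await queue.join()          # wait for the tail of writes to land
    drain_task.cancel()
    return produced


print(asyncio.run(run_job(16)))  # 16
```

Berman’s sequencing worry maps directly onto this model: anything that reads a block before its queued write has landed sees stale or missing data, which is why real asynchronous stores need careful ordering guarantees.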

HPCwire will publish Part 2 in the near future.
