HPC in Life Sciences 2020 Part 1: Rise of AMD, Data Management’s Wild West, More 

By John Russell

May 20, 2020

Given the disruption caused by the COVID-19 pandemic and the massive enlistment of major HPC resources to fight it, it is especially appropriate to review the state of HPC use in life sciences. This is something HPCwire has been doing yearly with the Bioteam consultancy, whose “boots-on-the-street” perspective has a practical, insider feel. No surprise, AI figures more prominently in their practice this year, a change marked by Bioteam’s recent hiring of Fernanda Foertter, a former AI guru at Nvidia.

Ari Berman, BioTeam

This year’s conversation included Ari Berman, CEO of Bioteam, Chris Dagdigian, one of Bioteam’s founders, and Mike Steeves, senior scientific consultant. On the docket were processor diversity (AMD is winning while Arm hasn’t made much headway yet in LS); storage and data management (get ready to pay for what you store!); network needs and practices (perhaps not surprisingly there’s a split in practice here between academia and industry); and AI (the mashup of hype, tire-kicking, and real use continues). Part one presented here tackles processors and storage.

But first a brief prologue.

Life sciences has traditionally been a late adopter of HPC technology. The requisite HPC applications (large, tightly-coupled) weren’t there. Also, the healthcare community tends to be conservative (do no harm), preferring proven, cost-effective, and more easily supported IT. Data analytics was the early breakthrough, driven by DNA sequencing’s need for massive parallel processing. Predictive simulation remained more a work-in-progress, hobbled by gaps in basic biology understanding and the lack of sufficiently rigorous (or comprehensive) mathematical descriptions of intricate biological systems.

That picture has changed dramatically during recent years. Not only has the proliferation of instruments generating vast amounts of data mushroomed – recently led by cryo-EM and other imaging technologies – but also a steady deciphering of functional genomics and basic biology has produced more precise descriptions of biological processes that can be turned into improved simulations useful in research and the clinic. Of course, molecular modeling techniques have also advanced. Rather quickly the breadth of computational power used in life sciences expanded.

Using CANDLE deep learning to extract protein folding intermediate states. | National Cancer Institute

Now, AI has burst onto the scene, transforming how we think about HPC and becoming a formidable force in life sciences. Not only is AI critical for making sense of the biomedical data flood, but it has also become an important catalyst fusing data analytics and simulation into a blended approach that’s proving remarkably effective. It is possible, for example, to use AI techniques on large video datasets from ‘living’ experiments to derive some of the first-principles ODEs/PDEs needed to describe mechanistic simulation. (See HPCwire coverage, ISC Keynote: The Algorithms of Life – Scientific Computing for Systems Biology)

Pretty clearly, bio-computational research has come a long way in a fairly short time. In 2015, Bioteam’s Berman estimated ~15-25 percent of biomedical researchers used HPC in one or another form. The next year it was up to ~30-50 percent.

“The last time we talked (2019), we thought it would be up to about 75 percent,” said Berman in this year’s HPC-in-LS review. “Today, I don’t think there is a single modern life sciences research or diagnostics protocol that doesn’t use advanced computing in some way. I would be willing to say somewhere between 95-100 percent of applications require advanced computing in some manner. Some of the older-style research [that uses] common plate readers and relies on minor statistical analytics probably doesn’t [require HPC], but I think those days are going by the wayside.

“I say all of this with a subtext that not everyone knows they’re using HPC. The applications, analytics stacks, etc. that front HPC systems make it look like researchers are just using another website or using an application that came with an instrument but it really is using sort of these back-end very scalable systems.”

It may be useful to note a language shift. It used to be that the HPC community and infrastructure were quite distinct from enterprise infrastructure and “non-science” users. Today those worlds are in collision and our ideas about what constitutes advanced computing are changing. AI and accelerated computing are the drivers shaping what’s become a more blended infrastructure. Very recently it’s become common to refer to the datacenter, at least conceptually, as the ‘computational unit’ able to handle a wide variety of previously distinct applications, including HPC/AI. By that measure, HPC is now just one part of what constitutes advanced-scale computing.

In one sense life science research embodies this trend as its computational needs have expanded alongside advances in computational technology itself. What follows is part one of our annual two-part look at HPC/AI in life sciences.

PROCESSOR WARS – NOT EXACTLY

The age of CPU dominance isn’t over, but the battle for mindshare seems diminished as bioresearch infrastructure consumers chase price/performance in CPUs, which now play a reduced role in heterogeneous architectures. Attention has shifted to GPUs – more numerous on a per-system basis and perhaps more impactful in the current scheme of things. Conversely, cutting-edge AI-focused accelerators are being aggressively piloted only at big DoE labs, and still need time to mature and settle into niches before gaining wide LS acceptance. To a significant degree these trends in processor use are continuations of last year’s trends.

“The biggest change that we’ve seen is for people buying on-premises equipment or doing the large HPC deals. All of the momentum right now is behind AMD; it has the roadmap, the benchmarking, and pricing,” said Dagdigian. “Intel doesn’t really have the greatest answer for some of these things.”

This accords with AMD’s resurgence in high-end servers broadly and in supercomputers. That said, many are watching Intel’s realignment under CEO Bob Swan and waiting to see how the forthcoming processor (Sapphire Rapids) and Xe GPU line perform. The Aurora supercomputer, featuring both Intel GPUs and CPUs, will be the showcase.

Berman reports the success of DoE’s Summit supercomputer, including its ongoing work on COVID-19 research, has drawn positive attention for IBM in the life sciences community. That said, mainstream adoption of Power microprocessor-based systems has been slow, and IBM hasn’t said much about upgrading the Power9 chips or provided details for Power10. Also, the OpenPOWER Foundation has moved under the Linux Foundation’s authority. Time will tell. Berman said, “IBM is really pushing the quantum areas and their cloud architecture and services and software services as a company.” HPC, or at least Power, could wind up a stepchild.

Interestingly, Arm’s resurgence in HPC hasn’t yet spread to life sciences. “Life scientists tend to be a little timid when it comes to new architectures. Life sciences is going to wade into the Arm territory when it’s [more established]. The resurgence in HPC in general is real and you may hear some announcements around the time of SC2020,” said Berman.

FPGA adoption in life sciences has been slow according to Bioteam, despite efforts to abstract away hardware description languages and the development of Python libraries that can drive FPGAs. “People just aren’t really seeing the bang for the buck there or really understanding how to incorporate them,” said Berman.

The GPU market is suddenly most interesting. Intel’s plunge into GPUs and AMD’s wins in big HPC systems using both AMD CPUs and AMD GPUs (Radeon) bear watching, according to Bioteam. All agree Nvidia remains solidly ahead, and its introduction last week of the Ampere A100 GPU strengthens that position. But price-performance plays well in life sciences, and AMD has the edge there. So far AMD has been reluctant to compete with Nvidia in high-end GPU markets, but perhaps not for long. It is noteworthy that Nvidia chose an AMD CPU (64-core Epyc) for its DGX-A100 system. And CUDA 11 offers Arm64 support. Murky waters here.

Then there’s Intel’s much-watched GPU gambit.

“I’ll call it a strange surprise, Intel’s forging into the GPU space with Ponte Vecchio (the top SKU in its forthcoming GPU line). It looks like it can hold its own against the others, although, you know, Nvidia is still far ahead. Intel’s whole play is to create a unified platform out of CPU, GPU, storage, memory and software using oneAPI. The promise is that someone could essentially write one piece of software using oneAPI and have it processed equally, without any changes to your code, on a GPU or a system-level CPU. That’s very interesting in some aspects,” said Berman.

At the moment, exotic accelerators like the Cerebras wafer-scale chip are a priority only at big testing centers such as Argonne National Laboratory, which in fact is aggressively testing as many new AI accelerator chips as it can get its hands on, according to Rick Stevens, ANL’s associate laboratory director for computing, environment, and life sciences. The Cerebras chip is enormous – 1.2 trillion transistors, 400,000 AI cores. ANL has already put the Cerebras chip to work on COVID-19. More mainstream life science researchers will wait.

Cerebras AI Chip

Berman joked, “The Cerebras chip is like the size of my head, right? It is an amazing engineering feat and also a big stunt. Back to your question about these chips generally. Other than really bleeding-edge problems, like some of the cancer problems they’re trying to solve in Cancer Moonshot, or real-time processing of diagnostic data against known data, those sorts of things that are being worked on, there’s not a lot of application for [these chips] yet in our space. Keep in mind, you know, it took life sciences 20 years to adopt GPUs massively.”

Steeves added, “Even with GPUs, we want to get new, exciting and interesting [devices], but then you have to start rewriting codes to take advantage of it. Suddenly you see a lot less interest and demand for it. It’s probably going to take a few years for someone to put together that killer app for a particular hardware accelerator, or perhaps when there’s a paper that’s so interesting that I want to try it and the software’s available.”

Berman noted, “In life sciences, a major step forward in utilizing things like coprocessors and better algorithms happens when someone else does the hard work of developing them. That’s because NIH doesn’t fund things like that. You know, grants aren’t going to cover multi-year development arcs for optimizing algorithms for GPUs. The only thing they cover is the results coming out of work that could be published. So the incentive also isn’t there.”

STORAGE & DM – TAMING THE WILD WEST?

Storage and data management are perennial challenges in life sciences. Lattice light-sheet microscopes, for example, can generate on the order of 2 or 3 terabytes in a couple of hours, and they are just one of many imaging instruments generating vast datasets. Fill a room or floor with these kinds of instruments and pretty quickly you’ve generated a lot of data. Today, though, the problem isn’t so much selecting and deploying needed storage capacity – that’s mostly a solved problem according to Bioteam. It’s managing the data.

Think about the growing use of machine learning and deep learning to mine all of this data for meaningful models and traditional analytics. The old garbage in-garbage out mantra applies. Beyond data quality, there’s all the meta-tagging that needs to be accomplished and tracked. Also, the data needs to be broadly accessible to collaborators and other researchers while maintaining security and confidentiality.

Chris Dagdigian, Bioteam

Focusing on storage policy, Dagdigian offered three observations and sounded almost like a revival tent preacher:

  • “There’s a practice I fully intend to steal from the DoE and the supercomputing sites. When NERSC rolled out its new all-flash 30-petabyte NVMe storage array, one of the striking things about the announcement was that they are moving to no home directories at all, or no home directories of any considerable size for anybody. 100% of the new petabyte-scale storage is being allocated. That’s something that I want to see pushed more in enterprise. One of the single biggest problems with the data mess we have is too many people are storing crap in their own directories, project-based, team-based stuff. It’s to the point where individual scientists might have 10-20 terabytes of stuff sitting under a home directory. That’s not findable. It’s not easily shareable. We are now at the point where personal storage is no longer on the table. If you want more than 500 gigs, we’re allocating it and it’s got to be from a project. It’s got to be in a particular area, and it’s going to follow a naming convention, a data standards convention, and you’re going to have to justify the allocation.
  • “The second thing is – I think I stole this from Amazon’s messaging around their shared responsibility model – is a phrase we’ve started to use in an assessment report that we wrote a couple months ago. [It’s] that storage is a consumable resource and it should be treated exactly the same way as an expensive laboratory consumable, something that’s no longer free or unlimited, it’s no longer on demand. Just like you’re budgeting for your reagents and your assay kits and other stuff you’re buying for your lab. That means scientists are budgeting for it, planning for it, and more importantly, they have to justify their consumption.
  • “The third and final thing is around data management, data organization, and data curation. I’ll repeat my standard buzz phrase: ‘If you’ve got a petabyte, and you don’t have a full-time human being managing or curating the data, not only are you wasting more in hardware cost than the cost of that data curator, but you’re also setting yourself up for a lot of gnarly data management, data discovery, and data dissemination issues down the road.’ Bioteam has seen more storage environments in scientific settings where it almost feels like it’s the Wild West – no rules, no standards, no curation, very few SOPs. I feel like in 2020 the unmanaged, Wild West petascale storage environment should be the exception, not the rule – and it’s still the rule.”

The data management religion, it seems, has been preached for years. It will be interesting to see if major changes do indeed occur.

On the storage technology front Berman said, “Not a lot has changed in the last year except this very interesting push-and-pull war between next-generation file systems like WekaIO and Vast Data, which have made a surge into this space. It’s a different way of approaching data storage and scalability and, more importantly, IO availability, especially in compute architectures. The fascinating thing about those particular architectures for life sciences is they help deal with the high data diversity and IO requirements of various workflows and analytics that come from the wide diversity of data collection used throughout our domain.

“We’ve always said that Lustre is super hard for us to use because we often have millions of small files and Lustre doesn’t do that well. GPFS or Spectrum Scale (IBM) is slightly better if you know how to tune it for that, and you don’t have too many [of them]. Outside of that, there wasn’t anything you could do until these two things (WekaIO and Vast Data) came up, except for dealing with high-performance local scratch NVMe in nodes, which most people didn’t know how to use.

“So that’s been sort of an interesting shift, and now Optane (Intel) and 3D XPoint (Micron) have become more mainstream and possibly more affordable. That turns into yet another thing that can be wrangled in the data and IO space, especially as a scratch layer that is even faster than anything else out there. So, you know, slow memory but very fast local storage, and we’re testing some of that out now. It’s a very interesting space that I think is ripe for yet another innovation.”

Mainstay HPC storage vendors DDN (Lustre) and IBM (Spectrum Scale) still handle the lion’s share of the market. Cray, now HPE, acquired the ClusterStor line from Seagate in 2017 and debuted a new version, ClusterStor E1000, last fall. Berman suggests the traditional storage field generally and its vendors are under pressure from emerging software-defined storage alternatives. He says solid state drives continue to displace platter-based technologies. Again, these trends are largely continuations from the past year.

An interesting relative newcomer is Intel’s distributed asynchronous object store (DAOS) which will be used in the Aurora supercomputer scheduled to be the first U.S. exascale system and based at ANL. It will feature Intel CPUs and GPUs (Ponte Vecchio). Intel describes DAOS as “an open source software-defined scale-out object store that provides high bandwidth, low latency, and high I/O operations per second (IOPS) storage containers to HPC applications.”

Said Berman, “It’s too new to say much about DAOS, but the concept of asynchronous IO is very interesting. It’s essentially a queue mechanism at the system write level, so waits in the processors don’t have to happen while a confirmed write-back comes from the disks. Asynchronous IO allows jobs to keep running while you’re waiting on storage to happen, to a limit of course. That would really improve the data input-output pipelines in those systems. It’s a very interesting idea. I like asynchronous data writes and asynchronous storage access, [but] I can see corruption very easily creeping into those types of things and data without very careful sequencing. It will be interesting to watch. If it works it will be a big innovation.”
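The queue mechanism Berman describes can be sketched in a few lines of Python. To be clear, this is only an illustrative sketch of the general idea – a background thread draining a write queue so the compute thread never blocks on disk – and not DAOS itself; the `AsyncWriter` class and its record format are invented for the example.

```python
import queue
import tempfile
import threading

# Sketch of asynchronous writes (not DAOS): a background writer drains a
# queue so the "compute" thread keeps working instead of blocking on disk.
class AsyncWriter:
    def __init__(self, path):
        self._q = queue.Queue()
        self._f = open(path, "w")
        self._t = threading.Thread(target=self._drain, daemon=True)
        self._t.start()

    def _drain(self):
        while True:
            item = self._q.get()
            if item is None:      # sentinel: stop draining
                break
            self._f.write(item)   # the slow, "confirmed" write happens here

    def write(self, data):
        self._q.put(data)         # returns immediately; no wait on the disk

    def close(self):
        self._q.put(None)         # queue the sentinel after all pending writes
        self._t.join()            # wait for the queue to drain (the "limit")
        self._f.close()

path = tempfile.NamedTemporaryFile(delete=False).name
w = AsyncWriter(path)
for i in range(5):
    w.write(f"record {i}\n")      # compute loop never blocks on I/O
w.close()
print(open(path).read().count("record"))  # → 5
```

The sequencing caveat Berman raises is visible even here: correctness depends on the queue preserving write order and on `close()` draining it fully before the job exits.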

HPCwire will publish Part 2 in the near future.
