HPC in Life Sciences 2020 Part 1: Rise of AMD, Data Management’s Wild West, More 

By John Russell

May 20, 2020

Given the disruption caused by the COVID-19 pandemic and the massive enlistment of major HPC resources to fight it, this is an especially appropriate time to review the state of HPC use in life sciences. It's something HPCwire has done yearly with the Bioteam consultancy, whose "boots-on-the-street" perspective has a practical, insider feel. No surprise, AI figures more prominently in their practice this year, a change marked by Bioteam's recent hiring of Fernanda Foertter, a former AI guru at Nvidia.

Ari Berman, BioTeam

This year’s conversation included Ari Berman, CEO of Bioteam, Chris Dagdigian, one of Bioteam’s founders, and Mike Steeves, senior scientific consultant. On the docket were processor diversity (AMD is winning while Arm hasn’t made much headway yet in LS); storage and data management (get ready to pay for what you store!); network needs and practices (perhaps not surprisingly there’s a split in practice here between academia and industry); and AI (the mashup of hype, tire-kicking, and real use continues). Part one presented here tackles processors and storage.

But first a brief prologue.

Life sciences has traditionally been a late adopter of HPC technology. The requisite HPC applications (large, tightly-coupled) weren’t there. Also, the healthcare community tends to be conservative (do no harm) preferring proven, cost-effective, and more easily supported IT. Data analytics was the early breakthrough, driven by DNA sequencing’s need for massive parallel processing. Predictive simulation remained more a work-in-progress, hobbled by gaps in basic biology understanding and the lack of sufficiently rigorous (or comprehensive) mathematical descriptions of intricate biological systems.

That picture has changed dramatically during recent years. Not only has the proliferation of instruments generating vast amounts of data mushroomed – recently led by cryo-EM and other imaging technologies – but also a steady deciphering of functional genomics and basic biology has produced more precise descriptions of biological processes that can be turned into improved simulations useful in research and the clinic. Of course, molecular modeling techniques have also advanced. Rather quickly the breadth of computational power used in life sciences expanded.

Using CANDLE deep learning to extract protein folding intermediate states. | National Cancer Institute

Now, AI has burst onto the scene, transforming how we think about HPC and becoming a formidable force in life sciences. Not only is AI critical for making sense of the biomedical data flood, but it has also become an important catalyst fusing data analytics and simulation into a blended approach that's proving remarkably effective. It is possible, for example, to use AI techniques on large video datasets from 'living' experiments to derive some of the first-principles ODEs/PDEs that describe mechanistic simulation. (See HPCwire coverage, ISC Keynote: The Algorithms of Life – Scientific Computing for Systems Biology)

Pretty clearly, bio-computational research has come a long way in a fairly short time. In 2015, Bioteam’s Berman estimated ~15-25 percent of biomedical researchers used HPC in one or another form. The next year it was up to ~30-50 percent.

“The last time we talked (2019), we thought it would be up to about 75 percent,” said Berman in this year’s HPC-in-LS review. “Today, I don’t think there is a single modern life sciences research or diagnostics protocol that doesn’t use advanced computing in some way. I would be willing to say somewhere between 95-100 percent of applications require advanced computing in some manner. Some of the older-style research [that uses] common plate readers and relies on minor statistical analysis probably doesn’t [require HPC], but I think those days are going by the wayside.

“I say all of this with a subtext that not everyone knows they’re using HPC. The applications, analytics stacks, etc. that front HPC systems make it look like researchers are just using another website or using an application that came with an instrument but it really is using sort of these back-end very scalable systems.”

It may be useful to note a language shift. It used to be that the HPC community and infrastructure were quite distinct from enterprise infrastructure and “non-science” users. Today those worlds are colliding, and our ideas about what constitutes advanced computing are changing. AI and accelerated computing are the drivers shaping what’s become a more blended infrastructure. Very recently it’s become common to refer to the datacenter, at least conceptually, as the ‘computational unit’ able to handle a wide variety of previously distinct applications, HPC and AI among them. “Advanced-scale computing” has become the broader umbrella, with HPC as one element within it.

In one sense life science research embodies this trend as its computational needs have expanded alongside advances in computational technology itself. What follows is part one of our annual two-part look at HPC/AI in life sciences.

PROCESSOR WARS – NOT EXACTLY

The age of CPU dominance isn’t over, but the battle for mindshare seems diminished as bioresearch infrastructure consumers chase price/performance in CPUs, which now play a reduced role in heterogeneous architectures. Attention has shifted to GPUs – more numerous on a per-system basis and perhaps more impactful in the current scheme of things. Conversely, cutting-edge AI-focused accelerators are being aggressively piloted only at big DoE labs, and they still need time to mature and settle into niches before gaining wide LS acceptance. To a significant degree these trends in processor use are continuations of last year’s trends.

“The biggest change that we’ve seen is for people buying on premise equipment or the large HPC deals. All of the momentum right now is behind AMD; it has the roadmap, the benchmarking, and pricing,” said Dagdigian. “Intel doesn’t really have the greatest answer for some of these things.”

This accords with AMD’s resurgence in high-end servers broadly and in supercomputers. That said, many are watching Intel’s realignment under CEO Bob Swan and waiting to see how the forthcoming processor (Sapphire Rapids) and Xe GPU line perform. The Aurora supercomputer, featuring both Intel GPUs and CPUs, will be the showcase.

Berman reports the success of DoE’s Summit supercomputer, including its ongoing work on COVID-19 research, has drawn positive attention to IBM in the life sciences community. That said, mainstream adoption of Power microprocessor-based systems has been slow, and IBM hasn’t said much about upgrading the Power9 chips or provided details for Power10. Also, the OpenPOWER Foundation has moved under the Linux Foundation’s authority. Time will tell. Berman said, “IBM is really pushing the quantum areas and their cloud architecture and services and software services as a company.” HPC, or at least Power, could wind up a stepchild.

Interestingly, Arm’s resurgence in HPC hasn’t yet spread to life sciences. “Life scientists tend to be a little timid when it comes to new architectures. Life sciences is going to wade into the Arm territory when it’s [more established]. The resurgence in HPC in general is real and you may hear some announcements around the time of SC2020,” said Berman.

FPGA adoption in life sciences has been slow, according to Bioteam, despite efforts to abstract away the hardware description languages that make FPGAs hard to use and to develop Python libraries that can target them. “People just aren’t really seeing the bang for the buck there or really understanding how to incorporate them,” said Berman.

The GPU market is suddenly most interesting. Intel’s plunge into GPUs and AMD’s wins in big HPC systems using both AMD processors and AMD GPUs (Radeon) bear watching, according to Bioteam. All agree Nvidia remains solidly ahead, and its introduction last week of the Ampere A100 GPU strengthens that position. But price-performance plays well in life sciences, and AMD has the edge there. So far AMD has been reluctant to compete with Nvidia in high-end GPU markets, but perhaps not for long. It is noteworthy that Nvidia chose an AMD CPU (64-core Epyc) for its DGX A100 system. And CUDA 11 offers Arm64 support. Murky waters here.

Then there’s Intel’s much-watched GPU gambit.

“I’ll call it a strange surprise, Intel’s forging into the GPU space with Ponte Vecchio (the top SKU in its forthcoming GPU line). It looks like it can hold its own against the others, although, you know, Nvidia is still far ahead. Intel’s whole play is to create a unified platform out of CPU, GPU, storage, memory and software using oneAPI. The promise is that someone could essentially write one piece of software using oneAPI and have it run, without any changes to your code, on a GPU or a system-level CPU. That’s very interesting in some aspects,” said Berman.

At the moment, use of exotic accelerators like the Cerebras wafer-scale chip is a priority only at big testing centers such as Argonne National Laboratory, which in fact is aggressively testing as many new AI accelerator chips as it can get its hands on, according to Rick Stevens, ANL’s associate laboratory director for computing, environment and life sciences. The Cerebras chip is enormous – 1.2 trillion transistors, 400,000 AI cores. ANL has already put the Cerebras chip to work on COVID-19. More mainstream life science researchers will wait.

Cerebras AI Chip

Berman joked, “The Cerebras chip is like the size of my head, right? It is an amazing engineering feat and also a big stunt. Back to your question about these chips generally. Other than really bleeding-edge problems, like some of the cancer problems they’re trying to solve in Cancer Moonshot or real-time processing of diagnostic data against known data, those sorts of things that are being worked on, there’s not a lot of application for [these chips] yet in our space. Keep in mind, you know, it took life sciences 20 years to adopt GPUs massively.”

Steeves added, “Even with GPUs, we want to get new, exciting, and interesting [devices], but then you have to start rewriting code to take advantage of them. Suddenly you see a lot less interest and demand. It’s probably going to take a few years for someone to put together that killer app for a particular hardware accelerator, or perhaps a paper that’s so interesting that I want to try it and the software’s available.”

Berman noted, “In life sciences, a major step forward in utilizing things like coprocessors and better algorithms happens when someone else does the hard work of developing them. That’s because NIH doesn’t fund things like that. You know, grants aren’t going to cover multi-year development arcs for optimizing algorithms for GPUs. The only thing they cover is the results coming out of work that could be published. So the incentive also isn’t there.”

STORAGE & DM – TAMING THE WILD WEST?

Storage and data management are perennial challenges in life sciences. Lattice light-sheet microscopes, for example, can generate on the order of 2 or 3 terabytes in a couple of hours, and they are just one of many imaging instruments generating vast datasets. Fill a room or floor with these kinds of instruments and pretty quickly you’ve generated a lot of data. Today, though, the problem isn’t so much selecting and deploying needed storage capacity – that’s mostly a solved problem, according to Bioteam. It’s managing the data.
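The arithmetic behind that data flood is easy to sketch. The per-run figures below come from the paragraph above; the ten-instrument room is a hypothetical for illustration:

```python
# Back-of-envelope sustained write rates implied by the figures above:
# a lattice light-sheet microscope producing ~2-3 TB over ~2 hours.
TB = 1e12  # bytes (decimal terabyte)

def sustained_mb_per_s(terabytes: float, hours: float) -> float:
    """Average write rate, in MB/s, for a run of the given size and duration."""
    return terabytes * TB / (hours * 3600) / 1e6

one = sustained_mb_per_s(2.5, 2)  # midpoint of the 2-3 TB range
print(f"one instrument:  ~{one:.0f} MB/s sustained")               # ~347 MB/s

# A hypothetical room of 10 such instruments running concurrently:
print(f"ten instruments: ~{10 * one / 1000:.1f} GB/s aggregate")   # ~3.5 GB/s
```

Multi-GB/s aggregate ingest from a single room is exactly the regime where capacity is easy to buy but the resulting data sprawl becomes the real problem.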

Think about the growing use of machine learning and deep learning to mine all of this data for meaningful models and traditional analytics. The old garbage-in, garbage-out mantra applies. Beyond data quality, there’s all the meta-tagging that needs to be accomplished and tracked. Also, the data needs to be broadly accessible to collaborators and other researchers while maintaining security and confidentiality.

Chris Dagdigian, Bioteam

Focusing on storage policy, Dagdigian offered three observations and sounded almost like a revival tent preacher:

  • “There’s a practice I fully intend to steal from the DoE and the supercomputing sites. When NERSC rolled out its new all-flash 30-petabyte NVMe storage array, one of the striking things about the announcement is they are moving to no home directories at all, or no home directories of any considerable size for anybody. 100% of the new petabyte-scale storage is being allocated. That’s something that I want to see pushed more in the enterprise. One of the single biggest problems with the data mess we have is too many people are storing crap in their own directories [instead of] project-based, team-based stuff. It’s to the point where individual scientists might have 10-20 terabytes of stuff sitting under a home directory. That’s not findable. It’s not easily shareable. We are now at the point where personal storage is no longer on the table. If you want more than 500 gigs, we’re allocating it and it’s got to be from a project. It’s got to be in a particular area, and it’s going to follow a naming convention, a data standards convention, and you’re going to have to justify the allocation.
  • “The second thing – I think I stole this from Amazon’s messaging around their shared responsibility model – is a phrase we’ve started to use in an assessment report that we wrote a couple months ago. [It’s] that storage is a consumable resource and should be treated exactly the same way as an expensive laboratory consumable: something that’s no longer free or unlimited, no longer on demand. Just like you’re budgeting for your reagents and your assay kits and other stuff you’re buying for your lab. That means scientists are budgeting for it, planning for it, and more importantly, they have to justify their consumption.
  • “The third and final thing is around data management, data organization, and data curation. I’ll repeat my standard buzz phrase: ‘If you’ve got a petabyte and you don’t have a full-time human being managing or curating the data, not only are you wasting more in hardware cost [than] the cost of that data curator, but you’re also setting yourself up for a lot of gnarly data management, data discovery, and data dissemination issues down the road.’ Bioteam has seen [so] many storage environments in scientific settings where it almost feels like the Wild West – no rules, no standards, no curation, very few SOPs. I feel like in 2020 the unmanaged, Wild West petascale storage environment should be the exception, not the rule, and yet it’s still the rule.”
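Dagdigian's "storage as a consumable" framing lends itself to a simple budgeting sketch. The dollar rate and growth figures below are placeholder assumptions for illustration, not quoted prices:

```python
# Minimal sketch of budgeting storage like a lab consumable.
# The $/TB/month rate and the growth rate are placeholder assumptions.

def storage_budget(start_tb: float, monthly_growth_tb: float,
                   usd_per_tb_month: float = 20.0, months: int = 12) -> float:
    """Total spend over `months`, charging each month's footprint as it grows."""
    total, tb = 0.0, start_tb
    for _ in range(months):
        total += tb * usd_per_tb_month  # this month's footprint, billed
        tb += monthly_growth_tb         # footprint grows for next month
    return total

# A hypothetical project starting at 100 TB and adding 10 TB/month:
cost = storage_budget(start_tb=100, monthly_growth_tb=10)
print(f"year-one storage line item: ${cost:,.0f}")  # $37,200
```

Once storage shows up as a per-project line item like this, the justification and allocation discipline Dagdigian describes tends to follow.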

The data management religion has been preached for years. It will be interesting to see whether major changes actually occur.

On the storage technology front, Berman said, “Not a lot has changed in the last year except this very interesting push-and-pull war between next-generation file systems like WekaIO and Vast Data, which have surged into this space. It’s a different way of approaching data storage and scalability and, more importantly, IO availability, especially in computer architecture. The fascinating thing about those particular architectures for life sciences is they help deal with the high data diversity and IO requirements of various workflows and analytics that come from the wide diversity of data collection used throughout our domain.

“We’ve always said that Lustre is super hard for us to use because we often have millions of small files, and Lustre doesn’t do that well. GPFS or Spectrum Scale (IBM) is slightly better if you know how to tune it for that and you don’t have too much [small-file IO]. Outside of that, there wasn’t anything you could do until these two things (WekaIO and Vast Data) came along, except for dealing with high-performance local scratch NVMe in nodes, which most people didn’t know how to use.

“So that’s been sort of an interesting shift, now that Optane (Intel) and 3D XPoint (Micron) have become more mainstream and possibly more affordable. That turns into yet another thing that can be wrangled in the data and IO space, especially as a scratch layer that is even faster than anything else out there. So, you know, slow memory but very fast local storage, and we’re testing some of that out now. It’s a very interesting space that I think is ripe for yet another innovation.”

Mainstay HPC storage vendors DDN (Lustre) and IBM (Spectrum Scale) still handle the lion’s share of the market. Cray, now part of HPE, acquired the ClusterStor line from Seagate in 2017 and debuted a new version, the ClusterStor E1000, last fall. Berman suggests the traditional storage field generally, and its vendors, are under pressure from emerging software-defined storage alternatives. He says solid state drives continue to displace platter-based technologies. Again, these trends are largely continuations from the past year.

An interesting relative newcomer is Intel’s distributed asynchronous object store (DAOS), which will be used in the Aurora supercomputer, scheduled to be the first U.S. exascale system and based at ANL. Aurora will feature Intel CPUs and GPUs (Ponte Vecchio). Intel describes DAOS as “an open source software-defined scale-out object store that provides high bandwidth, low latency, and high I/O operations per second (IOPS) storage containers to HPC applications.”

Said Berman, “It’s too new to say much about DAOS, but the concept of asynchronous IO is very interesting. It’s essentially a queue mechanism at the system write level, so system waits in the processors don’t have to happen while a confirmed write-back comes from the disks. Asynchronous IO allows jobs to keep running while you’re waiting on storage to happen, to a limit of course. That would really improve the data input-output pipelines in those systems. It’s a very interesting idea. I like asynchronous data writes and asynchronous storage access, but I can see corruption very easily creeping into data without very careful sequencing. It will be interesting to watch. If it works it will be a big innovation.”
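The queuing idea Berman describes can be illustrated with a toy sketch. To be clear, this is not the DAOS API, just the general pattern: the compute loop enqueues its output and keeps running while a background writer drains the queue to storage.

```python
# Toy illustration of asynchronous writes: compute enqueues results and
# continues; a background writer performs the "slow" durable writes.
# This sketches the queuing concept only; it is not DAOS.
import queue
import threading

write_queue: "queue.Queue" = queue.Queue()
persisted = []  # stands in for the durable storage layer

def writer():
    """Drain the queue until a None sentinel arrives."""
    while True:
        buf = write_queue.get()
        if buf is None:
            break
        persisted.append(buf)  # the (slow) confirmed write-back

t = threading.Thread(target=writer)
t.start()

# The job keeps computing without blocking on each write.
for step in range(5):
    write_queue.put(f"step-{step}".encode())  # returns immediately

write_queue.put(None)  # flush and stop the writer
t.join()
print(len(persisted), "buffers persisted")  # 5 buffers persisted
```

The hazard Berman flags is visible even here: `put()` returns before the data is durable, so a crash between enqueue and write-back loses acknowledged data unless the sequencing is handled very carefully.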

HPCwire will publish Part 2 in the near future.
