HPC in Life Sciences 2020 Part 1: Rise of AMD, Data Management’s Wild West, More 

By John Russell

May 20, 2020

Given the disruption caused by the COVID-19 pandemic and the massive enlistment of major HPC resources to fight the pandemic, it is especially appropriate to review the state of HPC use in life sciences. This is something HPCwire has been doing yearly with the Bioteam consultancy whose “boots-on-the-street” perspective has a practical, insider feel. No surprise, AI figures more prominently in their practice this year, a change marked by Bioteam’s recent hiring of Fernanda Foertter, a former AI guru at Nvidia.

Ari Berman, BioTeam

This year’s conversation included Ari Berman, CEO of Bioteam, Chris Dagdigian, one of Bioteam’s founders, and Mike Steeves, senior scientific consultant. On the docket were processor diversity (AMD is winning while Arm hasn’t made much headway yet in LS); storage and data management (get ready to pay for what you store!); network needs and practices (perhaps not surprisingly there’s a split in practice here between academia and industry); and AI (the mashup of hype, tire-kicking, and real use continues). Part one presented here tackles processors and storage.

But first a brief prologue.

Life sciences has traditionally been a late adopter of HPC technology. The requisite HPC applications (large, tightly-coupled) weren’t there. Also, the healthcare community tends to be conservative (do no harm) preferring proven, cost-effective, and more easily supported IT. Data analytics was the early breakthrough, driven by DNA sequencing’s need for massive parallel processing. Predictive simulation remained more a work-in-progress, hobbled by gaps in basic biology understanding and the lack of sufficiently rigorous (or comprehensive) mathematical descriptions of intricate biological systems.

That picture has changed dramatically during recent years. Not only has the proliferation of instruments generating vast amounts of data mushroomed – recently led by cryo-EM and other imaging technologies – but also a steady deciphering of functional genomics and basic biology has produced more precise descriptions of biological processes that can be turned into improved simulations useful in research and the clinic. Of course, molecular modeling techniques have also advanced. Rather quickly the breadth of computational power used in life sciences expanded.

Using CANDLE deep learning to extract protein folding intermediate states. | National Cancer Institute

Now, AI has burst onto the scene, transforming how we think about HPC and becoming a formidable force in life sciences. Not only is AI critical for making sense of the biomedical data flood, but it has also become an important catalyst fusing data analytics and simulation into a blended approach that’s proving remarkably effective. It is possible, for example, to use AI techniques on large video datasets from ‘living’ experiments to derive some of the first-principles ODE/PDEs needed to describe mechanistic simulation. (See HPCwire coverage, ISC Keynote: The Algorithms of Life – Scientific Computing for Systems Biology)

Pretty clearly, bio-computational research has come a long way in a fairly short time. In 2015, Bioteam’s Berman estimated ~15-25 percent of biomedical researchers used HPC in one or another form. The next year it was up to ~30-50 percent.

“The last time we talked (2019), we thought it would be up to about 75 percent,” said Berman in this year’s HPC-in-LS review. “Today, I don’t think there is a single modern life sciences research or diagnostics protocol that doesn’t use advanced computing in some way. I would be willing to say somewhere between 95-100 percent of applications require advanced computing in some manner. Some of the older-style research [that uses] common plate readers and relies on minor statistical analysis probably doesn’t [require HPC], but I think those days are going by the wayside.

“I say all of this with a subtext that not everyone knows they’re using HPC. The applications, analytics stacks, etc. that front HPC systems make it look like researchers are just using another website or using an application that came with an instrument but it really is using sort of these back-end very scalable systems.”

It may be useful to note a language shift. It used to be that the HPC community and infrastructure were quite distinct from enterprise infrastructure and “non-science” users. Today those worlds are in collision and our ideas about what constitutes advanced computing are changing. AI and accelerated computing are the drivers shaping what’s become a more blended infrastructure. Very recently it’s become common to refer to the datacenter, at least conceptually, as the ‘computational unit’ able to handle a wide variety of previously distinct applications including HPC/AI. Today what constitutes advanced scale computing seems also to embrace HPC.

In one sense life science research embodies this trend as its computational needs have expanded alongside advances in computational technology itself. What follows is part one of our annual two-part look at HPC/AI in life sciences.

PROCESSOR WARS – NOT EXACTLY

The age of CPU dominance isn’t over, but the battle for mindshare seems diminished: bioresearch infrastructure buyers chase price/performance in CPUs, which now play a reduced role in heterogeneous architectures. Attention has shifted to GPUs – more numerous on a per-system basis and perhaps more impactful in the current scheme of things. Meanwhile, cutting-edge AI-focused accelerators are only being aggressively piloted at big DoE labs, and still need time to mature and settle into niches before gaining wide LS acceptance. To a significant degree these trends in processor use are continuations of last year’s trends.

“The biggest change that we’ve seen is for people buying on premise equipment or the large HPC deals. All of the momentum right now is behind AMD; it has the roadmap, the benchmarking, and pricing,” said Dagdigian. “Intel doesn’t really have the greatest answer for some of these things.”

This accords with AMD’s resurgence in high-end servers broadly and in supercomputers. That said, many are watching Intel’s realignment under CEO Bob Swan and waiting to see how the forthcoming Sapphire Rapids processors and Xe GPU line perform. The Aurora supercomputer, featuring both Intel CPUs and GPUs, will be the showcase.

Berman reports the success of DoE’s Summit supercomputer, including its ongoing work on COVID-19 research, has drawn positive attention for IBM in the life sciences community. That said, mainstream adoption of Power microprocessor-based systems has been slow, and IBM hasn’t said much about upgrading the Power9 chips or provided details for Power10. Also, the OpenPOWER Foundation has moved under the Linux Foundation’s authority. Time will tell. Berman said, “IBM is really pushing the quantum areas and their cloud architecture and services and software services as a company.” HPC, or at least Power, could wind up a stepchild.

Interestingly, Arm’s resurgence in HPC hasn’t yet spread to life sciences. “Life scientists tend to be a little timid when it comes to new architectures. Life sciences is going to wade into the Arm territory when it’s [more established]. The resurgence in HPC in general is real and you may hear some announcements around the time of SC2020,” said Berman.

FPGA adoption in life sciences has been slow according to Bioteam despite abstraction efforts around hardware description languages to make them easier to use and development of Python libraries that could use them. “People just aren’t really seeing the bang for the buck there or really understanding how to incorporate them,” said Berman.

The GPU market is suddenly most interesting. Intel’s plunge into GPUs and AMD’s wins in big HPC systems using both AMD processors and AMD GPUs (Radeon) bear watching, according to Bioteam. All agree Nvidia remains solidly ahead, and its introduction last week of the Ampere A100 GPU strengthens that position. But price-performance plays well in life sciences and AMD has the edge there. So far AMD has been reluctant to compete with Nvidia in high-end GPU markets, but perhaps not for long. It is noteworthy that Nvidia chose an AMD CPU (64-core Epyc) for its DGX-A100 system. And CUDA 11 offers Arm64 support. Murky waters here.

Then there’s Intel’s much-watched GPU gambit.

“I’ll call it a strange surprise, Intel’s forging into the GPU space with Ponte Vecchio (the top SKU in its forthcoming GPU line). It looks like it can hold its own against the others, although, you know, Nvidia is still far ahead. Intel’s whole play is to create a unified platform out of CPU, GPU, storage, memory and software using oneAPI. The promise is that someone could essentially write one piece of software using oneAPI and have it processed equally well, without any changes to your code, on a GPU or a system-level CPU. That’s very interesting in some aspects,” said Berman.

At the moment, exotic accelerators like the Cerebras wafer-scale chip are only a priority at big testing centers such as Argonne National Laboratory, which in fact is aggressively testing as many new AI accelerator chips as it can get its hands on, according to Rick Stevens, ANL’s associate laboratory director for computing, environment and life sciences. The Cerebras chip is enormous – 1.2 trillion transistors, 400,000 AI cores. ANL has already put the Cerebras chip to work on COVID-19. More mainstream life science researchers will wait.

Cerebras AI Chip

Berman joked, “The Cerebras chip is like the size of my head, right? It is an amazing engineering feat and also a big stunt. Back to your question about these chips generally. Other than really bleeding-edge problems, like some of the cancer problems they’re trying to solve in Cancer Moonshot or real-time processing of diagnostic data against known data, those sorts of things that are being worked on, there’s not a lot of application for [these chips] yet in our space. Keep in mind, you know, it took life sciences 20 years to adopt GPUs massively.”

Steeves added, “Even with GPUs, we want to get new, exciting and interesting [devices], but then you have to start rewriting codes to take advantage of them. Suddenly you see a lot less interest and demand. It’s probably going to take a few years for someone to put together that killer app for a particular hardware accelerator, or perhaps a paper that’s so interesting that I want to try it and the software’s available.”

Berman noted, “In life sciences, a major step forward in utilizing things like coprocessors and better algorithms happens when someone else does the hard work of developing them. That’s because NIH doesn’t fund things like that. You know, grants aren’t going to cover multi-year development arcs for optimizing algorithms for GPUs. The only thing they cover is the results coming out of work that could be published. So the incentive also isn’t there.”

STORAGE & DM – TAMING THE WILD WEST?

Storage and data management are perennial challenges in life sciences. Lattice light-sheet microscopes, for example, can generate on the order of 2 or 3 terabytes in a couple of hours, and they are just one of many imaging instruments generating vast datasets. Fill a room or floor with these kinds of instruments and pretty quickly you’ve generated a lot of data. Today, though, the problem isn’t so much selecting and deploying needed storage capacity – that’s mostly a solved problem according to Bioteam. It’s managing the data.
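To put those instrument rates in context, a quick back-of-envelope calculation (the 3 TB / 2 hour figures are illustrative, drawn from the range quoted above) shows the sustained throughput a single microscope implies:

```python
# Back-of-envelope sustained write rate for one lattice light-sheet run.
# The inputs are illustrative, taken from the "2 or 3 terabytes in a
# couple of hours" range quoted in the article, not a measured benchmark.
def sustained_rate_mb_s(terabytes: float, hours: float) -> float:
    """Average throughput in MB/s (decimal units: 1 TB = 1e6 MB)."""
    return terabytes * 1e6 / (hours * 3600)

rate = sustained_rate_mb_s(3.0, 2.0)
print(f"{rate:.0f} MB/s")  # ≈ 417 MB/s, sustained, per instrument
```

Multiply that by a room full of instruments and the aggregate ingest rate reaches multiple gigabytes per second, which is why capacity planning is comparatively straightforward while curation and management are not.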

Think about the growing use of machine learning and deep learning to mine all of this data for meaningful models and traditional analytics. The old garbage in-garbage out mantra applies. Beyond data quality, there’s all the meta-tagging that needs to be accomplished and tracked. Also, the data needs to be broadly accessible to collaborators and other researchers while maintaining security and confidentiality.

Chris Dagdigian, Bioteam

Focusing on storage policy, Dagdigian offered three observations and sounded almost like a revival tent preacher:

  • “There’s a practice I fully intend to steal from the DoE and the supercomputing sites. When NERSC rolled out its new all-flash 30 petabyte NVMe storage array, one of the striking things about the announcement is they are moving to no home directories at all, or no home directories of any considerable size for anybody. 100% of the new petabyte scale storage is being allocated. That’s something that I want to see pushed more in enterprise. One of the single biggest problems with the data mess we have is too many people are storing crap in their own directories, project-based, team-based stuff. It’s to the point where individual scientists might have 10-20 terabytes of stuff sitting under a home directory. That’s not findable. It’s not easily shareable. We are now at the point where personal storage is no longer on the table. If you want more than 500 gigs, we’re allocating it and it’s got to be from a project. It’s got to be in a particular area, and it’s going to follow a naming convention, a data standards convention, and you’re going to have to justify the allocation.
  • “The second thing is – I think I stole this from Amazon’s messaging around their shared responsibility model – is a phrase we’ve started to use in an assessment report that we wrote a couple months ago. [It’s] that storage is a consumable resource and it should be treated exactly the same way as an expensive laboratory consumable, something that’s no longer free or unlimited, it’s no longer on demand. Just like you’re budgeting for your reagents and your assay kits and other stuff you’re buying for your lab. That means scientists are budgeting for it, planning for it, and more importantly, they have to justify their consumption.
  • “The third and final thing is around data management, data organization, and data curation. I’ll repeat my standard buzz phrase: ‘If you’ve got a petabyte and you don’t have a full-time human being managing or curating the data, not only are you wasting more in hardware cost than the cost of that data curator, but you’re also setting yourself up for a lot of gnarly data management, data discovery, and data dissemination issues down the road.’ Bioteam has seen more storage environments in scientific settings where it almost feels like it’s the Wild West – no rules, no standards, no curation, very few SOPs. I feel like in 2020 the unmanaged, Wild West petascale storage environment should be the exception, not the rule, and it’s still the rule.”
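The allocation-and-naming policy sketched in the first bullet can be expressed as a small audit script. The 500 GB cap comes from the quote above; the naming regex and directory names are hypothetical illustrations, not BioTeam’s or NERSC’s actual rules:

```python
import re

# Illustrative sketch of the policy Dagdigian describes: personal space
# is capped, and anything larger must live in a project area that follows
# a naming convention. The regex and example names are assumptions.
HOME_CAP_BYTES = 500 * 10**9  # 500 GB personal cap, per the quote above
PROJECT_NAME = re.compile(r"^proj-[a-z0-9]+-\d{4}$")  # e.g. proj-genomics-2020

def violations(home_usage: dict, project_dirs: dict) -> list:
    """Return human-readable policy violations.

    home_usage: bytes consumed per home-directory owner
    project_dirs: bytes consumed per project-area directory name
    """
    out = []
    for user, used in home_usage.items():
        if used > HOME_CAP_BYTES:
            out.append(f"{user}: {used / 1e12:.1f} TB in $HOME exceeds 500 GB cap")
    for name in project_dirs:
        if not PROJECT_NAME.match(name):
            out.append(f"{name}: project directory violates naming convention")
    return out

# A scientist with 12 TB in $HOME and an unlabeled dump both get flagged.
print(violations({"alice": 12 * 10**12}, {"scratch_stuff": 10**12}))
```

A real deployment would pull the usage numbers from quota accounting (e.g. filesystem quota reports) rather than hard-coded dicts, but the point is the same: allocation becomes something that is requested, justified, and checked, not accumulated silently.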

The data management religion has been preached for years. It will be interesting to see whether major changes do indeed occur.

On the storage technology front Berman said, “Not a lot has changed in the last year except this very interesting push and pull war between next generation file systems like WekaIO and Vast Data which have made a surge into this space. It’s a different way of approaching data storage and scalability and more importantly, IO availability, especially in computer architecture. The fascinating thing about those particular architectures for life sciences is they help deal with the high data diversity and IO requirements of various workflows and analytics that come from the wide diversity of data collection used throughout our domain.

“We’ve always said that Lustre is super hard for us to use because we often have millions of small files and Lustre doesn’t do that well. GPFS or Spectrum Scale (IBM) is slightly better if you know how to tune it for that and you don’t have too many [small files]. Outside of that, there wasn’t anything you could do until these two things (WekaIO and Vast Data) came up, except for dealing with high-performance local scratch NVMe in nodes, which most people didn’t know how to use.

“So that’s been sort of an interesting shift, and now Optane (Intel) and 3D XPoint (Micron) have become more mainstream and possibly more affordable. That turns into yet another thing that can be wrangled in sort of the data and IO space, especially as a scratch layer that is even faster than anything else out there. So, you know, slow memory but very fast local storage, and we’re testing some of that out now. It’s a very interesting space that I think is ripe for yet another innovation.”

Mainstay HPC storage vendors DDN (Lustre) and IBM (Spectrum Scale) still handle the lion’s share of the market. Cray, now part of HPE, acquired the ClusterStor line from Seagate in 2017 and debuted a new version, the ClusterStor E1000, last fall. Berman suggests the traditional storage field and its vendors are under pressure from emerging software-defined storage alternatives. He says solid state drives continue to displace platter-based technologies. Again, these trends are largely continuations from the past year.

An interesting relative newcomer is Intel’s distributed asynchronous object store (DAOS) which will be used in the Aurora supercomputer scheduled to be the first U.S. exascale system and based at ANL. It will feature Intel CPUs and GPUs (Ponte Vecchio). Intel describes DAOS as “an open source software-defined scale-out object store that provides high bandwidth, low latency, and high I/O operations per second (IOPS) storage containers to HPC applications.”

Said Berman, “It’s too new to say much about DAOS, but the concept of asynchronous IO is very interesting. It’s essentially a queue mechanism at the system write level, so system waits in the processors don’t have to happen while a confirmed write-back comes from the disks. Asynchronous IO allows jobs to keep running while you’re waiting on storage to happen, to a limit of course. That would really improve the data input-output pipelines in those systems. It’s a very interesting idea. I like asynchronous data writes and asynchronous storage access. I can see corruption very easily creeping into those kinds of writes and data without very careful sequencing. It will be interesting to watch. If it works it will be a big innovation.”
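The asynchronous-write idea Berman describes can be illustrated with a toy producer/consumer sketch. This is a conceptual analogy in Python’s asyncio, not DAOS’s actual API: compute keeps issuing writes into a bounded queue while a storage task drains it, and the queue bound is the “to a limit” he mentions, since a full queue makes the producer wait until storage catches up.

```python
import asyncio

# Toy sketch of asynchronous IO: compute proceeds while a bounded queue
# drains writes to (simulated) storage. Single-consumer FIFO draining
# preserves commit order, which is the "careful sequencing" concern.
async def compute_and_write(n_steps: int, queue_depth: int = 4) -> list:
    queue = asyncio.Queue(maxsize=queue_depth)
    committed = []

    async def storage_drain():
        while True:
            item = await queue.get()
            await asyncio.sleep(0)    # stand-in for slow device latency
            committed.append(item)    # write-back confirmed, in order
            queue.task_done()

    drain = asyncio.create_task(storage_drain())
    for step in range(n_steps):
        await queue.put(step)         # returns immediately unless the queue is full
        # ...compute for the next step proceeds here without waiting
        # for the write-back confirmation from the device...
    await queue.join()                # barrier: all writes confirmed
    drain.cancel()
    return committed

print(asyncio.run(compute_and_write(8)))  # [0, 1, 2, 3, 4, 5, 6, 7]
```

The sketch also hints at Berman’s corruption worry: with multiple drain tasks or reordering in the queue, commits could land out of sequence unless the storage layer enforces ordering explicitly.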

HPCwire will publish Part 2 in the near future.
