Cloud Sparking Rapid Evolution of Life Sciences R&D

By Bruce Maches

April 6, 2011

The increasing adoption of cloud computing in its various forms is having a dramatic impact on the way life science CIOs provision the applications their organizations need to support the R&D process.

Life science research has always been a complex and time-consuming endeavor requiring a broad and diverse set of systems and computer applications. The complexity and resource requirements of these specialized applications have grown so tremendously that many life science companies are struggling to afford to internally build, implement, and support the required systems and infrastructure.

Before the cloud, life science companies would simply throw more resources (storage, compute capacity, and people) at the problem. That is no longer possible in today’s economic climate, and cloud technologies can provide a viable alternative.

While I have written in more detail about many of these topics as part of my extended series of pharma R&D-focused entries over the last year, I want to provide some high-level background on the methodology of life science R&D and how IT supports that process. Where appropriate I will refer to the specific blog article where you can find additional information.

Although the R&D process differs by type of life science company (for example, drug research versus internally used devices such as heart valves versus external devices such as a knee brace), the overall steps are basically the same:

– Research/Discovery: identifying the disease mechanism, compound, target, genome or device needs

– Development: developing and refining the compound, therapeutic, or device

– Phase 1, 2, & 3 Clinical Trials: performing the requisite safety and effectiveness testing of the compound/device

– Regulatory Approval: seeking FDA consent to market the drug/device

– Post Approval Monitoring: tracking the use of the new product, outcomes, and any patient adverse events which must be reported to the FDA

All of these steps involve a multitude of activities and can take years of intensive effort to complete. Each area requires a variety of specialized systems to support the process and to capture and manage the data being generated. The number of systems and processes to support can be quite large depending on the type and complexity of the research being performed.

The life science industry is facing many other challenges besides increased technology needs and complexity. The entire industry is under intense revenue pressure as insurance companies and policy makers try to rein in ever-increasing health costs by demanding discounts or simply reducing reimbursement levels. Developing a major new drug costs close to $1 billion, takes around 10 years, and most potential drug research projects are abandoned at some point during development due to unexpected side effects or insufficient efficacy. Industry averages show that of 1,000 potential compounds identified in the discovery phase as worth pursuing, only one will make it through the process, be granted market approval, and actually be sold.

On top of this dismal success rate, the FDA has increased its scrutiny of new drugs, asking for more in-depth safety studies and trials before granting approval. The exclusivity period, or patent life, lasts only a set number of years, and the clock starts ticking long before approval is granted. On average, a newly approved drug has about 10 years on the market before competitors can start marketing their own versions.

Many large pharmaceutical companies are facing impending major revenue shortfalls as popular drugs come off patent and become open to generic competition. In 2011 alone, over a dozen name-brand drugs representing nearly $13 billion in revenue are coming off patent protection. By far the largest of these is Pfizer’s cholesterol drug Lipitor, which generates over $6 billion in revenue.

All of these factors have increased risk, depressing investment in new drug startups while increasing the pressure on existing companies to bring their new therapeutics to market as quickly as possible. These economic realities have forced life science companies to find ways to reduce costs while increasing productivity and reducing time to market for new products.

As for the life science CIO, there are multiple challenges to be dealt with on a daily basis. Not only are these CIOs being asked to do more with less, they must also deal with issues such as:

– An aging portfolio of legacy systems that must be kept in service because they contain critical data; the FDA requires that any data related to a drug be kept at least two years after it was last sold (think of the challenge of dealing with something like aspirin)

– Ensuring continued regulatory compliance for system related issues per FDA & HIPAA along with applicable foreign regulatory guidelines

– Pressure to reduce budgets while meeting increasing needs from the business for responsiveness and agility

– The need to reduce time-to-market for new therapies as every day of market exclusivity can potentially mean millions of dollars in revenue

– Continual vendor and technology changes in the marketplace

– Increasingly complex and resource-intensive applications, along with the explosion of data in R&D; the amount of data managed by life science companies nearly doubles every three months
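That last doubling rate compounds startlingly fast. A back-of-the-envelope sketch (the 10 TB starting point is purely hypothetical) shows why storage planning becomes so difficult:

```python
def projected_tb(start_tb, quarters):
    """Project storage needs if data volume doubles every quarter."""
    return start_tb * 2 ** quarters

# A hypothetical 10 TB research archive, doubling every three months,
# exceeds 2.5 petabytes within two years (8 quarters).
print(projected_tb(10, 8))  # 2560
```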

The combined IT spend of life science companies in the US is over $700 billion per year. Overall budgets have remained flat for the last two years, meaning that life science CIOs, like many of their counterparts in other industries, must do more with less while increasing flexibility and responsiveness to meet business needs.

Regulatory Compliance

One of the more complex and time-consuming issues that life science CIOs have to deal with is ensuring that their systems and applications comply with regulatory agency guidelines. In the US, the FDA not only provides guidance on how drugs are developed, manufactured, and marketed but also on how the supporting systems must be tested and validated to ensure that the data (i.e., results) they contain is accurate and can be trusted. In a nutshell, any system containing product data or clinical testing information, or used to create submissions for regulatory approval, must be validated as described in Title 21 of the Code of Federal Regulations (CFR) Part 11.
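To give a flavor of what "trusted and accurate" means technically, here is a minimal, purely illustrative Python sketch of the kind of tamper-evident, time-stamped audit trail Part 11 expects for electronic records. This is not a validated implementation, just a hint at the mechanics; the class and field names are my own invention:

```python
import hashlib
import json
from datetime import datetime, timezone

class AuditTrail:
    """Illustrative append-only audit trail with hash chaining."""

    def __init__(self):
        self.entries = []
        self._last_hash = "0" * 64  # genesis value for the chain

    def record(self, user, action, detail):
        entry = {
            "user": user,
            "action": action,
            "detail": detail,
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "prev_hash": self._last_hash,  # chain to the prior entry
        }
        # Hash the entry so any later edit becomes detectable.
        payload = json.dumps(entry, sort_keys=True).encode()
        entry["hash"] = hashlib.sha256(payload).hexdigest()
        self._last_hash = entry["hash"]
        self.entries.append(entry)
        return entry

    def verify(self):
        """Recompute the chain; False if any entry was altered."""
        prev = "0" * 64
        for e in self.entries:
            if e["prev_hash"] != prev:
                return False
            payload = {k: v for k, v in e.items() if k != "hash"}
            digest = hashlib.sha256(
                json.dumps(payload, sort_keys=True).encode()
            ).hexdigest()
            if digest != e["hash"]:
                return False
            prev = e["hash"]
        return True
```

A real validated system would add authenticated identities, electronic signatures, and controlled storage, but the chained-record idea is the core of why such data can be trusted.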

How to deal with compliance is a major concern whenever new applicable systems, or major modifications to existing validated ones, are being planned. Project plans for new systems must incorporate a significant amount of time to build compliance in, while upgrades to existing systems can require partial or complete re-validation. This can be very expensive, which is why many systems are not retired but kept in service much longer than would normally be expected. You can read more about this aspect of life science IT in this more thorough exploration of the topic.

Given the critical nature of regulatory compliance, life science CIOs must include it as a key piece of their overall cloud strategy or they will face a much more difficult road as they move to the cloud. Even worse, they may follow a path that impairs their ability to ensure compliance, leaving them open to issues being raised during FDA audits.

Impact of Cloud Computing

The first portion of this article was meant to give the reader an overall understanding of the current state of IT in the life sciences: the process, the issues, and some of the challenges. While the impact of cloud computing is similar across a number of industries, I will now address the effect it is having on life science companies specifically.

Costs

The cost considerations around delivering information technology services are certainly not unique to the life sciences. All CIOs continually deal with budget constraints while having to rationalize and justify the expense of designing, building, provisioning, and supporting the systems and applications their users need. In the life sciences, IT can consume an inordinate share of the total operational budget compared to other industries. This is not surprising given how data-driven the R&D process is. While I was at Pfizer Global R&D, the IT budget consumed over 15% of total R&D expenditures and about 8% of the organization’s total headcount. This high level of resource consumption, while perhaps necessary, does take away from the organization’s core mission: the science of drug discovery and development.

So, how can cloud computing help life science CIOs bring their costs down? There are a number of ways, and I will describe a few of them briefly below.

IaaS is a key component for reducing direct IT costs and the overall TCO of applications. Provisioning new hardware, along with the data center space, power, and support personnel it requires, is a major component of the CIO’s budget. Life science CIOs should have a clear understanding of their application and project portfolio so they can leverage the technology to reduce infrastructure costs. Using public cloud infrastructure for non-validated applications, especially ones that are very ‘bursty’ in their resource needs, can save thousands in hardware and support costs. Private cloud can be leveraged for those internal applications requiring a validated or controlled environment. With a defined application deployment strategy, life science CIOs can significantly reduce hardware and supporting infrastructure costs.
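To see why bursty workloads in particular favor on-demand capacity, consider a rough cost comparison. All of the figures below are invented assumptions for illustration, not vendor pricing:

```python
def on_prem_cost(peak_servers, cost_per_server_year):
    # On premise you must buy and run enough servers for the peak load,
    # even if they sit idle most of the year.
    return peak_servers * cost_per_server_year

def cloud_cost(server_hours, rate_per_hour):
    # On public IaaS you pay only for the hours actually consumed.
    return server_hours * rate_per_hour

# A hypothetical analysis workload that peaks at 100 servers but uses
# each one only about 200 hours per year.
print(on_prem_cost(100, 5000))       # 500000 per year owned
print(cloud_cost(100 * 200, 0.50))   # 10000.0 per year rented
```

The steadier a workload is, the more this gap narrows, which is exactly why a clear view of the application portfolio should drive the deployment decision.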

SaaS (as described below) can also be a major cost saver for the life sciences, and many vendors now offer specialized R&D applications as validated SaaS systems. A client of mine is a medium-sized biotech running clinical trials on its new drug. One of the major tasks in getting FDA approval is collecting, storing, and collating all of the data and documents that will be part of the NDA (New Drug Application). Normally a company like this would purchase a document management and publication system along with the supporting hardware and administrative resources. My client has chosen (as many others are) to use a SaaS-based document management tool to store its documents and a similarly provisioned publishing tool to pull the documents together and create the files that will be electronically submitted to the FDA. By doing this, not only was the company able to bring the functionality online almost immediately, it also saved tens of thousands of dollars over doing it in-house.

Disaster recovery and cloud backup are also areas where significant cost savings can be realized. Virtual images of critical applications can be built, allowing these systems to run temporarily in the cloud in case of a disaster. And one of the major components of data management, backup and restore, can be streamlined by making cloud backups part of the overall offsite backup strategy. This saves not only on manpower and hardware but also on backup tapes and offsite storage.
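One small, illustrative piece of such a backup strategy can be sketched in Python: a checksum manifest built before files are shipped offsite, so a restore from the cloud copy can be verified afterward. The upload itself would use whatever API the backup provider offers and is not shown here; the function names are my own:

```python
import hashlib
from pathlib import Path

def build_manifest(backup_dir):
    """Record a SHA-256 checksum for every file under backup_dir."""
    manifest = {}
    for path in sorted(Path(backup_dir).rglob("*")):
        if path.is_file():
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            manifest[str(path.relative_to(backup_dir))] = digest
    return manifest

def verify_restore(restored_dir, manifest):
    """Every file must be present and match its recorded checksum."""
    for name, digest in manifest.items():
        p = Path(restored_dir) / name
        if not p.is_file():
            return False
        if hashlib.sha256(p.read_bytes()).hexdigest() != digest:
            return False
    return True
```

For regulated data this kind of independent verification step also produces evidence that the restored copy is identical to what was backed up.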
 
Cloud computing not only reduces costs, it also allows the IT group to be much more agile and responsive to the organization and to deploy the systems and applications that support R&D more quickly. The ultimate goal is to get new therapies out the door faster, not only saving money but also increasing revenue by getting drugs to market sooner and extending the effective exclusivity period.

Regulatory Compliance

A significant amount of the expense and effort in any life science IT shop goes into ensuring that deployed systems and applications comply with the appropriate regulatory guidelines. While there are a number of ways cloud computing can assist with compliance, there are two major areas where cloud technologies can have the biggest impact.

Validating and supporting a system’s operating infrastructure is a major component of the compliance effort. Building a validated private cloud environment allows compliance costs to be leveraged across multiple systems, reducing hardware costs, data center footprint, and support requirements. Life science CIOs should examine their application portfolios to see which systems can be moved to virtual environments, and deploy new systems only in virtualized form.

Legacy Systems

The management and maintenance of legacy systems is a huge headache for the life science CIO. In large IT shops, a majority of personnel and funding goes to supporting systems that have been in service for 5, 10, even 15 years. It is not unusual to walk into a big pharma data center and see nameplates from companies past; Wang, DEC, and Compaq systems are just a few that I have seen recently. The primary cause is the time and expense that went into validating these systems when they were first brought online. There is usually budget for new systems, but not for re-validating upgraded systems or providing a validated method for transferring data from a system being retired to a new application. Instead, quite often, new systems are deployed and layered on top of existing applications, which must be kept alive because they contain critical information that must be available on demand.

While cloud computing is no miracle cure for this problem, a potential solution is to create a validated private cloud environment and build the appropriate VM flavors to move these legacy applications into a virtual state. This would allow the life science CIO to retire the old hardware and free up both support resources and data center space, as I touched on in this entry.
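A triage of the legacy portfolio along these lines might look like the following sketch. The application names, fields, and placement rules are invented purely for illustration:

```python
# Hypothetical legacy-application inventory; every value is made up.
LEGACY_APPS = [
    {"name": "LIMS-7",     "os": "Windows 2000", "validated": True},
    {"name": "DocArchive", "os": "OpenVMS",      "validated": True},
    {"name": "StatsTool",  "os": "Linux",        "validated": False},
]

def migration_target(app):
    # Validated systems stay inside the validated private cloud so the
    # environment qualification can be shared across all of them.
    if app["validated"]:
        return "validated private cloud (P2V image)"
    # Non-validated workloads can move to cheaper public IaaS capacity.
    return "public IaaS"

for app in LEGACY_APPS:
    print(app["name"], "->", migration_target(app))
```

In practice the decision involves far more factors (data retention obligations, hardware dependencies, vendor support), but even a simple classification like this helps identify which racks can actually be powered off.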

SaaS

Many firms that sell specialized applications into the life science space now provision their applications via the SaaS model. While SaaS has been around for quite a while in a variety of forms, it is the ability to quickly provide users with state-of-the-art applications that appeals to the life science CIO. Beyond the normal advantages of SaaS, life science CIOs can access application environments that are either pre-validated or in a semi-validated state, significantly reducing the resources and time required to provision a new application. Over the last year I’ve weighed in quite a bit on the role of SaaS solutions in the industry.

Impediments

So what are some of the factors impeding the adoption of cloud computing in the life sciences? Not surprisingly, they are the usual suspects: questions around security, protection of intellectual property, vendor lock-in, latency, etc. The biggest factor is how to deal with validation in an environment that is not under your control. Providing the necessary validation in a public cloud can be difficult at best, although some IaaS vendors are contemplating private cloud offerings that would include validation of the physical environment, and there are companies offering pre-validated software images that can be loaded on demand. This type of pre-validated hosted environment would be extremely appealing, as it greatly reduces the cost and effort of deploying new R&D applications, which is covered in more detail here.

Startups & Small Biotechs

Some of you may get the impression that only large companies can appropriately leverage cloud computing. While it is true that larger IT shops may gain more, small companies can also benefit from cloud technologies. Many smaller companies have a core strategy of fulfilling as much of their IT needs as possible from the cloud first and internally second. In a way, smaller companies have an advantage: they do not have an inventory of legacy applications or entrenched people and processes to deal with.

I have two small biotech clients using this strategy. One has taken it to an extreme: if you went into their offices, all you would find are several wireless access points and printers. There are no servers, no desktops, no phones, and no need for IT administrative support. Everybody brings in a laptop and cell phone, and all normal IT services are provided via IaaS or SaaS vendors. Even their phone system is a SaaS-provisioned VoIP PBX connected to their cell phones. My other client has different needs, but a major portion of both their infrastructure and applications is still accessed via the internet, a matter I discussed a while back.

In Closing

So, how is cloud computing impacting the life sciences IT organization? Certainly the changes being wrought by cloud computing are not unique to the life sciences, but cloud is changing how life science CIOs provide the systems their users need. Making cloud an integral part of their overall IT strategy has given life science CIOs a major tool for reducing costs, responding quickly to user needs, easing the burden of regulatory compliance, and supporting the complex process of life science R&D.

Many forward-thinking CIOs are already incorporating cloud into their current application portfolios and long-term strategic plans. Those with a clear and direct strategy for utilizing cloud, aggressively applying cloud technologies to their problems, will be much more successful than those flying by the seat of their pants.

Now what does the future hold? It would be great to look five years down the road and see how cloud has been adopted and utilized in the life sciences. Certainly cloud will be an integral part of the CIO’s portfolio, and a much larger portion of the budget will be allocated to cloud computing technologies than we see today. There can be no doubt that the life science IT shop of 2016 will be much different from today’s, not only in technology and infrastructure but also in staffing and required skill sets.

One thing is clear: cloud is a game changer and a technology that all life science companies need to embrace to remain competitive. Those that do not adapt will find themselves unable to keep up with those that do.

About the Author

Bruce Maches is a former Director of Information Technology for Pfizer’s R&D division, current CIO for BRMaches & Associates and a contributing editor for HPC in the Cloud.

He has written extensively about the role of cloud computing and related technologies in the life sciences in his series in our Behind the Cloud section.
