Bioinformatics Lab Pursues Personalized Drug Treatments

By Nicole Hemsoth

November 3, 2006

As bioinformatics becomes an indispensable tool for performing advanced research into the origins and treatment of disease, more labs, both commercial and non-commercial, are investing in high performance computing platforms to help speed drug discovery. The Computational Bioinformatics and Bioimaging Laboratory (CBIL) at Virginia Tech is one such organization. The lab is hosted by the school's Advanced Research Institute (ARI), a group devoted to multidisciplinary scientific research.

CBIL is also a participant in the cancer Biomedical Informatics Grid, or caBIG (http://cabig.nci.nih.gov), a voluntary network connecting individuals and institutions to enable the sharing of data and tools, creating a World Wide Web of cancer research. The caBIG software platform enables U.S. cancer researchers to analyze their molecular expression data and provides the research community access to a federated Grid of informatics tools. The Grid also provides access to data from the whole community, allowing for more comprehensive research. The goal is to speed the delivery of innovative approaches for the prevention and treatment of cancer.

CBIL represents one of the emerging trends in medical research — that of the specialized bioinformatics provider. Virginia Tech does not have a medical university, so CBIL partners with other institutions such as Georgetown Medical Center, Johns Hopkins University, NIH and local hospitals. These institutions offer their medical resources and raw data, while CBIL provides the computer and bioinformatics expertise to extract useful knowledge from that data.

Research at CBIL focuses on data modeling and molecular analysis of diseases such as cancer (breast, prostate and ovarian), muscular dystrophy and cardiovascular disease. The lab applies a technique called “molecular classification” to characterize these diseases, using large-scale computational methods to identify biomolecular markers associated with a disease state. The research allows scientists to determine how specific drugs affect those markers.
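
The article does not detail CBIL's actual algorithms, but a minimal sketch can show what a large-scale marker screen looks like in practice. The Python example below runs a per-gene two-sample t-test between disease and control samples with a Bonferroni correction; the data is synthetic and the study dimensions are hypothetical, so treat it as an illustration of the general technique rather than CBIL's pipeline.

```python
# Illustrative biomarker screen on synthetic data -- not CBIL's actual method.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n_genes, n_disease, n_control = 5000, 30, 30  # hypothetical study sizes
# Rows are genes, columns are patient samples (synthetic expression values).
disease = rng.normal(0.0, 1.0, size=(n_genes, n_disease))
control = rng.normal(0.0, 1.0, size=(n_genes, n_control))
disease[:25] += 1.5  # plant a few differentially expressed "markers"

# Per-gene two-sample t-test: does mean expression differ between groups?
t, p = stats.ttest_ind(disease, control, axis=1)

# Bonferroni correction guards against false positives across 5,000 tests.
candidates = np.where(p < 0.05 / n_genes)[0]
print(f"{len(candidates)} candidate marker genes, e.g. {candidates[:5]}")
```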

“We hope that over time these markers will serve as indicators for diagnosis as well as prognosis, which will help in drug discovery and novel therapies,” says Dr. Saifur Rahman, director of the Advanced Research Institute. “And if we are successful in getting better drug discovery, we'll be able to treat a wider variety of cancers — hopefully more efficiently.”

According to Dr. Rahman, the technology that has made this type of medical research possible has really just emerged in the last five to ten years. He says there are three technologies that are fundamental to this new model:

  1. Personalized Molecular Profiling: Based on their genetic makeup, their physical condition and their environment, individuals are affected by diseases differently and respond differently to identical therapies. The profiling strategy is to identify disease subtypes within a heterogeneous disease type (see the sketch after this list), which helps enable the development of personalized drug therapies.
  2. Computational Systems Biology: Computer modeling and simulation are being used to replace laboratory experiments. By replacing physical experimentation with virtual experiments, a much wider range of “what if” investigations can be attempted. This technology is especially useful for identifying the most likely disease pathways.
  3. Biomedical Imaging: Advanced visualization is providing a powerful tool for in vitro disease detection and diagnosis.
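
Subtype discovery of the kind described in item 1 is commonly framed as unsupervised clustering of patient molecular profiles. Here is a minimal sketch under that assumption, using k-means from scikit-learn on synthetic data; the sample counts, gene counts and choice of algorithm are illustrative guesses, not CBIL's method.

```python
# Illustrative subtype discovery: cluster patient expression profiles.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

# 90 synthetic patients x 200 genes, drawn from three hidden subtypes
# with distinct mean expression signatures.
centers = rng.normal(0.0, 2.0, size=(3, 200))
profiles = np.vstack(
    [center + rng.normal(0.0, 1.0, size=(30, 200)) for center in centers]
)

# k-means assigns each patient to one of k candidate subtypes; in practice
# k would be chosen with criteria such as the silhouette score.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(profiles)
print(np.bincount(labels))  # patients recovered per subtype
```

Once subtypes are separated, drug responses can be analyzed per subtype rather than across the whole heterogeneous population, which is the point of the profiling strategy.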

At CBIL the computing infrastructure that supports this work consists of a 16-node HP cluster running Microsoft's Windows Compute Cluster Server (CCS) 2003 as the cluster management platform. The lab was one of the early adopters of CCS, which became generally available in August 2006. Prior to the cluster solution, CBIL ran its bioinformatics applications on several single-node servers, distributing the jobs across them. According to Dr. Rahman, this required much more prep time, since it involved manually dividing the computational work into pieces and then reassembling the results once the jobs completed.
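
That manual workflow is essentially a scatter/gather pattern: partition the input, farm the pieces out, and stitch the outputs back together. The sketch below uses Python's multiprocessing module as a stand-in for the old multi-server setup; score_genes is a hypothetical placeholder for one piece of a bioinformatics job, and a cluster scheduler such as CCS automates the equivalent across nodes.

```python
# Scatter/gather sketch: the partition-and-reassemble workflow a cluster
# scheduler automates. score_genes is a hypothetical stand-in computation.
from multiprocessing import Pool

def score_genes(chunk):
    # Placeholder per-chunk work (e.g., scoring each gene in the chunk).
    return [gene_id * 2 for gene_id in chunk]

if __name__ == "__main__":
    genes = list(range(10_000))
    n_workers = 4
    # Scatter: split the work into one chunk per worker.
    chunks = [genes[i::n_workers] for i in range(n_workers)]
    with Pool(n_workers) as pool:
        partial_results = pool.map(score_genes, chunks)
    # Gather: reassemble the per-chunk outputs into one result set.
    results = [r for part in partial_results for r in part]
    print(len(results))
```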

The server cluster enabled a more efficient computational model. CBIL saw an 85 to 90 percent reduction in run times on two key applications, Robust Biomarker Discovery and Predictor Performance Estimation, compared to the single-node server setup. In addition, CCS allowed the researchers to remain in the familiar Windows environment, easing the transition to a parallel computing platform.
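
For scale, a percent reduction in run time converts to a speedup factor of 1 / (1 - reduction), so an 85 to 90 percent reduction corresponds to roughly a 6.7x to 10x speedup:

```python
# Convert a percent reduction in run time to a speedup factor (old/new).
for reduction in (0.85, 0.90):
    speedup = 1.0 / (1.0 - reduction)
    print(f"{reduction:.0%} reduction -> {speedup:.1f}x speedup")
```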

The advanced technology being used at CBIL provides a new model for medical research. In contrast to monolithic “magic bullet” approaches to cancer and other life-threatening diseases, molecular classification provides a discovery pathway for truly personalized medicine. The ability to characterize differences in drug responses on an individual basis will make it possible to find more effective drug treatments. And the use of high performance computing is enabling this research to progress at a much faster rate than ever before.

“HPC will allow us to analyze more closely and in finer detail such patient responses to different drug regimes,” explains Dr. Rahman. “But the challenge will still remain as to how we interpret the data we get from our high performance computers.”

-----

To hear the whole story behind the Computational Bioinformatics and Bioimaging Laboratory research work, listen to our HPCwire podcast interview with Dr. Saifur Rahman at http://www.taborcommunications.com/hpcwire/podcasts/microsoft/index.html. For additional background information about the lab, visit http://www.cbil.ece.vt.edu/.
