February 25, 2009
As the economic recession digs in, HPC looks like it's in for a rough ride for at least the next 18 months. But even while HPC capital expenditure budgets are getting cut or frozen, renting HPC cycles in the cloud never looked so good and customers are starting to catch on.
One of those customers is Pathwork Diagnostics, a six-year-old biotech startup specializing in cancer diagnostic products. Pathwork combines DNA microarray technology with machine learning software to help identify cancer types. As a relatively-small company of 35 employees, Pathwork is constrained as to how much it can spend on IT infrastructure. But the emergence of commercial cloud computing along with the latest gene chip technology is opening up new opportunities for these types of firms.
Until recently, microarray chips were used mainly for genomic research. But applying the technology to tumor tissue provides a detailed view of a cancer's gene expression profile. Running that profile through Pathwork's software enables scientists to classify the source of the cancer -- lung, kidney, breast, prostate, etc. -- with the aim of applying that knowledge to clinical treatment.
The majority of cancers don't require such sophisticated technology, although diagnosis is not simply a matter of finding a tumor with an MRI and concluding the local site is the origin of the cancer. A liver tumor, for example, may actually have originated as lung cancer that metastasized. Sometimes visual examination of biopsied tissue isn't enough for identification, in which case more sophisticated diagnostics like immunohistochemistry (IHC) can be used. IHC tests detect proteins in tumor tissue that can be mapped to specific types of cancer. However, sometimes even these tests fail to provide a definitive answer; about 5 to 10 percent of all cancers fall into this category.
That's where the Pathwork solution comes in. The company's "Tissue of Origin" test measures a specimen's RNA expression pattern across more than 1,500 genes. The resulting data is run through machine learning algorithms, which compare the expression profile to 15 known tissue types to help determine the cancer's origin.
The output of the diagnostic is a simple table of numbers that rank the probability of the type of cancer. For example, the application might give a score of 80 to lung cancer, 10 to breast cancer, 5 to kidney cancer, and so on. Ljubomir Buturovic, chief scientist at Pathwork, says they typically get a score of 70-80 for the most likely tissue match, which provides a good basis for treatment. In clinical trials, he says they achieved 89 percent accuracy at identifying the cancer source.
"So our product is in essence a classifier," explains Buturovic, "which takes the gene expression measurements from a tumor and produces a score that predicts the probability that the cancer originated in a particular organ or tissue type." Buturovic says this information may aid the oncologist in recommending a targeted treatment corresponding to the specific cancer.
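Pathwork has not published its algorithm, but the basic shape of such a classifier -- mapping a gene expression vector to ranked per-tissue scores -- can be sketched generically. Everything below (the toy linear model, the random stand-in weights and expression data, the five-tissue subset) is illustrative only, not the company's actual method:

```python
import math
import random

# Five of the 15 tissue types the real Tissue of Origin test covers
TISSUES = ["lung", "breast", "kidney", "prostate", "ovary"]

random.seed(0)
N_GENES = 1500  # roughly the number of genes the test measures

# Stand-in "trained" weights and a stand-in RNA expression profile;
# a real diagnostic would use models trained on clinical specimens.
weights = [[random.gauss(0, 1) for _ in range(N_GENES)] for _ in TISSUES]
expression = [random.gauss(0, 1) for _ in range(N_GENES)]

# Score each tissue, then normalize to a 0-100 scale like the
# table of numbers the article describes.
logits = [sum(w * x for w, x in zip(row, expression)) for row in weights]
m = max(logits)                      # subtract max for numerical stability
exps = [math.exp(l - m) for l in logits]
total = sum(exps)
scores = [100 * e / total for e in exps]

for tissue, score in sorted(zip(TISSUES, scores), key=lambda t: -t[1]):
    print(f"{tissue:10s} {score:6.1f}")
```

The output is a ranked score table summing to 100, analogous to the "80 lung, 10 breast, 5 kidney" example above.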
As one might imagine, the cancer identification software requires a good deal of computing horsepower. Pathwork has been maintaining its own 120-core cluster, consisting of dual- and single-socket x86 compute nodes, for both diagnostic work and research. But inevitably the company found it needed more computing capacity to handle the growing number of jobs. After weighing the capital expenditure of expanding its computing capacity in-house against renting cycles from a service provider, the company became convinced that the service model made a lot more sense.
Buturovic says the deciding factor was that Pathwork had a peak computing demand about once every three months, which would have required a capital expenditure "prohibitively large" for a company its size. In addition, some of the algorithms it employs are very demanding when applied against its data sets, and would take months to execute on the in-house cluster. So offloading this type of work to a larger cluster would save quite a bit of time.
Since Pathwork was already using Sun Microsystems' open source Sun Grid Engine (SGE) for cluster load balancing, the company originally considered using Sun's Network.com utility computing grid. But the $1/CPU-hour price was too steep for Pathwork. At some point, it heard about Amazon's Elastic Compute Cloud (EC2) platform, with its more "convenient" pricing of just $0.10/CPU-hour. Probably the most well-known cloud computing platform in the world, EC2 provides scalable utility computing for a wide range of application types.
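The tenfold price gap compounds quickly at cluster scale. A quick comparison under the quoted rates (the 500-core, week-long workload here is a hypothetical illustration, not a figure from Pathwork):

```python
CORES = 500        # hypothetical cluster size for the comparison
HOURS = 24 * 7     # a week-long run

# Per-CPU-hour rates quoted in the article
for name, rate in [("Network.com", 1.00), ("EC2", 0.10)]:
    cost = CORES * HOURS * rate
    print(f"{name:12s} ${cost:,.0f} for {CORES} cores x {HOURS} h")
```

At these rates the same week of computing costs $84,000 on the $1/CPU-hour grid versus $8,400 on EC2.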
Pathwork came to Univa UD when it learned that the company's UniCloud solution supported the Sun Grid Engine on the Amazon cloud. Introduced in December 2008, UniCloud allows a cluster to be provisioned in the Amazon EC2 infrastructure as an extension of Univa's UniCluster job scheduler. UniCloud can either extend a local cluster into the Amazon cloud or simply provision a stand-alone cluster entirely on Amazon hardware. In the case of Pathwork, the company chose the latter model.
Using the UniCloud/EC2 platform, Pathwork’s applications are now running on up to 500 cores in the Amazon cloud and have garnered a 4-5x increase in speed. Since some of Pathwork's larger jobs can take months to execute in-house, using Amazon's resources reduces that time to just weeks, says Buturovic.
Based on Pathwork's usage pattern, the company calculated that it would save two-thirds of the cost by running its peak applications in the Amazon cloud versus building and maintaining the equivalent system in-house. Amortizing the system cost over five years and taking into account the EC2 CPU costs plus the consulting services paid to Univa UD, that came out to a savings of around $177,000 per year.
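The article gives the bottom line but not the underlying totals. Working backward from the two figures it does provide (two-thirds savings, roughly $177,000 per year), the implied costs look like this:

```python
# Figures from the article
annual_savings = 177_000
savings_fraction = 2 / 3

# Implied totals (derived, not stated in the article)
in_house_annual = annual_savings / savings_fraction   # implied in-house cost/year
cloud_annual = in_house_annual - annual_savings       # implied EC2 + consulting cost/year

print(f"implied in-house cost: ${in_house_annual:,.0f}/year")
print(f"implied cloud cost:    ${cloud_annual:,.0f}/year")
print(f"five-year savings:     ${5 * annual_savings:,.0f}")
```

That works out to roughly $265,500 per year in-house versus $88,500 in the cloud, or about $885,000 saved over the five-year amortization window.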
Pathwork may end up buying even more cycles on Amazon, as soon as it figures out how to scale its software beyond 500 cores. At 10 cents per CPU hour, the incremental cost for more computing capacity is rather low compared to the potential savings in turnaround time for its research and diagnostic work.
Univa isn't sitting still either. Although the first version of UniCloud is limited to Amazon EC2, Univa is already in talks with other cloud providers to extend the software to more utility computing platforms.