Complete Genomics Takes Off

By Tiffany Trader (HPC)

October 15, 2008

Last week, San Francisco-based Complete Genomics came out of stealth mode to become the first provider of large-scale human genome sequencing services. The company claims its third-generation genome sequencing technology generates genomic data at higher throughput and lower cost than existing approaches.

What makes Complete Genomics different is that it offers human genome sequencing as a service through its commercial-scale genome center. The technology and business model combine to enable large-scale human genomic population studies, providing the statistical foundation for meaningful genomic analysis. With enough genomes gathered and analyzed, an individual's genetic profile can then be applied to disease prevention and management.

HPCwire recently asked company representatives to share some details about their work. Complete Genomics Chairman, President and CEO Dr. Clifford Reid and Vice President of Software Bruce Martin took the time to respond.

HPCwire: Can you describe the sequencing service and talk about the practical significance of its use in health care?

Dr. Clifford Reid: Complete Genomics is offering the industry’s first large-scale human genome sequencing service for $5,000 per genome. We plan to sequence 1,000 complete human genomes in 2009 and 20,000 genomes in 2010. For the first time, companies and research institutions will be able to run large-scale complete human genome studies to understand the genetic basis of disease and drug response.

HPCwire: Can you provide a brief description of the technology pieces that make this sequencing service possible?

Reid: Complete Genomics has developed two breakthrough technologies that enable us to offer complete human genomes for $5,000. The first is a new method for creating extremely high-density DNA arrays, which dramatically reduces the reagent and imaging cost of DNA sequencing. The second is a new ligation method of reading DNA, which dramatically reduces the reagent cost while maintaining the high accuracy of ligase-based DNA sequencing.

HPCwire: What is unique about the business model that allows you to do this?

Reid: By selling services rather than instruments, Complete Genomics eliminates for its customers the burden of purchasing and operating complex and expensive DNA sequencing instruments, as well as the burden of building and operating a high-performance datacenter.

HPCwire: How big do you think the market is for your service?

Reid: Complete Genomics believes the market for large-scale complete human genome studies will be $3-5 billion in five years.

HPCwire: What would prevent competitors from copying your approach?

Reid: Complete Genomics owns or has licensed 110 patents and patent applications worldwide to protect our technology.

HPCwire: You quote a $5,000 price tag on the service for one genome in Q2 2009. What is the cost of a complete human genome sequence today?

Reid: Complete Genomics sequenced a complete human genome in July for $4,000 materials cost — that does not include equipment, labor, or overhead costs. When we launch our commercial service in Q2 2009 we expect our materials cost to be under $1,000 per genome, and our $5,000 price will cover all of our costs.

HPCwire: Focusing on the computational aspect of the service: What specific types of compute and storage systems are being employed to analyze the genetic data?

Bruce Martin: Complete Genomics uses a high-performance computing cluster, built using commodity servers (currently with Intel CPUs), Linux and other open source platform software, and clustered NAS systems.

HPCwire: What software is being used to do this and what is its source – proprietary in-house, commercial ISV, or open source?

Martin: Our system runs on an open source platform with Linux, Sun Grid Engine and other management/operations tools. On top of that we have proprietary applications for data analysis, base calling, alignment and assembly, all built in-house.
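
Complete Genomics has not published its pipeline code, but the stack Martin describes — Linux, Sun Grid Engine, in-house analysis applications, and clustered NAS — maps naturally onto SGE array jobs, with one task per chunk of data. The sketch below is purely illustrative: the script path, genome ID, chunk count and resource limits are hypothetical assumptions, and only standard qsub options are used.

```python
# Illustrative sketch: submitting one per-genome analysis stage as a
# Sun Grid Engine array job. Paths, job names and resource limits are
# hypothetical; 'qsub' and its flags are standard SGE.
import subprocess

def submit_stage(genome_id: str, stage: str, n_chunks: int, script: str) -> str:
    """Submit one pipeline stage (e.g. alignment) as an SGE array job.

    Each array task processes one chunk of the genome's raw data,
    read from and written back to shared NAS storage.
    """
    cmd = [
        "qsub",
        "-N", f"{stage}_{genome_id}",   # job name
        "-t", f"1-{n_chunks}",          # array task range, one task per chunk
        "-cwd",                         # run in the submission directory
        "-l", "h_vmem=4G",              # per-task memory request
        "-o", f"logs/{stage}_{genome_id}.$TASK_ID.out",
        "-e", f"logs/{stage}_{genome_id}.$TASK_ID.err",
        script, genome_id,              # stage script reads $SGE_TASK_ID itself
    ]
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return result.stdout.strip()        # qsub echoes the assigned job ID

if __name__ == "__main__":
    # Hypothetical usage: align 512 image-data chunks for one genome.
    print(submit_stage("genome_0001", "align", 512, "./run_alignment_chunk.sh"))
```

Splitting each stage into independent array tasks is what lets a commodity cluster like the one Martin describes scale out simply by adding nodes.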

HPCwire: What percentage of the total expense of the infrastructure is represented by the computational infrastructure?

Martin: Computing is roughly 50 percent of the cost today. We expect that to decrease as a fraction of total cost over time.

HPCwire: Do you have HPC expertise in-house to help with the management of the compute resources and the data analysis? If so, could you briefly describe that expertise?

Martin: Yes, Complete Genomics has an experienced multi-disciplinary team with world-class expertise in bioinformatics, particularly image processing, base calling, alignment, assembly and other areas of sequence analysis. We also have extensive knowledge of large-scale scientific computing, including Monte Carlo simulation, machine learning, and graph-based algorithms. Other proficiencies include large data set search and indexing, and datacenter operations and management.
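
The alignment and indexing expertise Martin mentions comes down, at its simplest, to building an index over the reference genome so that short reads can be located without scanning the whole sequence. The toy sketch below illustrates that idea with an exact-match k-mer index; it is not Complete Genomics' proprietary algorithm, and the reference string, read and parameter k are invented for illustration only.

```python
# Toy illustration of k-mer indexing and seed lookup for short reads.
# Not Complete Genomics' aligner; real aligners add mismatch tolerance,
# paired-end constraints and far more compact index structures.
from collections import defaultdict

def build_kmer_index(reference: str, k: int = 12) -> dict:
    """Map every length-k substring of the reference to its start positions."""
    index = defaultdict(list)
    for pos in range(len(reference) - k + 1):
        index[reference[pos:pos + k]].append(pos)
    return index

def seed_positions(read: str, index: dict, k: int = 12) -> list:
    """Return candidate reference positions where the read's first k-mer matches."""
    seed = read[:k]
    return index.get(seed, [])

if __name__ == "__main__":
    reference = "ACGTACGTTAGCCGATCGATCGGCTAACGTTAGC"   # made-up reference
    index = build_kmer_index(reference, k=8)
    read = "TAGCCGAT"                                   # made-up short read
    print(seed_positions(read, index, k=8))             # prints [8]
```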

HPCwire: Are you counting on projected increases in computational power to drive the growth strategy (1,000 genomes in 2009; 20,000 in 2010), and/or are you also intending to increase computational infrastructure? Or are there other pieces of technology or the business model that you intend to ramp up over the next couple of years?

Martin: Both — Complete Genomics will deploy significantly more infrastructure, and we plan to do so in a modular manner, thereby taking advantage of improvements in basic computing technology, for example, larger/faster disk drives, and faster and lower-power CPUs.
