DRC Energizes Smith-Waterman, Opens Door to On-Demand Service

By Nicole Hemsoth

February 9, 2011

News emerged recently that might reshape how genomics researchers think about the speed and accuracy of gene sequencing analysis projects that rely on the Smith-Waterman algorithm.

Sunnyvale, California-based coprocessor company DRC Computer Corporation announced a world record-setting genetic sequencing analysis appliance, benchmarked in the multi-trillion cell updates per second range, a figure that could have gone higher, according to DRC’s Roy Graham. Although similar claims to supremacy have been made in the past, the company states that this marks a 5x improvement over previously published results.
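
For readers unfamiliar with the benchmark’s unit of measure, a “cell update” is one evaluation of the Smith-Waterman dynamic programming recurrence, which fills an (m+1)-by-(n+1) scoring matrix for sequences of lengths m and n. The following minimal Python sketch of the scoring pass is illustrative only; the match, mismatch and gap scores are assumed values, not parameters DRC has published.

    # Minimal Smith-Waterman scoring pass (linear gap penalty).
    # Each assignment to H[i][j] is one "cell update" -- the unit behind
    # the "cell updates per second" (CUPS) figure quoted in benchmarks.
    def smith_waterman_score(a, b, match=2, mismatch=-1, gap=-1):
        m, n = len(a), len(b)
        H = [[0] * (n + 1) for _ in range(m + 1)]
        best = 0
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                s = match if a[i - 1] == b[j - 1] else mismatch
                H[i][j] = max(0,                    # local alignment may restart
                              H[i - 1][j - 1] + s,  # match or mismatch
                              H[i - 1][j] + gap,    # gap in sequence b
                              H[i][j - 1] + gap)    # gap in sequence a
                best = max(best, H[i][j])
        return best  # score of the best local alignment

Aligning two sequences of lengths m and n costs m times n cell updates (a production aligner also keeps traceback pointers to recover the alignment itself), which is why accelerator throughput is quoted in CUPS.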

While it might be tempting to think this is just another acceleration story about toppling old benchmarks, this one does have something of a unique slant.

One of the company’s FPGAs delivers the equivalent performance of 1,000 cores, which is interesting in itself, but DRC also touts a distinct cloud computing angle: its FPGA-based Accelium board plugs into a standard x86 server via standard PCIe slots.

DRC claims that the “time and cost to complete [gene sequence analysis] can be reduced by a factor of 20” using standard Intel-based servers fitted with its DRC Accelium processors running on Windows HPC Server 2008 R2. The company suggests that, in addition to slashing analysis time, the approach cuts “over 90% [of] the computing cost, power, real estate and infrastructure required to obtain the results.”

The beauty here, as DRC sees it, is that standard commodity hardware can be significantly enhanced in plug-and-play fashion, becoming cloud-enabled and accessible to a broader array of potential users than before.

DRC is pitching the solution as cloud-ready when deployed in a private cloud, which was the environment chosen for its benchmarking effort. All debates about the validity (or newness) of private clouds aside, there could be changes coming for life sciences companies that want to make use of Smith-Waterman but have been deterred by the high cost of running this hungry algorithm in-house.

Roy Graham of DRC stated that the cloud value of the company’s announcement lies in his expectation that many common sequencing services will eventually be cloud-based; for now, what the company is offering is a very high-volume, scalable and cost-effective platform. He says DRC is currently in discussions with a number of cloud services companies and, at this point, is looking for a proof point.

DRC claims that due to the inherent parallelism of their reconfigurable coprocessors, such solutions are extremely scalable and adaptable to modern cloud computing environments where computing resources can be shared across multiple users and applications.
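
That “inherent parallelism” follows from the structure of the recurrence itself: every cell on a given anti-diagonal of the scoring matrix depends only on cells from the two preceding anti-diagonals, so all cells on one anti-diagonal can be updated simultaneously. This wavefront pattern is what FPGA pipelines (and GPU implementations) exploit. The sketch below, again an illustrative Python rendering rather than DRC’s design, makes the independent-per-diagonal structure explicit:

    # Wavefront (anti-diagonal) formulation of Smith-Waterman scoring.
    # Cells on one anti-diagonal are mutually independent, so parallel
    # hardware (an FPGA pipeline or a GPU warp) can update them all at once.
    def sw_wavefront(a, b, match=2, mismatch=-1, gap=-1):
        m, n = len(a), len(b)
        H = [[0] * (n + 1) for _ in range(m + 1)]
        best = 0
        for d in range(2, m + n + 1):    # anti-diagonal index: i + j == d
            lo, hi = max(1, d - n), min(m, d - 1)
            for i in range(lo, hi + 1):  # independent cells: parallel on device
                j = d - i
                s = match if a[i - 1] == b[j - 1] else mismatch
                H[i][j] = max(0, H[i - 1][j - 1] + s,
                              H[i - 1][j] + gap, H[i][j - 1] + gap)
                best = max(best, H[i][j])
        return best

On an FPGA, the inner loop typically becomes a systolic array of processing elements, one per matrix column, each completing a cell update every clock cycle.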

According to Steve Casselman, CEO of DRC Computer Corporation, there is definitely a future in the clouds for Smith-Waterman. During a conversation with HPC in the Cloud last week, he speculated on the concept of a “corporate biocloud” in which users could run Smith-Waterman on as much of the hardware as needed while at the same time running other processes in an on-demand format. This is what he calls an example of “acceleration on-demand,” noting that there are several different algorithms ripe for this kind of capability.

Casselman insists that the main takeaway is that “it doesn’t require a very controlled environment to build this type of network or structure so it lends itself very well to a general cloud environment.”

There are solid reasons to support efforts to address the infrastructure demands of an algorithm like Smith-Waterman. It has been around for three decades and produces refined results, but its user base is small given the high cost of achieving that precise output. Companies that need the specificity other genomic applications cannot match therefore face far higher costs. An on-demand model could make good cost sense for those that need the specificity of results but cannot invest up-front in the required hardware.

While Smith-Waterman is considered by some to be the gold standard for this type of work, the associated costs have led companies to use heuristic applications like BLAST instead, in part because BLAST is a cost-efficient fit for modern CPU architectures, according to Casselman.

Will Smith-Waterman be delivered as a service (with an application wrapped around it) so that more refined results from genetic sequencing projects can be realized by a broader class of researchers and life sciences companies? Would it require a friendly interface and inherent ease of use, and if so, who would champion the middleware cause if efforts from companies like DRC made the prospect attractive enough?

Microsoft (which already offers some applications via its Azure cloud to lure in life sciences) might be the source of such a project and did take initial interest in DRC’s benchmarking effort. The coprocessor company approached Microsoft before undertaking the benchmark, feeling that some of its big life sciences customers running Windows HPC Server needed benchmarks not based on Linux (though, for the record, the Linux and Windows HPC Server results were comparable).

Jason Stowe, CEO of Cycle Computing, noted that there is demand for Smith-Waterman as a service and that it can be successful. In a short interview Stowe said, “When it comes to Smith-Waterman, we have nVidia GPU-enabled versions (CUDA SW++) deployed on our CycleCloud Clusters-as-a-Service, that accelerate this algorithm to run 10-50x as fast as BLAST on comparable queries. CycleCloud’s ability to start up 64 GPU clusters on Amazon EC2 in 15 minutes enables users to take advantage of both GPU-acceleration, and cloud cost-cutting, to analyze whole genomes using Smith-Waterman, at a fraction of the cost.”

If the algorithm, which produces superior results but has been prohibitively expensive, finds the acceleration needed to make it more affordable, demand could rise for the decades-old code, especially if it can be run remotely on a pay-as-you-go basis.

While there is certainly some speculation here, what is clear is that there could be a new slant on old tools to cure diseases. As DRC’s Roy Graham stated, “The FPGA is a means to an end, so although this is a nice FPGA story, the real story is now we have the ability to provide highly accurate assessments of an individual’s affinity to a specific disease condition. In predictive diagnosis, accuracy is key and so far it’s been compromised because of a lack of cost effective computing resources. We now have the platform that can bridge the cost/accuracy divide.”
 
