November 06, 2012
SEATTLE, WA - Nov. 6 - Pico Computing announced today that its collaboration with researchers at the University of Washington has yielded up to 90X acceleration of Infernal, a software package commonly used to identify non-coding ribonucleic acids (ncRNAs). Infernal searches can take weeks to complete on commodity CPUs; with Pico's FPGA-accelerated solution, the same identification process finishes in less than a day, an improvement of up to 90X.
In the traditional model of molecular biology, DNA is transcribed to form RNA, which in turn is translated into proteins. These proteins perform many of the functions essential to biological life. In recent decades, biologists have come to understand the importance of ncRNAs, which directly perform roles normally associated with proteins, including regulating genes and catalyzing reactions. Further study of ncRNA may lead to breakthroughs in areas such as cancer, Alzheimer's disease, and Parkinson's disease.
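As a toy illustration of this flow (not part of the announcement): transcription copies DNA into RNA with thymine (T) replaced by uracil (U), and for an ncRNA that transcript is itself the functional product rather than a template for a protein. A minimal Python sketch:

```python
# Toy model of the central dogma's first step: DNA -> RNA (transcription).
# For an ncRNA, the RNA transcript is the functional end product;
# it is not subsequently translated into a protein.

def transcribe(dna: str) -> str:
    """Transcribe a DNA coding strand into RNA by replacing T with U."""
    return dna.upper().replace("T", "U")

print(transcribe("ATGCTTAG"))  # -> AUGCUUAG
```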
In the past ten years, the number of modeled ncRNA families identified has increased by two orders of magnitude. Many of an ncRNA's bases play purely structural roles, requiring two potentially distant bases to be complementary, which makes the search for ncRNAs more difficult than traditional DNA sequence matching. The growth of these ncRNA families, combined with the computational complexity of searching genomes for known ncRNAs, has resulted in runtimes on the order of weeks.
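A hedged sketch of why this is harder: a plain sequence match only needs position-by-position equality, while a structural match must also verify that prescribed pairs of positions, possibly far apart in the sequence, hold complementary bases (A-U, G-C, or the G-U wobble pair). The function names below are illustrative and are not drawn from Infernal:

```python
# Illustrative only: contrasts plain sequence matching with a
# structure-aware check over distant base pairs (A-U, G-C, G-U wobble).

COMPLEMENTARY = {("A", "U"), ("U", "A"), ("G", "C"),
                 ("C", "G"), ("G", "U"), ("U", "G")}

def sequence_match(candidate: str, pattern: str) -> bool:
    """Ordinary matching: compare positions independently."""
    return candidate == pattern

def structural_match(candidate: str, pairs: list) -> bool:
    """Structural matching: every prescribed (i, j) pair, however
    distant in the sequence, must hold complementary bases."""
    return all((candidate[i], candidate[j]) in COMPLEMENTARY
               for i, j in pairs)

# A hairpin-like toy: positions 0..2 must pair with positions 9..7.
rna = "GCGAUAACGC"
print(structural_match(rna, [(0, 9), (1, 8), (2, 7)]))  # True
```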
Researchers at the University of Washington, using Pico's M-503 FPGA module, have accelerated algorithms within Infernal. This implementation yielded individual algorithm speedups of up to 200X, for an overall software acceleration of up to 90X.
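The gap between the 200X kernel speedups and the 90X end-to-end figure is what Amdahl's law predicts: overall speedup is limited by the fraction of total runtime that is actually accelerated. A back-of-the-envelope check (the 99.4% fraction below is our assumption for illustration, not a figure published by the researchers):

```python
# Amdahl's law: overall = 1 / ((1 - p) + p / s), where p is the
# fraction of runtime that is accelerated and s is the kernel speedup.

def overall_speedup(p: float, s: float) -> float:
    return 1.0 / ((1.0 - p) + p / s)

# With 200X kernels, reaching ~90X overall requires that roughly
# 99.4% of the original runtime lie in the accelerated algorithms.
print(round(overall_speedup(0.994, 200.0), 1))  # ~91.2
```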
"The last decade of genome research has yielded a flood of novel non-protein-coding RNAs (ncRNAs) with diverse biological functions, and tantalizing hints of thousands more. Pico's hardware has enabled us to reach new levels of acceleration for key computationally intensive algorithms needed to fully explore this important new landscape," said Dr. Walter L. Ruzzo, Professor at the University of Washington.
Nathaniel McVicar, a researcher at the University of Washington, will present this research at the Exhibitor Forum during SC12 (Thursday, November 15, 11:30 AM, room 155-B). To find out more about Pico's products and solutions, stop by booth 2107 during SC12.
About Pico Computing
Based in Seattle, Washington, Pico Computing specializes in highly integrated development and deployment platforms based on Field Programmable Gate Array (FPGA) technologies. Applications for Pico Computing technologies include cryptography, networking, signal processing, bioinformatics, and scientific computing. Pico Computing products are used in embedded systems as well as in military, national security and high performance computing applications.
About University of Washington
Located in Seattle, the University of Washington was founded in 1861. It is one of the oldest state-supported institutions on the West Coast and one of the preeminent research universities in the world.
About SC12
For 24 years, SC has been at the forefront in gathering the best and brightest minds in supercomputing together, with our unparalleled technical papers, tutorials, posters and speakers. SC12 will take a major step forward not only in supercomputing, but in super-conferencing, with everything designed to make the 2012 conference the most 'you' friendly conference in the world. We're streamlining conference information and moving to a virtually real-time method of determining technical program thrusts. No more pre-determined technical themes picked far in advance. Through social media, data mining, and active polling, we'll see which technical interests and issues emerge throughout the year, and focus on the ones that interest you the most.
Source: Pico Computing