January 14, 2013
BLOOMINGTON, Ind., Jan. 14 – Now entering its second year of operation, the National Center for Genome Analysis Support (NCGAS) provides software, expert consultation and computational resources to help life science researchers analyze genome data. NCGAS is expanding its reach by adding tools, services and partners to help biological research communities make important new scientific discoveries.
Today, in conjunction with its participation in the Plant and Animal Genome Conference (PAG XXI), NCGAS announced three initiatives to accelerate US public- and private-sector research and development in biology and biomedical research.
"Our partnership with the Pittsburgh Supercomputing Center -- paired with Penguin On-Demand resources and optimized software -- will help NCGAS expand services to the US research community in important new ways," said Craig Stewart, dean for IU Research Technologies, executive director of PTI and principal investigator on the NCGAS grant award.
William Barnett, NCGAS director, added, "It's exciting to think of the breakthroughs to come in the years ahead as we continue to provide tools and services to genome researchers, enabling innovative and potentially transformative genomics research."
Led by PTI, NCGAS is funded by the National Science Foundation (NSF) to help researchers retrieve useful biological information from the vast amounts of sequence data generated by research programs. Only in its second year, the center has already served hundreds of researchers and dozens of research projects. NCGAS is making its public debut at the Plant and Animal Genome Conference -- one of the world's largest conferences dedicated to genome science.
Its latest initiatives further the center's mission to provide researchers with better bioinformatics support, freeing them to concentrate on the science surrounding their genomics projects -- as opposed to their technology needs. Working with NCGAS also gives researchers access to powerful high performance computers and networks, improving the speed and quality of genome assemblies.
"The partnership between NCGAS and the Pittsburgh Supercomputing Center will significantly extend the capability of researchers to perform challenging genomic analyses," said Phil Blood, senior scientific specialist at PSC. "Researchers using PSC resources have immediately benefited from NCGAS' work and are currently running a previously intractable set of large-scale Trinity assemblies on our 16 TB SGI UV system, Blacklight -- the world's largest coherent shared memory platform."
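Large-memory Trinity runs like the ones described are typically launched through a batch scheduler on the shared-memory machine. The sketch below is a hypothetical PBS-style job script, not a Blacklight-specific recipe: the queue name, module name, core count, and memory request are all assumptions, and input file names are placeholders. The Trinity options shown (`--seqType`, `--left`/`--right`, `--CPU`, `--output`) follow the assembler's documented command line.

```shell
#!/bin/bash
#PBS -l ncpus=64            # cores on the shared-memory node (assumed value)
#PBS -l walltime=48:00:00   # large assemblies can run for days
#PBS -q batch               # hypothetical queue name

# Load the assembler; the module name is site-specific and assumed here.
module load trinity

cd "$PBS_O_WORKDIR"

# Trinity's k-mer counting and graph stages benefit from a large,
# coherent memory space; --CPU sets the thread count, and the memory
# ceiling flag varies by Trinity version (e.g. --JM or --max_memory).
Trinity --seqType fq \
        --left reads_1.fq --right reads_2.fq \
        --CPU 64 \
        --output trinity_assembly
```

On a conventional distributed-memory cluster the same assembly would have to be partitioned across nodes; a shared-memory platform lets the whole data structure stay resident, which is what makes the previously intractable assemblies mentioned above feasible.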
The National Science Foundation (NSF) awarded Indiana University a $1.5 million grant to establish the National Center for Genome Analysis Support (NCGAS) in 2011. Since that time, NCGAS has offered no-cost services to NSF-funded researchers who use genome assembly software, large-scale phylogenetic software and other genome analysis software requiring large amounts of memory. NCGAS partner institutions include the Texas Advanced Computing Center at the University of Texas at Austin, the San Diego Supercomputer Center at the University of California, San Diego, the Pittsburgh Supercomputing Center and the Extreme Science and Engineering Discovery Environment (XSEDE).
Led by the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign, XSEDE integrates and supports 16 supercomputers and high-end visualization and data analysis resources across the country. XSEDE's integrated, comprehensive suite of advanced digital services will federate with other high-end facilities and with campus-based resources, serving as the foundation for a national cyberinfrastructure ecosystem. Supercomputing time at the centers mentioned here is available through a peer-reviewed application process.
About Pervasive Technology Institute at Indiana University
Pervasive Technology Institute (PTI) at Indiana University is a world-class organization dedicated to the development and delivery of innovative information technology to advance research, education, industry and society. Supported in part by a $15 million grant from the Lilly Endowment, Inc., PTI is built upon a spirit of collaboration and brings together researchers and technologists from a range of disciplines and organizations, including the IU School of Informatics and Computing at IU Bloomington, the IU Maurer School of Law, the IU College of Arts and Sciences and University Information Technology Services, with the administrative leadership of the Office of the Vice President for Information Technology at Indiana University.
Source: Indiana University