August 10, 2011
“There are many different silos of information that have been painstakingly collected; and there are a number of existing tools that bring some strands of data into relation. But there is no overarching tool that can be used across silos.”
The sentiments behind this quote could apply to a wide range of scientific disciplines, not to mention enterprises that have collected vast amounts of data but are still piecing together the puzzle of how to integrate and make sense of it.
In fact, the quote above came from quantitative biologist Michael Schatz as he reflected on the need for massive data integration for scientists worldwide, and on the computational models needed to produce connected information sets.
Schatz is one of several biologists involved in the Systems Biology Knowledgebase, also known as Kbase, a DOE project started in 2008 to make data more accessible and better integrated for biological researchers. Just last year the Genomic Science program completed the research and development required to design and implement the Kbase effort, but there is still plenty of work ahead.
As Ariella Brown noted, “Kbase should be a boon both for those who want to gain better understanding of such life forms for the sake of pure science and to those who would apply the Kbase data, metadata, and tools for modeling and predictive technologies to help the production of renewable biofuels and a reduction of carbon in the environment.”
Brown goes on to describe the Kbase program and its goals:
“The plan is for Kbase to start off with seven data centers on ESnet (the Department of Energy's Energy Sciences Network). That is one for each of the six defined scientific objectives of Kbase; the seventh is devoted to coordinating the infrastructure development of the project. According to the current timetable, it should take 12 months to get the Kbase hardware platform operational. Version 1.0 is anticipated to be accessible after 18 months and version 2.0 after 36 months; five years is the estimated time to achieve operation and support at target levels.
“The idea is to implement a system that can grow as needed and be easily used by scientists without extensive training in applications. It should produce understandable results based on clear scientific assumptions, engage all members of the scientific community, and encourage further discovery, with findings that inspire “new rounds of experiments or lines of research.”
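Brown's description boils down to an architecture in which many independent data silos feed one overarching tool that researchers can query without specialized training. As a rough, hypothetical illustration only of what such cross-silo access could look like from a researcher's side, the short Python sketch below pulls a gene's records from two made-up web services and merges them into a single view; the endpoint URLs and JSON fields are assumptions for the sake of the example and are not part of any actual Kbase interface.

    # Hypothetical sketch of cross-silo integration: fetch records for one gene
    # from two independent (made-up) data services and merge them by gene ID.
    # The endpoint URLs and JSON fields below are illustrative assumptions,
    # not part of the real Kbase API.
    import json
    from urllib.request import urlopen

    SILO_URLS = [
        "https://genomics.example.org/api/annotations?gene={gene}",
        "https://metabolism.example.org/api/pathways?gene={gene}",
    ]

    def fetch_records(gene_id):
        """Query each silo for one gene and collect whatever fields it returns."""
        merged = {"gene": gene_id}
        for url_template in SILO_URLS:
            with urlopen(url_template.format(gene=gene_id)) as response:
                record = json.load(response)
            # Prefix each field with its source so nothing is silently overwritten.
            merged.update({"%s.%s" % (record.get("source", "silo"), key): value
                           for key, value in record.items() if key != "source"})
        return merged

    if __name__ == "__main__":
        print(fetch_records("b0002"))  # e.g., an E. coli locus tag

The point of the sketch is simply that the integration burden sits in one place rather than with each scientist, which is the role Kbase aims to fill at far larger scale.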
While Kbase is an ongoing project, the model behind its integration and collaboration work will extend to other disciplines, allowing greater, more open access to scientific data around the world. As the graphic below shows, the need for such integration is clear, but it is a slow climb to full data integration, sharing, and use for biology researchers.
Image Source: Genomic Science Program, US Dept. of Energy
Full story at Internet Evolution