November 01, 2010
Nov. 1 -- Imagine a tool that is a cross between a powerful electron microscope and the Hubble Space Telescope, allowing scientists from disciplines ranging from medicine and genetics to astrophysics, environmental science, oceanography and bioinformatics to examine and analyze enormous amounts of data from both "little picture" and "big picture" perspectives.
Using a $2.1 million grant from the National Science Foundation, a group led by computer scientist and astrophysicist Alexander Szalay of Johns Hopkins' Institute for Data Intensive Engineering and Science is designing and developing such a tool, dubbed the Data-Scope.
Once built, the Data-Scope, which is actually a cluster of sophisticated computers capable of handling colossal sets of information, will enable data analysis tasks that are simply not possible today, according to Szalay, the Alumni Centennial Professor in the Krieger School's Henry A. Rowland Department of Physics and Astronomy.
"Computer science has drastically changed the way we do science and the science that we do, and the Data-Scope is a crucial step in this process," Szalay said. "At this moment, the huge data sets are here, but we lack an integrated software and hardware infrastructure to analyze them. Data-Scope will bridge that gap."
Co-investigators on the Data-Scope project, all from Johns Hopkins, are Kenneth Church, chief scientist for the Human Language Technology Center of Excellence, a Department of Defense-funded center dedicated to advancing technology for the analysis of speech, text and document data; Andreas Terzis, associate professor in the Department of Computer Science at the Whiting School of Engineering; Sarah Wheelan, assistant professor of oncology bioinformatics in the School of Medicine; and Scott Zeger, professor of biostatistics in the Bloomberg School of Public Health and the university's vice provost for research.
Data-Scope will be able to handle five petabytes of data. That's the equivalent of 100 million four-drawer file cabinets filled with text. (Fifty petabytes would equal the entire written work of humankind, from the beginning of history until now, in all languages.)
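To put those comparisons on arithmetic footing, here is a back-of-the-envelope sketch (ours, not the article's) that assumes the common rule of thumb that a four-drawer file cabinet holds about 50 megabytes of plain text:

```python
# Rough sanity check of the storage analogies above.
# Assumption (not from the article): one four-drawer file cabinet
# holds ~50 MB of plain text, a widely used rule of thumb.

PB = 10**15          # bytes per petabyte (decimal)
MB = 10**6           # bytes per megabyte

CABINET = 50 * MB    # assumed text capacity of one cabinet

datascope = 5 * PB   # Data-Scope's stated capacity
print(f"5 PB ~ {datascope / CABINET:,.0f} file cabinets")  # ~100,000,000
```

Under that assumption, five petabytes divided by 50 megabytes per cabinet yields exactly the 100 million cabinets cited above.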
The new apparatus will allow Szalay and a host of other Johns Hopkins researchers (not to mention those at other institutions, including universities and national laboratories such as Los Alamos in New Mexico and Oak Ridge in Tennessee) to conduct research directly in the database, which is where Szalay contends that more and more science is being done.
"The Data-Scope will allow us to mine out relationships among data that already exist but that we can't yet handle and to sift discoveries from what seems like an overwhelming flow of information," he said. "New discoveries will definitely emerge this way. There are relationships and patterns that we just cannot fathom buried in that onslaught of data. Data-Scope will tease these out."
According to Szalay, at least 20 research groups within Johns Hopkins are grappling with data problems totaling three petabytes. (Three petabytes is equal to about 20 billion photos on Facebook.) Without Data-Scope, "they would have to wait years in order to analyze that amount of data," Szalay said.
The two-year NSF grant, to be supplemented with almost $1 million from Johns Hopkins, will underwrite the design and building of the new instrument and its first year of operation, expected to begin in May 2011. Szalay said that the range of material that the Data-Scope will handle will be "breathtakingly large, from genomics to ocean circulation, turbulence, astrophysics, environmental science, public health and beyond."
"There really is nothing like this at any university right now," Szalay said. "Such systems usually take many years to build up, but we are doing it much more quickly. It's similar to what Google is doing-of course on a thousand-times-larger scale than we are. This instrument will be the best in the academic world, bar none."
Zeger said he is excited about the research possibilities and collaborations that the new instrument will make possible.
"The NSF funding of a high-performance computing system, specially designed by Dr. Szalay and his team to solve large computational problems, will contribute to Johns Hopkins' remaining in the forefront of many areas, including biomedicine, where I work," he said. "The new genomic data are voluminous. Their analysis requires machines faster than are currently available. Dr. Szalay's machine will enable our biomedical and computational scientists to work together to solve problems that would have been beyond them otherwise."
Jonathan Bagger, vice provost for graduate and postdoctoral programs and special projects, said he believes that the Data-Scope positions Johns Hopkins to play a crucial role in the next revolution in science: data analysis.
"The Data-Scope is specially designed to bring large amounts of data literally under the microscope," he said. "By manipulating data in new ways, Johns Hopkins researchers will be able to advance their science in ways never before possible. I am excited that Johns Hopkins is in the forefront of this new field of inquiry: developing the calculus of the 21st century."
The instrument will be part of a new energy-efficient computing center that is being constructed in the basement of the Bloomberg Center for Physics and Astronomy on the Homewood campus. The house-sized room once served as a mission control center for the Far Ultraviolet Spectroscopic Explorer, a NASA satellite. This computing center is being built using a $1.3 million federal stimulus grant from the National Science Foundation.
Source: Johns Hopkins University