October 26, 2007
Although the buzz about eScience often focuses on massive hardware, user interfaces, storage capacity and other technical issues, whether eScience serves the needs of scientific research teams ultimately comes down to people: whether the builders of the infrastructure can communicate with its users and understand their needs and the realities of their work cultures.
The builders of eScience infrastructure "need to talk about fostering, rather than building infrastructure," said Alex Voss of the National Center for e-Social Science in Manchester, UK, and research theme leader at the e-Science Institute in Edinburgh, UK. There are social aspects to research that must be recognized -- from understanding how research teams work and interact to realizing that research often does not involve the kinds of large, interdisciplinary projects engaged in by virtual organizations, but rather individual work and ad-hoc, flexible forms of collaboration within wider communities.
Voss was one of four panelists who discussed how to reduce the barriers that still inhibit scientists from becoming e-scientists. The discussion was part of the 2007 Microsoft eScience Workshop, hosted by the Renaissance Computing Institute (RENCI) in Chapel Hill, NC, Oct. 21-23. Also offering their thoughts on the barriers to broad eScience adoption were Ian Foster, director of the Computation Institute at the University of Chicago and Argonne National Laboratory; Phil Papadopoulos, director of grid and cluster computing at the San Diego Supercomputer Center; and May Wang of the Emory-Georgia Tech Nanotechnology Center for Personalized and Predictive Oncology. All the panelists agreed that scientific communities must have easy-to-use applications and interfaces and easy access to stored data to become users of eScience grids. And they concurred that both grid researchers and users must work on cross-disciplinary and cross-cultural communications.
"We need applications, that's obvious," said Foster. "But perhaps we need to put more effort into communicating how these applications work. That's probably the single thing we can do that will make the biggest difference: go out and tell the story about successes when applications work, and also tell them when applications don't work, so they can avoid the pitfalls."
Foster noted wryly that in the past, "science advanced one funeral at a time," allowing new ideas to take hold only as those who advocated older paradigms passed away. On a more positive note, he said the ubiquitous connectivity offered by the Internet, the Web and grids "allows us to reach out, to share our interests, make discoveries, and apply new methods more rapidly and effectively than in the past."
Papadopoulos pointed to both technical and social barriers to the adoption of eScience. Although raw storage is cheap, access to data isn't, he said, and the eScience community must address questions about how to access data that is stored remotely, kept offline or locked behind firewalls. Papadopoulos also challenged infrastructure creators to develop systems that are repeatable: a set of software tools should be transferable to any user's work environment without the aid of a systems administrator, and the steps of a workflow should be repeatable and easy to communicate to another user.
In addition, he noted that the social realities of scientific communities can inhibit the adoption of eScience. Scientists in some domains have only recently started to share their data, a practice that is the norm in well-established eScience domains such as high-energy physics. The grid research community also has its own customs that can inhibit broader adoption, according to Papadopoulos.
"Grid research is research, and researchers are rewarded for their research, for coming up with new ideas on how to use network technology and for writing papers, not really for easing the use of software," he said.
Wang, an expert in biocomputing and bioinformatics, speculated on why the biomedical community has been relatively slow to adopt eScience practices. She stressed that eScience tools must be more intuitive for the biomedical community to use them. These researchers -- often doctors with clinical practices -- have little in-depth knowledge of computing and no time to learn it, said Wang. They are problem driven and will turn to eScience only if they see that it will help them address the big questions in medicine. In addition, the medical community would likely feel more at home with eScience if some general computer science were part of their educational curriculum.
"Teaching the basics of computer science, learning some of the computer science languages and how to use computer tools to solve problems would help to overcome some of the barriers," said Wang. "Now, many of our scientists wouldn't even know how to begin a dialogue with a computer scientist. But they can learn by doing if they start at a young age."
More than 260 people, including scientists, grid researchers from industry and academia, faculty members, and funding agency administrators, attended the Microsoft eScience Workshop, which was co-chaired by RENCI Director Dan Reed and Tony Hey, Microsoft's vice president of external research. Participants came from across the U.S., Canada, South America, Europe and Australia.
In the long run, the lasting effects of high-speed networks, data stores, computing systems, sensor networks, and collaborative technologies that make eScience possible will be up to the people who create it and use it, said Reed in his address to attendees.
"The instrumented life -- in which we have biomarkers for disease risks, real-time monitoring of our food intake and exercise routines, analysis of air quality and other environmental factors -- could seem like 1984 rather than 2010," said Reed. "On the other hand, it could have enormous implications for improving our health and our lives. Is it good or bad? Probably a little of both."
The conference wrapped up on Tuesday with a keynote session featuring Hey and David Heckerman, also of Microsoft Research. Heckerman told the audience about research that applies his machine-learning technologies to computational biology and personalized medicine. The work could play a role in developing an effective vaccine against HIV, the virus that causes AIDS. Heckerman's statistical models, sometimes called graphical models or Bayesian networks, can also be used for genome-wide association studies -- the search for connections between human DNA and disease.
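To make the genome-wide association idea concrete, here is a minimal sketch (not Heckerman's actual graphical models) of the kind of single-SNP test such studies run at scale: a Pearson chi-square test asking whether genotype frequencies differ between disease cases and healthy controls. The genotype counts below are invented for illustration.

```python
def chi_square(table):
    """Pearson chi-square statistic for a 2 x k contingency table."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    total = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            # Expected count under the null hypothesis of no association.
            expected = row_totals[i] * col_totals[j] / total
            stat += (observed - expected) ** 2 / expected
    return stat

# Illustrative genotype counts for one SNP; columns are AA, Aa, aa.
cases    = [240, 210, 50]
controls = [260, 190, 30]

stat = chi_square([cases, controls])
# With 2 degrees of freedom, stat > 5.99 rejects independence at the
# 0.05 level, hinting that this SNP is associated with the disease.
print(f"chi-square = {stat:.2f}")
```

A real study repeats a test like this for hundreds of thousands of SNPs and must correct for multiple comparisons; the Bayesian-network approach mentioned above goes further by modeling dependencies among variants rather than testing each one in isolation.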
Hey's talk, "eScience and Digital Scholarship," looked at the tools and technologies required across the entire eScience data life cycle and at a coming revolution in scholarly communication. He concluded that the future of eScience will be a mix of software and services "in the cloud."
Microsoft eScience Workshop at RENCI: https://www.mses07.net/main.aspx
Computation Institute: http://www.ci.uchicago.edu
Emory-Georgia Tech Nanotechnology Center for Personalized and Predictive Oncology: http://www.wcigtccne.org/index.php
e-Science Institute: http://www.esi.ac.uk
National Center for e-Social Science: http://www.ncess.ac.uk
San Diego Supercomputer Center: http://www.sdsc.edu