June 19, 2008
The TeraGrid, the National Science Foundation's evolving program of cyberinfrastructure for U.S. science and education, held its third annual conference June 9-13 in Las Vegas. Marking three years of full-production TeraGrid operation, TG08 opened with a presentation from Dan Reed, one of the people most instrumental in TeraGrid's 2001 genesis as NSF's flagship cyberinfrastructure.
After founding and directing the Renaissance Computing Institute (RENCI) at the University of North Carolina, Reed moved to Microsoft, where he is Scalable and Multicore Computing Strategist. Before RENCI, he was director of NCSA at the University of Illinois. There, building on the PACI Alliance notion of a distributed grid of shared resources, he helped develop the TeraGrid vision. The idea, Reed reminded his audience of about 350 researchers, educators and TeraGrid staff, was "to begin escaping the tyranny of data captured at single supercomputing sites."
After looking back to TeraGrid's origins, Reed focused on the future. "What can we learn from the TeraGrid experience, technically and politically? Where is the technology going and what are the research implications?" He referred to a recent special issue of Nature exploring the state of science in 2020, noting that science in the 21st century is inextricable from computing.
Quoting from that issue: "From sequencing genomes to monitoring the Earth's climate, many recent scientific advances would not have been possible without a parallel increase in computing power -- and with revolutionary technologies such as the quantum computer edging towards reality, what will the relationship between computing and science bring us over the next 15 years?"
As befitted the Las Vegas setting, Reed asked his audience to ponder risk versus reward. "What probability of successful return would you accept to be the first human to set foot on Mars?" Twenty years ago, he noted, grids were research curiosities and a terabyte was many disks of data. "The future depends on vision and context."
The context has changed radically in a short time, he noted: bulk computing has become almost free relative to the cost of software and power. "Nowadays you can buy a lot of computing on your credit card. We still don't have terabit transcontinental networks for research use; moving lots of data is still hard. The big cost is people. The cost of a professional software developer for a year is now more than a teraflop computing cluster."
Today's context, Reed said, is a Five-Fold Way comprising 1) many-core, on-chip parallelism; 2) very big ("really big") datacenters; 3) web services; 4) ubiquitous sensors producing huge data volumes; and 5) "clouds" as an evolving model of computational service. Further increases in computer performance now require embracing multicore parallelism, and hardware progress has outstripped progress in the software needed to exploit it.
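Reed's point about software lagging hardware can be illustrated at even the smallest scale. The following minimal sketch, which is illustrative only and not from the talk, uses Python's standard multiprocessing module to spread an embarrassingly parallel, CPU-bound task across the cores of a single chip, the kind of explicit parallelism he argued software must now embrace.

    # Illustrative sketch of multicore parallelism (not from Reed's talk).
    # A CPU-bound task is farmed out across all available cores.
    import multiprocessing as mp

    def count_primes(bound):
        """Count primes below bound by trial division (deliberately CPU-bound)."""
        count = 0
        for n in range(2, bound):
            if all(n % d for d in range(2, int(n ** 0.5) + 1)):
                count += 1
        return count

    if __name__ == "__main__":
        bounds = [50000, 60000, 70000, 80000]            # independent work items
        with mp.Pool(processes=mp.cpu_count()) as pool:  # one worker per core
            results = pool.map(count_primes, bounds)     # executes in parallel
        print(dict(zip(bounds, results)))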
An important goal, Reed emphasized, is context-aware information. Referring to Vannevar Bush's vision of a national research enterprise, which led eventually to the National Science Foundation, Reed called for services, including datacenters and cloud computing, that can put the right information in the right heads at the right time.
Data models, Reed noted, are in rapid flux because of ever-larger data volumes. This is especially pronounced in fields such as biomedical research, where large databases are subject to distributed analysis. A big challenge, and probably an underappreciated one, he said, is the scale of the data deluge. "We will be running queries on 100,000 servers," said Reed. "And research is moving from being hypothesis driven ('I have an idea, let me verify it.') to exploratory ('What correlations can I glean from everyone's data?'). This kind of exploratory analysis will rely on tools for deep data mining." Massive, multi-disciplinary data, Reed said, is growing rapidly and at unprecedented scale.
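To make the contrast between hypothesis-driven and exploratory work concrete, here is a small hypothetical sketch, again illustrative rather than anything shown at TG08: instead of testing one predefined relationship, it scans every pair of columns in a dataset for strong correlations, the "what can I glean from everyone's data" style of analysis Reed described, run here with pandas on a single machine rather than on 100,000 servers.

    # Hypothetical sketch of exploratory correlation mining (illustrative only).
    # Rather than testing one hypothesis, scan all column pairs for strong correlations.
    import numpy as np
    import pandas as pd

    def strong_correlations(df, threshold=0.8):
        """Return column pairs whose absolute Pearson correlation meets the threshold."""
        corr = df.corr()
        cols = corr.columns
        pairs = []
        for i in range(len(cols)):
            for j in range(i + 1, len(cols)):
                r = corr.iloc[i, j]
                if abs(r) >= threshold:
                    pairs.append((cols[i], cols[j], round(float(r), 3)))
        return pairs

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        x = rng.normal(size=1000)
        df = pd.DataFrame({
            "a": x,
            "b": 2 * x + rng.normal(scale=0.1, size=1000),  # strongly tied to "a"
            "c": rng.normal(size=1000),                      # independent noise
        })
        print(strong_correlations(df))  # expect roughly [('a', 'b', 1.0)]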
In discussing next-generation applications and cyberinfrastructure investment, Reed noted that the historical model of "punctuated competitions" is not optimal, since it tends to favor a culture of competition among research centers over long-term collaboration. Research and infrastructure, he noted, mix badly because "it takes a long time to identify appropriate practices and software." Sustainability really matters because software and organizations take time to mature.
Grids and clouds, Reed said, will tend to fuse over time. The rapid growth in the size and capability of commercial computing clouds, as exemplified by work underway at Microsoft, is driven by economics: reliable, centrally hosted infrastructure provides commercially based services. Grids, by contrast, are tailored more to academic agendas, but economic factors will tend to bring these two related service models together in a fusion that is more than the sum of its parts.
Returning to the ratio of risk and reward as he concluded, Reed stressed the need to ask big questions. In Reed's view, there are basically three: 1) biology, the understanding of life and nature; 2) the universe, how matter came to be and how cosmic structure formed; and 3) the human condition, where biology and the universe intersect in the sphere of human creativity and social life. Answering the big questions requires boldness and interdisciplinary partnerships. With the three-fold way of science -- theory, simulation and experiment -- now proven, said Reed, "Great things are ahead. We are positioned to do amazing things."