August 11, 2009
BERKELEY, Calif., Aug. 11 -- As scientists in a wide variety of disciplines increasingly rely on supercomputers and collaboration with colleagues around the world to advance their research, managing and sharing the mountain of data generated by their investigations will soon become a choke point. In order to facilitate such data-intensive research, ESnet, the Department of Energy's high-performance networking facility managed by the Lawrence Berkeley National Laboratory, is receiving $62 million to develop what will be the world's fastest computer network designed specifically to support science.
Funded by the American Recovery and Reinvestment Act, the Advanced Networking Initiative will ensure that the United States stays competitive in science and technology. Specifically, ESnet will develop a prototype 100 gigabits per second (Gbps) Ethernet network to connect DOE supercomputer centers at speeds 10 times faster than current technology.
"This network will serve as a pilot for a future network-wide deployment of 100 Gbps Ethernet in research and commercial networks and represents a major step toward DOE's vision of a 1-terabit -- 1,000 times faster than 1 gigabit -- network interconnecting DOE Office of Science supercomputer centers," said Michael Strayer, head of DOE's Office of Advanced Scientific Computing Research.
"ESnet has always been a service organization," said Steve Cotter, ESnet Department Head at Berkeley Lab. "We exist to enable DOE scientists to do great work at the cutting edge, and to increase the scientific capabilities of the United States. The deployment of a next-generation 100 Gbps network will ensure that we continue to provide state-of-the-art services to our constituents and continue to enable scientific discovery."
At a time when economic conditions are forcing private companies to cut back on investment in research and development, ESnet will work with telecommunications companies and hardware vendors to bring the latest networking technologies to market and deploy them in this pre-standards prototype network.
As planned, some of the $62 million for this Initiative will be used to create new jobs for network and software engineers at Berkeley Lab, but the bulk of the funding will go toward purchasing networking equipment and services from providers with the infrastructure to support the new 100 Gbps technology. In all, up to $59 million will be invested directly in the telecommunications industry in the United States.
In addition to the direct economic benefits of the project, there are induced ones as well. Several studies have shown that network investments provide both immediate and long-term benefits, ranging from higher wages to increased productivity and economic growth. And by developing 100 Gbps technology now, ESnet expects to make 10 Gbps and 1 Gbps networks far more affordable for universities and companies.
A Science-Driven Need for More Bandwidth
DOE scientists are now generating data at the terabyte scale, and datasets will soon be in the petabyte range, or 1,000 terabytes. Moving this much data will require both greater bandwidth and reliability, as well as new protocols to enable these high-speed transfers.
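To get a feel for the scale involved, a back-of-the-envelope calculation (a simple sketch, not part of the original announcement) shows how long a petabyte-scale dataset takes to move at different line rates, assuming ideal conditions with full link utilization and no protocol overhead:

```python
# Back-of-the-envelope transfer times for large scientific datasets.
# Idealized: assumes full link utilization and zero protocol overhead.

def transfer_time_days(dataset_bytes, link_gbps):
    """Days needed to move dataset_bytes over a link_gbps link."""
    bytes_per_sec = link_gbps * 1e9 / 8   # gigabits/s -> bytes/s
    return dataset_bytes / bytes_per_sec / 86400  # seconds -> days

petabyte = 1e15  # 1 PB = 1,000 TB
for gbps in (10, 100):
    print(f"1 PB at {gbps:3d} Gbps: {transfer_time_days(petabyte, gbps):.1f} days")
```

At 10 Gbps a petabyte needs over nine days of continuous transfer even under these ideal assumptions; a 100 Gbps link brings that under a day, which is why both bandwidth and new transfer protocols matter at this scale.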
The study of global climate change is a critical research area where the amount of data being created and accessed is growing exponentially. For example, an archive of past, present and future climate modeling data maintained by the Program for Climate Model Diagnosis and Intercomparison at Lawrence Livermore National Laboratory contains more than 35 terabytes of data and is accessed by more than 2,500 users worldwide. The next-generation archive, however, is expected to contain at least 650 terabytes, and the larger distributed worldwide archive will be between 6 and 10 petabytes.
Another scientific driver for increased bandwidth is the Large Hadron Collider in Switzerland. Within this accelerator -- the world's largest -- millions of protons racing at near the speed of light will collide every second; scientists suspect the outcome of these "subatomic smashups" will provide valuable insights into the origins of matter and dark energy in the Universe. The experiments will generate more data than the international scientific community has ever tried to manage -- up to 100 gigabits per second, to be processed and analyzed by scientists around the globe.
National-scale Test Bed
As part of the Advanced Networking Initiative's approximately $59 million investment in new networking equipment and services, about $8 million to $9 million will go towards a national-scale network test bed for use by the research community and industry to test out new technologies, protocols and applications.
The test bed will consist of advanced network devices and components assembled to give network and middleware researchers the capabilities to prototype ESnet capabilities anticipated in the next decade. As host of the test bed, ESnet will develop strategies to move mature technologies from testing mode to production service.
ESnet, formally known as the Energy Sciences Network, is already one of the world's most advanced research networks, with tools that allow scientists to reserve network capacity in advance, guaranteeing bandwidth and service at specified times. Its current network, ESnet4, received a 2009 Excellence.gov award for innovative use of technology from the Industry Advisory Council.
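The advance-reservation model described above can be pictured with a short sketch. Everything here -- the `Reservation` and `Scheduler` names and their fields -- is hypothetical illustration of the general idea, not ESnet's actual reservation interface:

```python
# Hypothetical sketch of advance bandwidth reservation -- illustrative only,
# not ESnet's real tooling. A request is granted only if all reservations
# overlapping its time window fit within the link's total capacity.
from dataclasses import dataclass

@dataclass
class Reservation:
    start: float   # window start, epoch seconds
    end: float     # window end, epoch seconds
    gbps: float    # bandwidth requested

class Scheduler:
    def __init__(self, capacity_gbps):
        self.capacity = capacity_gbps
        self.booked = []

    def request(self, res):
        # Total bandwidth already committed during the requested window.
        overlap = sum(r.gbps for r in self.booked
                      if r.start < res.end and res.start < r.end)
        if overlap + res.gbps <= self.capacity:
            self.booked.append(res)   # capacity available: guarantee it
            return True
        return False                  # would oversubscribe the link

sched = Scheduler(capacity_gbps=100)
sched.request(Reservation(0, 3600, 60))      # granted
sched.request(Reservation(1800, 5400, 60))   # denied: overlaps, 120 > 100
sched.request(Reservation(3600, 7200, 60))   # granted: windows don't overlap
```

The key property this toy scheduler shares with real advance reservation is admission control: by refusing requests that would oversubscribe a window, every granted reservation can be honored at its scheduled time.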
ESnet is primarily funded by DOE's Office of Science, one of the nation's largest supporters of scientific research. Managed and operated by the ESnet staff at Berkeley Lab, ESnet provides high-bandwidth network connections to more than 40 sites conducting DOE-funded research, including some 20 large-scale experimental facilities and large supercomputing centers used by thousands of DOE scientists generating massive amounts of data. One goal of the project is to provide a 100 Gbps link between DOE's largest unclassified supercomputing centers in California, Illinois, and Tennessee.
Berkeley Lab is a U.S. Department of Energy national laboratory located in Berkeley, California. It conducts unclassified scientific research and is managed by the University of California for the DOE Office of Science. Visit our Web site at http://www.lbl.gov.
Source: Lawrence Berkeley National Laboratory