December 02, 2005
One of the most powerful computer clusters in the academic world has been created at the California Institute of Technology to unlock the mysteries of earthquakes.
The Division of Geological and Planetary Sciences' new Geosciences Computational Facility will feature a 2,048-processor supercomputer, housed in the basement of the Seeley G. Mudd Building of Geophysics and Planetary Science on campus.
Computer hardware fills long rows of black racks in the facility, each containing about 35 compute nodes. Massive air-conditioning units line an entire wall of the 20-by-80-foot room to recirculate and chill the air. Miles of optical-fiber cable tie the processors together into a working cluster that went online in September.
The $5.8 million parallel computing project was made possible by gifts from Dell, Myricom, Intel, and the National Science Foundation.
Jeroen Tromp, McMillan Professor of Geophysics and director of the Institute's Seismology Lab, spearheaded the project. "The other crucial ingredient was Caltech's investment in the infrastructure necessary to house the new machine," he says. Some 500 kilowatts of power and 90 tons of air conditioning are needed to operate and cool the hardware.
David Kewley, the project's systems administrator, notes that 500 kilowatts is enough to power about 350 average households (roughly 1.4 kilowatts per home).
Tromp's research group will share use of the cluster with other division professors and their research groups, while a job-scheduling system will make sure the facility runs at maximum possible capacity. Tromp, who came to Caltech in 2000 from Harvard, is known as one of the world's leading theoretical seismologists. Until now, he and his Institute colleagues have used a smaller version of the machine, popularly known as a Beowulf cluster. Helping revolutionize the field of earthquake study, Tromp has created 3-D simulations of seismic events. He and former Caltech postdoctoral scholar Dimitri Komatitsch designed a computer model that divides the earth into millions of elements. Each element can be divided into slices that represent the earth's geological features.
In simulations involving tens of millions of operations per second, the seismic waves are propagated from one slice to the next, speeding up, slowing down, and changing direction according to the earth's characteristics. The model is analogous to a CAT scan of the earth, allowing scientists to track seismic wave paths. "Much like a medical doctor uses a CAT scan to make an image of the brain, seismologists use earthquake-generated waves to image the earth's interior," Tromp says, adding that the earthquake's location, origin time, and characteristics must also be determined.
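To make the idea concrete, the sketch below steps a one-dimensional wave through a medium whose wave speed changes partway across, so the pulse speeds up as it crosses into the faster "slice." This is a deliberately simplified finite-difference toy, not the 3-D spectral-element method Tromp and Komatitsch developed; all grid sizes and speeds here are illustrative.

```python
# Toy 1-D analogue of slice-to-slice wave propagation: a wavefield is
# stepped forward in time through a medium whose speed varies by region.
# Illustrative only; the real codes use 3-D spectral elements.
import numpy as np

nx, nt = 400, 1000          # grid points, time steps
dx, dt = 1.0, 0.005         # chosen so max(c)*dt/dx = 0.75 < 1 (stable)
c = np.full(nx, 100.0)      # background wave speed
c[200:] = 150.0             # a faster "slice": waves speed up crossing it

u_prev = np.zeros(nx)       # wavefield at t - dt
u_curr = np.zeros(nx)       # wavefield at t
u_curr[50] = 1.0            # impulsive source (a toy "earthquake")

for _ in range(nt):
    # second-order centered differences in space and time
    lap = np.zeros(nx)
    lap[1:-1] = (u_curr[2:] - 2 * u_curr[1:-1] + u_curr[:-2]) / dx**2
    u_next = 2 * u_curr - u_prev + (c * dt) ** 2 * lap
    u_prev, u_curr = u_curr, u_next

print("final wavefield energy:", float(np.sum(u_curr**2)))
```

On the production machine, the same time-stepping idea is applied to millions of 3-D elements at once, with each processor handling a chunk of the mesh and exchanging boundary values with its neighbors.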
Tromp will now be able to deliver better, more accurate models in less time. "We hope to use the new machine to do much more detailed mapping. In addition to improving the resolution of our images of the earth's interior, we will also quantitatively assess the devastating effects associated with earthquakes based upon numerical simulations of strong ground motion generated by hypothetical earthquakes."
"One novel way in which we are planning to use the new machine is for near real-time seismology," Tromp adds. "Every time an earthquake over magnitude 3.5 occurs anywhere in California we will routinely simulate the motions associated with the event. Scientific products that result from these simulations are 'synthetic' seismograms that can be compared to actual seismograms."
The "real" seismograms are recorded by the Southern California Seismic Network (SCSN), operated by the Seismo Lab in conjunction with the U.S. Geological Survey. Of interest to the general public, Tromp expects that the collaboration will produce synthetic ShakeMovies of recent quakes, and synthetic ShakeMaps which can be compared to real ShakeMaps derived from the data. "These products should be available within an hour after the earthquake," he says. The Seismology Lab Media Center will be renovated with a large video wall on which scientists can show the results of simulations and analysis.
The new generation of seismic knowledge may also help scientists, engineers, and others lessen the potentially catastrophic effects of earthquakes.
"Intel is proud to be a sponsor of this premier system for seismic research which will be used by researchers and scientists," said Les Karr, Intel Corporate Business Development Manager. "The project reflects Caltech's growing commitment, in both research and teaching, to a broadening range of problems in computational geoscience. It is also a reflection of the growing use of commercial, commodity computing systems to solve some of the world's toughest problems."
The Dell equipment consists of 1,024 dual-processor Dell PowerEdge 1850 servers that were pre-assembled for easy implementation. Dell Services representatives came to campus to complete the installation.
"CITerra, as this new research tool is known on the TOP500 Supercomputer list, is a proud accomplishment both for Caltech and for Myricom," said Charles Seitz, founder and CEO of Myricom, and a former professor of computer science at Caltech. "The talented technical team of Myricom about half of whom are Caltech alumni/ae, are eager for people to know that the architecture, programming methods, and technology of cluster computing was pioneered at Caltech 20 years ago. Those of us at Myricom who have drawn so much inspiration from our Caltech years are delighted to give some of the results of our efforts back to Caltech."