August 18, 2006
Combining computer and communications skills, experts at the University of California San Diego are helping colleagues at the California Institute of Technology share the massive amounts of data produced by astronomers' investigations of the cosmos.
For the past three years, astronomers at the California Institute of Technology's Palomar Observatory in Southern California have been using the High Performance Wireless Research and Education Network (HPWREN) as the data transfer cyberinfrastructure to further our understanding of the universe.
HPWREN is staffed by researchers at UC San Diego's San Diego Supercomputer Center (SDSC), Scripps Institution of Oceanography (SIO), and San Diego State University (SDSU).
Recent applications include the study of some of the most cataclysmic explosions in the universe, the hunt for extrasolar planets, and the discovery of our solar system's tenth planet. The data for all this research is transferred via HPWREN from the remote mountain observatory to college campuses hundreds of miles away.
Funded by the National Science Foundation, HPWREN provides Palomar Observatory with a high-speed network connection that helps enable new ways of undertaking astronomy research consistent with the data demands of today's scientists. Specifically, the HPWREN bandwidth allows astronomers to transfer a 100 MB image from a telescope camera at Palomar to their campus laboratories in less than 30 seconds.
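That transfer-time figure checks out with simple arithmetic. As a back-of-the-envelope sketch (not part of the article's reporting), converting the image size from megabytes to megabits and dividing by the link speed gives the idealized transfer time:

```python
def transfer_time_seconds(size_mb: float, link_mbps: float) -> float:
    """Idealized transfer time for a file of size_mb megabytes
    over a link running at link_mbps megabits per second."""
    return size_mb * 8 / link_mbps

# A 100 MB camera image over HPWREN's 45 Mbps backbone:
print(round(transfer_time_seconds(100, 45), 1))  # -> 17.8 seconds
```

At roughly 18 seconds under ideal conditions, the quoted "less than 30 seconds" leaves sensible headroom for protocol overhead and competing traffic.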
"The Palomar Observatory is by far our most bandwidth-demanding partner," says Hans-Werner Braun, HPWREN principal investigator, a research scientist with the San Diego Supercomputer Center at UC San Diego. "Palomar is able to run the 45 megabits-per-second HPWREN backbone flat out and will be able to utilize substantially more bandwidth in the future. The current plan is to upgrade critical links that support the observatory to 155 Mbps and create a redundant 45 Mbps path for a combined 200 megabits-per-second access speed at the observatory."
Last summer astronomers making use of the Palomar 48-inch Samuel Oschin Telescope announced the discovery of what some are calling our solar system's tenth planet. The object has been confirmed to be larger than Pluto. The telescope uses a 161-million-pixel camera -- one of the largest and most capable in the world. HPWREN enables a large volume of data to be moved off the mountain to each astronomer's home base. Modern digital technology with pipeline processing of the data produced enables astronomers to detect very faint moving and transient objects.
To find these objects, the telescope takes a relatively short exposure of a section of the sky. It then goes off and images a pre-arranged sequence of such target fields. After a period of time it comes back and repeats the sequence. Then it does it again after another interval. Any objects that are visible in all three images, but move consistently with respect to the background star field, are solar system objects such as asteroids, comets or Kuiper Belt objects. Because of the large amount of data, pipeline processing is used both to detect such objects and to calculate their preliminary orbits from the initial triplet data. Sedna and the tenth planet, 2003 UB313, were found using this technique, as were a large number of near-Earth asteroids discovered by the Jet Propulsion Laboratory's Near-Earth Asteroid Tracking (NEAT) program.
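The triplet test described above can be sketched in a few lines. This is an illustrative simplification (uniform linear motion and a simple positional tolerance), not the actual NEAT pipeline code:

```python
def is_moving_object(detections, tol=0.5):
    """Given three (time, x, y) detections of the same candidate source,
    return True if it moves consistently (roughly constant velocity)
    across the triplet. Positions are in arbitrary sky units; tol is
    the allowed positional residual at the third epoch."""
    (t1, x1, y1), (t2, x2, y2), (t3, x3, y3) = sorted(detections)
    # Velocity implied by the first pair of exposures...
    vx = (x2 - x1) / (t2 - t1)
    vy = (y2 - y1) / (t2 - t1)
    # ...must predict the third position to within the tolerance,
    # and the source must actually move against the background stars.
    pred_x = x1 + vx * (t3 - t1)
    pred_y = y1 + vy * (t3 - t1)
    moved = (vx, vy) != (0.0, 0.0)
    return moved and abs(pred_x - x3) <= tol and abs(pred_y - y3) <= tol

# A candidate drifting steadily between three exposures is flagged:
print(is_moving_object([(0, 10.0, 5.0), (1, 10.4, 5.2), (2, 10.8, 5.4)]))  # True
# A stationary source (a background star) is not:
print(is_moving_object([(0, 10.0, 5.0), (1, 10.0, 5.0), (2, 10.0, 5.0)]))  # False
```

In practice the pipeline works with many candidate detections per field and must also match detections between exposures, but the consistent-motion test is the heart of the technique.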
The Nearby Supernova Factory piggybacks its hunt for a certain type of exploding star, known as Type Ia supernovae, on the data collected by the NEAT program, then uses observations of these supernovae as "standard candles" for measuring the accelerating expansion of the universe. To date, the survey has discovered about 350 supernovae, including 90 Type Ia supernovae.
Greg Aldering of the University of California's Lawrence Berkeley National Laboratory says, "The recent discovery that the expansion of the universe is speeding up has turned the fields of cosmology and fundamental physics on their heads. The QUEST camera and the speedy HPWREN link are giving us an unprecedented sample of supernovae for pursuing this exciting discovery. The Palomar supernovae will be compared with supernovae from the Hubble Space Telescope and other telescopes to try to determine what is causing this acceleration."
One of the universe's most mysterious and explosive events is the phenomenon known as a gamma-ray burst (GRB). These bursts are briefly bright enough to be visible billions of light years away, but they are difficult to study because they are very short lived and take place at seemingly random locations and times. Astronomers rely on satellites such as Swift, which detect a GRB and immediately relay the information to observers worldwide via the Gamma-Ray Burst Coordinates Network. If a gamma-ray burst occurs when it is dark and clear at Palomar, the observatory's robotic 60-inch telescope immediately slews to the coordinates provided and images the fading optical glow of the explosion.
"The rapid response by the Palomar 60-inch telescope is possible only because of HPWREN. With it we have observed and categorized some of the most distant and energetic explosions in the universe," remarks Shri Kulkarni, MacArthur Professor of Astronomy and Planetary Science and director of the Caltech Optical Observatories. These observations have allowed astronomers to reach new frontiers by classifying the bursts and theorizing about their origins.
For the last decade astronomers have been using indirect methods and giant telescopes (like the Keck in Hawaii) to make their first discoveries of planets outside our solar system (called exoplanets). The smallest telescope at the Palomar Observatory is performing its own search for exoplanets. With a small telescope it is possible to detect a giant Jupiter-sized world that lies close to its parent star. By looking at a great many stars each night the HPWREN-powered Sleuth Telescope hopes to catch such a planet in the act of passing directly in front of its star. Such an eclipse, known as a transit, dims the light of the star by about one percent.
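The quoted one-percent dip follows from the simple geometry of a transit: the fractional dimming is roughly the square of the planet-to-star radius ratio. A minimal sketch using Jupiter and Sun radii (an illustrative calculation, not Sleuth's analysis code):

```python
R_JUPITER_KM = 71_492   # equatorial radius of Jupiter
R_SUN_KM = 696_000      # radius of the Sun

def transit_depth(r_planet_km: float, r_star_km: float) -> float:
    """Fraction of starlight blocked when a planet crosses the stellar disk,
    approximated as the ratio of the two projected disk areas."""
    return (r_planet_km / r_star_km) ** 2

# A Jupiter-sized planet in front of a Sun-like star dims it by about 1%:
print(f"{transit_depth(R_JUPITER_KM, R_SUN_KM):.2%}")
```

Detecting a repeatable one-percent dip is within reach of a small, well-calibrated telescope, which is why a survey like Sleuth can monitor many stars with modest aperture.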
Sleuth is an automated telescope, capable of observing target areas of the night sky without much human interaction. All the required actions are scripted in advance, and a computer running this script is placed in charge of the telescope. The observer can then get a good night's sleep and receive the data in the morning. The automated nature of this procedure allows for remote observing, so the observer need not even be on the mountain.
"Living in the modern age of astronomy has made observing much more efficient. Every night we transfer about 4 gigabytes of data via HPWREN from Sleuth to Caltech in Pasadena. It is on my computer and analyzed before I arrive at work in the morning," says Caltech graduate student Francis O'Donovan. "The ability to process the previous night's data enables us to quickly check the quality of that data. We can then ensure the telescope is operational before beginning the next night's observations."
"The current HPWREN funding supports research into an understanding, prioritization, and policy-based re-routing of data network traffic, something the bursty and predominantly night-time, high-volume observatory traffic is very useful for," explains Braun. "This being alongside other research and education traffic, also including continuous low-volume sensor data with tight real-time requirements, creates an ideal testbed for this network research as well."
The High Performance Wireless Research and Education Network program is an interdisciplinary and multi-institutional UC San Diego research program led by principal investigator Hans-Werner Braun at the San Diego Supercomputer Center and co-principal investigator Frank Vernon at the Scripps Institution of Oceanography. HPWREN is based on work funded by the National Science Foundation. The HPWREN web site is at http://hpwren.ucsd.edu/.
More information on the tenth planet and how it was found can be seen at: http://www.gps.caltech.edu/~mbrown/planetlila/index.html and http://www.astro.caltech.edu/palomar/survey.html
For more on Palomar's gamma-ray burst research, go to: http://www.astro.caltech.edu/palomar/grb.html and http://www.astro.caltech.edu/palomar/exhibits/grb/
For more information on Sleuth: http://www.astro.caltech.edu/~ftod/tres/sleuth.html
Source: Paul K. Mueller, UCSD; Caltech