December 08, 2006
A team of scientists and engineers from Carnegie Mellon University, the University of Texas, the University of California, Davis, and the Pittsburgh Supercomputing Center (PSC) won the Analytics Challenge Award at SC06, the international conference of high-performance computing, networking, data storage and analysis. The award was presented on Nov. 16 in Tampa after a presentation by team leader Tiankai Tu of Carnegie Mellon.
The goal of the team's work is to realistically simulate earthquake ground motion and thereby better assess the seismic hazard to populated earthquake basins. Their award-winning project combines powerful simulation and visualization methods, using PSC's Cray XT3 (BigBen) -- a lead computing resource of the National Science Foundation TeraGrid -- with PSC-developed software that enables real-time visualization. Their coordinated end-to-end approach, which they call Hercules, provides a new capability for scientists and engineers to gather insight from earthquake simulations that use hundreds or thousands of processors simultaneously.
They first applied Hercules in August by simulating the 1994 Northridge earthquake with 1,024 processors of PSC's Cray XT3. Real-time visualization allowed the researchers to view difficult-to-observe physical phenomena. "We were able to see strong concentrations of seismic energy in both the San Fernando Valley and the Los Angeles Basin, while seismic waves in the nearby Santa Monica Mountains and San Gabriel Mountains had dissipated -- a validation that sedimentary basins trap seismic energy during strong earthquakes," said Carnegie Mellon civil and computational engineer Jacobo Bielak, one of the leaders of the Quake Group.
"The stunning real-time visualization is made possible by a new computational technique called end-to-end simulation, where mesh generation, partitioning, solving, visualization and data analysis are performed in place and in parallel on the nodes of a supercomputer," said David O'Hallaron, associate professor of computer science and electrical and computer engineering at Carnegie Mellon, co-leader of the Quake Group.
Hercules relies on software called PDIO (Portals Direct I/O), developed by PSC staff, that supports run-time remote interaction with a parallel program on the Cray XT3. PDIO routes data between the Hercules simulation and a remote laptop or desktop running QuakeShow, a visualization program that lets users change view angles, zoom in or out, and perform other operations -- all while the simulation is running.
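PDIO itself is built on the XT3's low-level Portals interface, but the data path it provides can be pictured roughly as follows: a small rendered product is pushed over the network from the machine to a viewer on a remote laptop or desktop while the solver keeps running. The sketch below is purely illustrative, with a hypothetical host, port, and frame format; it is not the PDIO API.

```python
import socket
import numpy as np

VIEWER_HOST, VIEWER_PORT = "viewer.example.org", 9999   # hypothetical endpoint

def send_frame(frame: np.ndarray) -> None:
    """Push one rendered frame to the remote viewer, length-prefixed."""
    payload = frame.astype(np.float32).tobytes()
    with socket.create_connection((VIEWER_HOST, VIEWER_PORT)) as sock:
        sock.sendall(len(payload).to_bytes(8, "big"))
        sock.sendall(payload)

# Called from inside the solver's time loop, e.g. send_frame(rendered_image),
# so the view on the laptop updates while the simulation is still running.
```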
Inaugurated in 2005 in response to the need for sophisticated analysis and visualization methods to handle the huge volumes of scientific data produced by large-scale parallel computation, the Analytics Challenge honors advanced techniques for solving complex, real-world problems. The initial Analytics Challenge award also went to a project that used the TeraGrid and relied heavily on PSC resources, the SPICE project (Simulated Pore Interactive Computing Environment), led by theoretical chemist Peter Coveney of University College London.
The Quake Group's award-winning project was officially titled "Remote Runtime Steering of Integrated Terascale Simulation and Visualization." The full team comprises Hongfeng Yu, University of California, Davis (technical lead); Tu, Carnegie Mellon (team lead); Bielak, Carnegie Mellon; Omar Ghattas, University of Texas at Austin; Julio C. Lopez, Carnegie Mellon; Kwan-Liu Ma, University of California, Davis; O'Hallaron, Carnegie Mellon; Leonardo Ramirez-Guzman, Carnegie Mellon; Nathan Stone, PSC; Ricardo Taborda-Rios, Carnegie Mellon; and John Urbanic, PSC.
For more information visit http://www.psc.edu/science/2006/inprogress/#Hercules.
Source: Pittsburgh Supercomputing Center