November 09, 2010
Nov. 9 -- NASA will showcase the latest achievements in climate simulation, space exploration, aeronautics engineering, science research and supercomputing technology at the 23rd annual Supercomputing 2010 (SC10) meeting.
The leading international conference on high-performance computing, networking, storage and analysis will be held Nov. 13-19, 2010, at the Ernest N. Morial Convention Center in New Orleans.
NASA's SC10 exhibit will feature nearly 50 demonstrations, including high-resolution simulations of Hurricane Katrina that offer new insight into how tropical storms form and develop. Such simulations could potentially save lives and reduce property damage. Scientists also will present modeling and simulation projects to predict and analyze potential and actual sources of debris that pose risks to remaining space shuttle missions during launch and in orbit; to design and develop next-generation heavy-lift and multipurpose crew vehicles for future space exploration; and to help reduce aircraft landing-gear noise, a major source of noise pollution near metropolitan airports.
"Our advanced modeling and simulation tools and expertise are integral to scientific and engineering advancements throughout NASA," said Rupak Biswas, chief of the NASA Advanced Supercomputing (NAS) Division at NASA's Ames Research Center in Moffett Field, Calif. "Combined with the power of supercomputers, massive data storage, high-speed networks, computer science expertise and visualization technologies, these numerical computations are critical to agency work ranging from designing more efficient rotorcraft, to advancing our understanding of global climate change, to designing and analyzing new space crew modules, just to name a few."
The high-end computing operations at both the NAS facility at Ames and the NASA Center for Climate Simulation (NCCS) at the agency's Goddard Space Flight Center in Greenbelt, Md., have undergone significant expansions to handle the ever-increasing need for computational resources, particularly for Earth science research.
This year, the NAS facility completed a series of extensions to NASA's largest supercomputer, Pleiades. The agency increased the system to 84,992 cores, achieving a peak performance of over one petaflop, the ability to do more than one quadrillion floating point operations per second.
Pleiades is one of the most cost-effective supercomputers in the world. The recent expansion, in part, supports the NASA Earth Exchange, a new collaboration platform for the Earth science community that provides a mechanism for scientific collaboration and knowledge sharing.
In October 2010, NCCS doubled the capacity of its Discover supercomputer. The new cluster provides a scalable system with significantly reduced floor space and highly efficient power and cooling. Discover's combined 29,368 cores yield a peak performance of more than 320 teraflops.
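The core counts and peak ratings quoted above imply a per-core peak throughput in the low tens of gigaflops, typical of server processors of that era. As a back-of-envelope check (the per-core figures below are derived from the article's numbers, not quoted by NASA):

```python
# Derive approximate peak per-core throughput for the two NASA systems
# described in the article. Core counts and peak performance are taken
# from the text; the per-core rates are illustrative arithmetic only.

systems = {
    "Pleiades": {"cores": 84_992, "peak_flops": 1.0e15},   # ~1 petaflop
    "Discover": {"cores": 29_368, "peak_flops": 320e12},   # ~320 teraflops
}

for name, s in systems.items():
    per_core_gflops = s["peak_flops"] / s["cores"] / 1e9
    print(f"{name}: ~{per_core_gflops:.1f} GFLOPS per core (peak)")
# Pleiades works out to roughly 12 GFLOPS per core,
# Discover to roughly 11 GFLOPS per core.
```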
"Discover already has begun hosting climate simulation runs for the next Intergovernmental Panel on Climate Change Assessment Report that will go back a full millennium and forward to 2100," said Phil Webster, NCCS project manager and chief of the Computational and Information Sciences and Technology Office at Goddard. "With our newest processors, NASA scientists plan to perform global weather and climate simulations at resolutions approaching one kilometer, which is the fidelity of many satellite observations."
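A one-kilometer global grid is an enormous computational target. A rough sketch of the scale, using Earth's surface area (the 72 vertical levels are an illustrative assumption, not a figure from the article):

```python
# Rough scale of a global atmospheric model grid at ~1 km horizontal
# resolution, the target mentioned in the article. Earth's radius is a
# known constant; the 72 vertical levels are an assumed, illustrative
# value, not quoted by NASA.

import math

earth_radius_km = 6371.0
surface_area_km2 = 4 * math.pi * earth_radius_km**2   # ~5.1e8 km^2

columns = surface_area_km2 / 1.0**2   # one column per 1 km x 1 km cell
levels = 72                           # assumed number of vertical levels
total_cells = columns * levels

print(f"~{columns:.2e} columns, ~{total_cells:.2e} grid cells")
# On the order of 5e8 columns and a few tens of billions of grid cells.
```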
Demonstrations in NASA's exhibit (booth #3839) represent work by researchers at Ames; Goddard; NASA's Glenn Research Center in Cleveland; NASA's Langley Research Center in Hampton, Va.; and NASA's Jet Propulsion Laboratory in Pasadena, Calif., in addition to NASA's various university and corporate partners.
For more information about NASA's exhibit at the SC10 meeting, visit http://www.nas.nasa.gov/SC10.
For information about NASA's High-End Computing Program, visit http://www.hec.nasa.gov.
For information about the SC10 meeting, visit http://sc10.supercomputing.org.