November 11, 2009
AUSTIN, Texas, Nov. 11 -- The Ranger supercomputer, one of the most powerful systems in the world for open science research, has run about 1.1 million jobs in under two years.
When it entered full production on Feb. 4, 2008, this first-of-its-kind system marked the beginning of the Petascale Era in high-performance computing (HPC), in which systems approach a thousand trillion operations per second and manage a thousand trillion bytes of data.
"Ranger has already enabled hundreds of research projects and thousands of users to do very large-scale computational science in diverse domains," said Jay Boisseau, director of the Texas Advanced Computing Center (TACC). "We're very proud of the tremendous impact it has had on open science, and the impact is growing as it matures and more researcher applications are optimized to use its tremendous capabilities."
Bill Barth, director of TACC's HPC group, said, "Demand for time on Ranger has been very high, and the system has been instrumental in making TeraGrid the nation's largest resource for open science computational research. The system has run more than 600 million central processing unit hours so far."
As for the user who ran the millionth job, Barth said it was a small post-processing job (16 processors) completed by Dr. Yonghui Weng, research associate, in Professor Fuqing Zhang's hurricane research group at the Pennsylvania State University Department of Meteorology.
"Researchers need to perform a variety of tasks on Ranger and they are all important to the research process," Barth said. " In addition, we have different types of researchers -- ones who are interested in running large single-simulation problems, and ones who are interested in running thousands or millions of really small problems. Our job is to support science at whatever scale."
Weng's research explores the potential of on-demand HPC to support hurricane forecast operations and to evaluate high-resolution ensembles to achieve Hurricane Forecast Improvement Program (HFIP) goals for the development and implementation of the next-generation hurricane forecast system.
Weng said he has been using Ranger consistently since July 2008 to produce improvements in hurricane forecast accuracy. Zhang's hurricane research group at Pennsylvania State is sponsored by grants from the National Science Foundation, Office of Naval Research and the National Oceanic and Atmospheric Administration HFIP project.
"During the hurricane season from July to October, I run an operational hurricane ensemble data assimilation system twice per day, and my team runs an operational deterministic forecast system at the same frequency," Weng said. "In addition to the operational jobs during hurricane season, we use Ranger for sensitivity experiments, model development, and exploration of dynamics and predictability of hurricanes."
To illustrate the variety of ways one researcher can use a system like Ranger, Weng said he ran a cloud-scale ensemble analysis and prediction experiment that used 23,808 processors, and a deterministic forecast job that used 8,192 processors in real time during Hurricane Ike.
"The system is wonderful and I'm impressed with the TACC support staff which make our jobs run so efficiently," Weng said.
"During the first several months of large-scale system deployment, every tweak is important," Barth said. "As time goes on the system settles out and begins to operate as a well-oiled machine. It's still many people's full-time jobs to keep Ranger running, but at the same time we can start to think about deploying new systems."
The Ranger supercomputer is funded through the National Science Foundation (NSF) Office of Cyberinfrastructure "Path to Petascale" program. The system is a collaboration among the Texas Advanced Computing Center (TACC), The University of Texas at Austin's Institute for Computational Engineering and Sciences (ICES), Sun Microsystems, Advanced Micro Devices, Arizona State University, and Cornell University. The Ranger supercomputer is a key system of the NSF TeraGrid, a nationwide network of academic HPC centers, sponsored by the NSF Office of Cyberinfrastructure, which provides scientists and researchers access to large-scale computing, networking, data-analysis and visualization resources and expertise.
Source: Texas Advanced Computing Center