April 1, 2010

The Week in Review

by Tiffany Trader

Here is a collection of highlights from this week’s news stream as reported by HPCwire.

Jülich Supercomputer Simulates Quantum Computer

Bull Extends and Updates bullx Family of Supercomputers

Solarflare Launches Family of 10 Gigabit Ethernet Products

New Intel Xeon Processor Pushes Mission Critical into the Mainstream

Dell Unveils New ‘Powerhouse’ PowerEdge Servers

ScaleMP Raises the Bar for Java Application Performance

Skoda Auto Selects SGI Altix ICE to Accelerate Automotive Innovation

LHC Research Program Gets Underway

Solarflare, Solace Demonstrate Market Data Performance Metrics

AMD Announces Opteron 6000 Series Platform

NASA Extends Contract for Supercomputing Support Services

New Argonne National Labs Supercomputing Center Designed with BIM Technology

Scientists Discover World’s Smallest Superconductor

Allinea Announces DDT for the CUDA Architecture

Barcelona Supercomputing Center Supports Numerous Projects

Cray Wins Again

Today, three announcements came out, all related to Cray winning a contract to provide the National Nuclear Security Administration (NNSA) with a new supercomputer, named Cielo. The primary parties are, of course, Cray and the NNSA, with Panasas filling in the storage details. The supercomputer will be used for heavy-duty modeling and simulation aimed at ensuring the safety, security and effectiveness of the United States’ nuclear stockpile.

The multi-year, multi-phase contract, which Cray valued at more than $45M, includes an option for a future upgrade if the NNSA so chooses. Interestingly enough, a separate announcement put out by the NNSA valued the project at under $54M, so let’s assume the final figure will land somewhere between $45M and $54M.

Cielo, a capability-class platform for the Advanced Simulation and Computing program at the NNSA, will support all three of the NNSA national laboratories: Los Alamos National Laboratory, Sandia National Laboratories and Lawrence Livermore National Laboratory. The supercomputer will be based on Cray’s “Baker” architecture, which builds on the Cray XT system architecture and features a new interconnect chipset known as “Gemini” as well as enhanced system software. Cielo has been designed to support large single jobs capable of utilizing the entire platform.

Cielo will be housed at the Strategic Computing Complex facility at Los Alamos National Laboratory and is expected to be installed in the second half of 2010.

This win by Cray comes on the heels of another big procurement deal announced in February, when the Department of Defense (DOD) selected Cray for all three high performance computing system awards as part of the DOD’s 2010 High Performance Computing Modernization Program (HPCMP). That contract, also worth more than $45 million, was the largest DOD HPCMP system award ever made to a single vendor. The deal likewise involved the Baker architecture, and all three systems are due to be delivered in the second half of the year. Cray has its work cut out for it.

New Weather Forecasting Model Runs 80x Faster on GPUs

The Tokyo Institute of Technology in partnership with NVIDIA has announced a next-generation weather forecasting model, codenamed ASUCA, that uses graphics processing units (GPUs) to greatly reduce the time it takes to run weather simulations. ASUCA takes only 70 minutes to simulate a six-hour meteorological event by using the processing power of NVIDIA Tesla GPUs and the CUDA parallel processing architecture. Previously, the calculation would have taken 5,600 minutes using CPUs. This represents an 80-fold acceleration.

This is not the first time GPUs have been used to improve modeling speeds. The National Center for Atmospheric Research in Boulder, Colo., has been using GPUs in its Weather Research and Forecasting Model (WRF), the most widely used weather model in the world. With partial GPU utilization, NCAR has achieved a 20 percent increase in processing speed. But the Tokyo Tech researchers have achieved full GPU utilization, a feat that had been virtually impossible until now.

From NVIDIA’s nTersect blog:

Now, we’ve reached another milestone for the GPU in the field of atmospheric modeling. A research group led by Professor Takayuki Aoki of the Tokyo Institute of Technology has succeeded in 100 percent utilization of GPUs in the next-generation weather forecasting model, codenamed ASUCA, currently being developed by the Japan Meteorological Agency. ASUCA has a similar feature set to WRF, but because it is fully GPU-optimized, ASUCA runs 80 times faster than weather models running on CPUs alone or on CPU/GPU combinations. In short, it is the fastest solution available today.

With the new forecasting model, extreme weather events, such as hurricanes and tsunamis, can be predicted faster and more precisely, leading to greater preparedness and life-saving measures. Faster and more accurate weather forecasting also leads to improved airline operations and better agricultural outcomes.

The original work is in Japanese, but if you follow the comments section on NVIDIA’s blog, there may be some translations coming down the pike.
