June 30, 2010
NATIONAL CAPITAL REGION, June 30 -- With the latest release of The Green500 List, accelerator-based supercomputers now occupy the top eight slots of the Green500, where the "fuel efficiency" (or energy efficiency) of a supercomputer is defined as millions of floating-point operations per second (MFLOPS) divided by watts (W), or MFLOPS/W. Accelerators are dedicated hardware units that perform certain computations faster than a traditional processor, also known as a central processing unit (CPU).
Green500 co-founder Wu Feng, associate professor of computer science and electrical & computer engineering in the College of Engineering at Virginia Tech, explained the significance of the "fuel efficiency" of these accelerator-based supercomputers. "The accelerator-based supercomputers on The Green500 List produce an average efficiency of 554 MFLOPS/W, whereas the other measured supercomputers on the list produce an average efficiency of 181 MFLOPS/W. That makes the accelerator-based supercomputers on the Green500 more than three times more energy efficient than their non-accelerated counterparts on the list," Feng said.
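The list's efficiency metric and the ratio Feng cites can be checked with a quick calculation. The sketch below is illustrative only: the 100-TFLOPS/250-kW machine at the end is a hypothetical example, not an entry from the actual list.

```python
def green500_efficiency(linpack_mflops: float, power_watts: float) -> float:
    """Green500 'fuel efficiency': achieved LINPACK MFLOPS divided by power draw in watts."""
    return linpack_mflops / power_watts

# Average efficiencies reported for the June 2010 list:
accelerated = 554.0      # MFLOPS/W, accelerator-based systems
non_accelerated = 181.0  # MFLOPS/W, all other measured systems

ratio = accelerated / non_accelerated
print(f"{ratio:.2f}x")  # ~3.06x, i.e. "more than three times" as efficient

# Hypothetical example: a 100-TFLOPS machine drawing 250 kW
eff = green500_efficiency(100e6, 250e3)  # 100 TFLOPS = 100e6 MFLOPS; 250 kW = 250e3 W
print(f"{eff:.0f} MFLOPS/W")  # 400 MFLOPS/W
```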
The accelerator-based supercomputers come in two flavors: the first is based on the custom PowerXCell 8i processor from IBM, and the second is based on commodity graphics processing units (GPUs) from one of two companies, either Advanced Micro Devices' (AMD) ATI technology or NVIDIA. As in the previous edition of the list from November 2009, the former flavor tops the Green500 with three IBM QPACE (quantum chromodynamics parallel computing on the Cell) machines, all tied for first place and located in Germany at the University of Wuppertal, the University of Regensburg, and the Jülich Research Center. The low-power QPACE clusters use the IBM PowerXCell 8i processor, an enhancement of the Cell Broadband Engine originally developed by Sony, Toshiba, and IBM for Sony's PlayStation 3, as well as a network of programmable units called field-programmable gate arrays (FPGAs).
In contrast, China has taken a notably different approach by leveraging the commodity graphics processing unit (GPU) as an accelerator. "Their first GPU-based supercomputer, Tianhe-1, debuted on the Green500's November 2009 list at #8 and used AMD/ATI Radeon HD 4870 GPUs. It has since dropped to #11. Two new GPU-based supercomputers, Dawning TC3600 and Mole-8.5, both from China, leapfrog Tianhe-1 and use NVIDIA C2050 GPUs to debut at #4 and #8, respectively," Feng said. A GPU is traditionally used to accelerate the rendering of two-dimensional (2-D) or three-dimensional (3-D) graphics on a display, such as a laptop or desktop monitor. For the GPU supercomputers from China, however, the GPUs are "re-purposed" to perform general-purpose computation on the GPU, i.e., GPGPU.
The Green500's exploratory Little Green500 List debuted in November 2009 with the intent to further raise awareness by driving energy efficiency as a first-order design constraint on par with performance, or more specifically, speed. For the Little Green500 List, the average efficiency is 199 MFLOPS/W, which is 18 MFLOPS/W higher than the average efficiency of The Green500 List. This efficiency improvement, however, comes at the expense of lower overall performance. "This exploratory list seeks to be more inclusive of the high-performance computing community by being more open in its definition of a supercomputer. The Little Green500 List ranks the energy efficiency of a larger set of supercomputers, where a supercomputer is defined as being as fast as the 500th-ranked supercomputer on the TOP500 List 18 months prior," Feng said.
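The Little Green500's more open definition of a supercomputer amounts to a simple speed threshold. A minimal sketch of that inclusion rule follows; the cutoff value used here is a placeholder, not the actual 500th-place LINPACK score from the TOP500 list 18 months prior.

```python
def little_green500_eligible(rmax_tflops: float, cutoff_tflops: float) -> bool:
    """A system qualifies for the Little Green500 if it is at least as fast
    as the machine ranked 500th on the TOP500 list published 18 months earlier."""
    return rmax_tflops >= cutoff_tflops

cutoff = 9.0  # hypothetical 500th-place LINPACK score (TFLOPS) from 18 months prior
print(little_green500_eligible(12.5, cutoff))  # True: fast enough to qualify
print(little_green500_eligible(5.0, cutoff))   # False: below the historical cutoff
```

Because the cutoff lags the current TOP500 by 18 months, smaller (and often more efficient) machines that fall off the current TOP500 remain eligible, which is what drives the list's higher average efficiency.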
The Green500's exploratory HPCC Green500 List, where HPCC stands for the HPC Challenge benchmarks, was announced in November 2009 and welcomes its first official entry, the Talon supercomputer from Mississippi State University. "In addition to its listing in the HPCC Green500 List, Talon is currently ranked #9 on the Green500 and #331 on the TOP500."
Source: Virginia Tech