July 17, 2008
BLACKSBURG, Va., July 17 -- The Green500 List debuted in November 2007 and ushered in a new era of energy-efficient supercomputing. The list ranks the most energy-efficient supercomputers in the world and serves as a complementary view to the TOP500 List.
The second edition was released in February 2008, and the third edition now arrives on the heels of the recent International Supercomputing Conference, arguably one of the "greenest" conferences to date.
Wu Feng, a member of both the computer science and the electrical and computer engineering departments in Virginia Tech's College of Engineering and founder of the Green500, said there were several "notable highlights from this edition of the list."
First, the first sustained petaflop supercomputer, Roadrunner, developed at the U.S. Department of Energy's Los Alamos National Laboratory, exhibits extraordinary energy efficiency. Roadrunner, the top-ranked supercomputer on the TOP500, is ranked third on the Green500.
"This achievement provides evidence that energy efficiency is becoming as important as raw performance for modern supercomputers and that energy efficiency and performance can co-exist. For comparison, the last two supercomputers to top the TOP500 are now No. 43 and No. 499 on the Green500," Feng explained.
"Los Alamos National Laboratory recognized the performance opportunities of Cell, and accelerators in general, early on. That's what made a petaflop possible. IBM is very energy conscious, and their design of the QS22 is the reason that three QS22-based systems, including our own Roadrunner supercomputer, are at the top of The Green500 List," said Andy White, deputy associate director at Los Alamos National Laboratory.
"The Roadrunner supercomputer is akin to having the fastest Formula One race car in the world but with the fuel efficiency of a Toyota Prius," Feng added.
Second, nearly one in every three supercomputers on the Green500 List now achieves more than 100 megaflops/watt (where megaflops stands for millions of floating-point operations per second), whereas in the previous edition of the Green500, only one in every seven supercomputers did. On a related note, the top-ranked Green500 supercomputer has improved by 131 megaflops/watt since November 2007, whereas the bottom-ranked Green500 supercomputer has improved by only 0.39 megaflops/watt, a difference of nearly three orders of magnitude in improvement.
Third, exactly three supercomputers surpassed the 400 megaflops/watt milestone for the first time. All three machines are based on IBM's BladeCenter QS22 chassis with the Cell processor, the processor that also serves as the basis for the Sony PlayStation 3.
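For context, the Green500 efficiency figure is simply sustained LINPACK performance divided by total measured system power. The following is a minimal sketch of that calculation in Python; the system figures used are illustrative placeholders, not official Green500 or TOP500 measurements.

    # Sketch of the Green500 efficiency metric: sustained LINPACK performance
    # divided by total measured system power. The numbers below are illustrative
    # placeholders, not official list values.

    def megaflops_per_watt(rmax_gflops: float, power_kw: float) -> float:
        """Return energy efficiency in megaflops/watt."""
        megaflops = rmax_gflops * 1_000.0   # gigaflops -> megaflops
        watts = power_kw * 1_000.0          # kilowatts -> watts
        return megaflops / watts

    # Hypothetical petaflop-class system: 1,000,000 gigaflops at 2,400 kW
    # works out to roughly 417 megaflops/watt, i.e. above the 400 megaflops/watt milestone.
    print(f"{megaflops_per_watt(1_000_000, 2_400):.1f} megaflops/watt")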
Based on feedback from e-mails to the Green500 and from a recent Birds-of-a-Feather session at the International Supercomputing Conference, the Green500 will evolve to be more inclusive for all high-end computing stakeholders. Additional developments will be posted on the Green500 Web site.
Feng said, "The organizers of the Green500 welcome further analysis of the data for additional takeaways and further encourage raising awareness in energy-efficient or green supercomputing. We also encourage fair use of the list rankings to promote energy efficiency in high-end computing systems and discourage use of the list to disparage."
This story can be found on the Virginia Tech News Web site: http://www.vtnews.vt.edu/story.php?relyear=2008&itemno=457.
Source: Christina Daniilidi, Virginia Tech