November 18, 2008
HP BladeSystem dominates list as customer demand for energy-efficient, standards-based solutions catapults it over proprietary IBM offerings
PALO ALTO, Calif., Nov. 17 -- SC08 -- For the second consecutive year, the powerful and energy-efficient HP BladeSystem c-Class server has dominated the TOP500 list of the world's largest supercomputing installations by delivering a flexible architecture that provides customers with measurable cost, space and energy savings.
Including systems built on HP ProLiant architectures, HP now commands a total of 41.8 percent of systems on the TOP500 list, while IBM slipped to 37.6 percent.
HP BladeSystem powers 40.2 percent of the systems on the most recently announced list; this represents more blade installations than all other vendors combined. Versatile, energy-efficient and affordable, HP blade servers provide customers with the maximum density required for high-performance and scale-out computing.
With 201 placements, HP BladeSystem's share of the TOP500 list has grown by 5 percentage points since the June 2008 ranking and by 10 percentage points since June 2007. The number of high-performance computing (HPC) installations on the TOP500 list using blade servers has increased more than that of any other single computing architecture. In fact, blade-powered systems are increasingly replacing proprietary systems in HPC and legacy mainframe architectures in commercial environments.
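The share figures above follow directly from the list size: the TOP500 always ranks exactly 500 systems, so a vendor's share is simply placements divided by 500. A minimal sanity check (an illustrative sketch, not HP's or TOP500's own tooling):

```python
# The TOP500 list always contains exactly 500 systems, so vendor
# share is placements / 500, expressed as a percentage.
def top500_share(placements: int, list_size: int = 500) -> float:
    """Return a vendor's share of the TOP500 list as a percentage."""
    return 100.0 * placements / list_size

# 201 HP BladeSystem placements on the November 2008 list:
print(round(top500_share(201), 1))  # -> 40.2
```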
"Customers can maximize their high-performance computing investments while increasing energy efficiency with blades, clearly improving their bottom line," said Christine Martino, vice president and general manager of Scalable Computing and Infrastructure organization at HP. "The continued dominance of HP BladeSystem customers on the TOP500 list demonstrates the growing market demand for industry-standard architectures that address a broader set of computing challenges at a far lower cost than proprietary systems and mainframes."
Emphasizing the strong momentum of HP blade technology in the market, the HP ProLiant BL2x220c G5 powers several of the most power-efficient industry-standard supercomputing clusters, including those at WETA Digital Ltd. in New Zealand, Cyfronet in Poland and Columbia University in New York City. The BL2x220c G5 delivers a ratio of up to 260 megaflops per watt running the TOP500 Linpack benchmark across a single 32-node enclosure.
This performance benefit, coupled with double the performance per rack, positions the HP BL2x220c as the leading server blade for customers that need maximum application performance without the additional infrastructure costs.
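The efficiency figure quoted above is sustained Linpack performance divided by power draw. As an illustration of how that metric is computed (the numbers below are hypothetical, not HP's measurements):

```python
def mflops_per_watt(rmax_gflops: float, power_watts: float) -> float:
    """Energy efficiency in megaflops per watt: Linpack Rmax over power draw."""
    return rmax_gflops * 1000.0 / power_watts

# Hypothetical example: a 10-teraflop Linpack run drawing 40 kW
# works out to a ratio in the same range as the one quoted above.
print(round(mflops_per_watt(10_000, 40_000)))  # -> 250
```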
According to IDC's worldwide HPC server Qview report, HP is the leading provider of HPC servers with 37 percent of the overall market based on revenue in the second quarter of 2008.
"Over the last several years, we've seen an explosive growth of blade servers for a widening range of high-performance computing applications -- from digital media creation and online gaming to more traditional HPC applications such as computer-aided design," said Earl Joseph, program vice president of High-performance Computing at IDC Research. "Previously, customers' only choice for HPC was a high-end, multi-million dollar supercomputer. Now, blades offer a highly flexible, scalable, lower-budget alternative to the proprietary systems that historically dominated the TOP500 list."
Top-ranking HP customers
Having recently doubled the size of its supercomputing cluster configuration, Academy Award-winning animation company WETA Digital is now ranked at positions 101-104 on the TOP500 list. The new system consists of four supercomputing clusters and is powered by 1,280 HP BL2x220c server blades, which provide an accumulated peak performance of 205 teraflops.
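The aggregate figure implies a per-blade contribution that can be checked with back-of-envelope arithmetic (assuming, for illustration only, that peak performance is spread evenly across the blades; the per-blade number below is not an HP specification):

```python
# Rough even-split check on WETA's aggregate peak-performance figure.
total_peak_tf = 205          # accumulated peak in teraflops, from the release
blades = 1280                # HP BL2x220c server blades in the system
per_blade_gf = total_peak_tf * 1000 / blades   # gigaflops per blade
print(round(per_blade_gf, 1))  # -> 160.2
```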
As a result of the increased application performance and improvements in energy efficiency, WETA Digital has the processing density to produce cutting-edge digital animation faster, while still lowering operations overhead.
"In the world of visual effects, finding technology that is faster and energy-efficient is one of the most influential components to maintaining a competitive edge in this crowded marketplace," said Paul Ryan, chief technology officer at WETA Digital.
"HP's new BL2x220c has enabled us to double our processing capacity in the existing physical datacenter space. As a result, we've been able to increase capacity without building out our datacenter or experiencing additional power consumption costs associated with cooling hundreds of blades," added Adam Shand, systems team lead, WETA Digital.
Also making a mark on the TOP500 list is India's Centre for Development of Advanced Computing (C-DAC). C-DAC's "PARAM Cluster" is ranked 69th on the list, with a system powered by 288 HP ProLiant DL580 G5 servers that offers a peak performance of 54 teraflops.
About the rankings
The TOP500 ranking of supercomputers is released twice a year by researchers at the University of Tennessee, the University of Mannheim in Germany, and NERSC at Lawrence Berkeley National Laboratory. The list ranks supercomputers worldwide based on the Linpack benchmark, a yardstick of performance that reflects processor speed and scalability.
More information about HP HPC is available at www.hp.com/go/hpc.
Visit HP in booth 1518 at the SC08 supercomputing tradeshow in Austin, Texas, Nov. 17-20, for demonstrations of the company's HPC offerings.
HP (NYSE:HPQ), the world's largest technology company, provides printing and personal computing products and IT services, software and solutions that simplify the technology experience for consumers and businesses. HP completed its acquisition of EDS on Aug. 26, 2008. More information about HP is available at http://www.hp.com/.
Source: Hewlett-Packard Development Company, L.P.