November 16, 2010
Industry standards-based HP ProLiant servers boost research customers to top of worldwide supercomputing list
NEW ORLEANS, Nov. 16 -- HP today announced that HP ProLiant servers have accelerated critical research for three leading scientific institutions, enabling them to satisfy unprecedented computational demands at lower cost and with optimal power efficiency.
Tokyo Institute of Technology (Tokyo Tech), Georgia Institute of Technology (Georgia Tech) and MD Anderson Cancer Center looked to HP for high-performance servers to help speed their scientific and medical achievements while advancing to top spots on the TOP500 list of the world's largest supercomputing installations.
Tokyo Tech needed a supercomputer that could deliver leading performance within strict power and space constraints. To meet its computing needs, HP built its first petascale system, the TSUBAME 2.0, designed to support applications in climate and weather forecasting, tsunami simulation and computational fluid dynamics.
The TSUBAME 2.0 supercomputer took the No. 4 position on the TOP500 list. With power-efficient HP ProLiant servers and the HP Modular Cooling System,(2) TSUBAME 2.0 is one of the most energy-efficient supercomputers in the world.
Georgia Tech's Keeneland System, which supports scientific discovery, claimed the No. 117 position on the list. The project is funded by a five-year, $12 million Track 2D grant awarded by the National Science Foundation.
Using the HP Unified Cluster Portfolio and HP StorageWorks X9000 Network Storage Systems, MD Anderson Cancer Center at the University of Texas placed at No. 169 on the TOP500 list. HP technology enabled the center to leverage its existing storage area network (SAN) infrastructure to create a flexible, virtual resource pool that can be utilized by researchers regardless of their computational needs.
HP leads the market with standards-based server platforms, including the HP BladeSystem c-Class platform and HP ProLiant servers, which deliver supercomputer-class performance in less space and with less power. These systems enable the scientific community to satisfy unprecedented computational demands at lower cost and maximum density.
The recently introduced HP ProLiant SL390s G7 server now powers 11 of the systems on the TOP500 list. The HP BladeSystem c-Class remains the dominant system architecture in the list with 140 entries.
Tokyo Tech's multivendor solution places fourth
The TSUBAME 2.0 supercomputer delivered to Tokyo Tech is the result of a multivendor collaboration among HP, NEC Corporation, Microsoft, NVIDIA, Intel, Mellanox, Voltaire and DataDirect Networks.
With only 200 square meters of physical space and 1.8 megawatts of available power, Tokyo Tech built TSUBAME 2.0 with HP ProLiant SL390s G7 servers to achieve supercomputer-class performance, while meeting limited space requirements with a skinless, ultra-lightweight design that eliminates extraneous hardware.
TSUBAME 2.0 is powered by 1,357 HP ProLiant SL390s G7 servers, each with three NVIDIA Tesla M2050 general-purpose graphics processing units (GPUs), which deliver an eightfold increase in compute power compared to earlier generations.(1) The system achieved a sustained performance of 1.192 petaFLOPS (quadrillion floating-point operations per second) running the Linpack benchmark on Linux, and 1.12 petaFLOPS on Microsoft Windows.
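For readers who want to sanity-check the scale of the system, here is a minimal back-of-envelope sketch in Python using only the figures quoted above (server count, GPUs per server and the Linux Linpack result). The CPUs' contribution to the Linpack score is ignored, so the per-GPU figure slightly overstates the GPU share:

```python
# Back-of-envelope figures from the release. Assumptions: the GPU count
# per server and the Linpack result are exactly as quoted; the CPU
# contribution is ignored, so per-GPU numbers are upper bounds.

SERVERS = 1357                 # HP ProLiant SL390s G7 nodes
GPUS_PER_SERVER = 3            # NVIDIA Tesla M2050 per node
LINPACK_PFLOPS = 1.192         # sustained Linpack result (Linux)

total_gpus = SERVERS * GPUS_PER_SERVER             # 4,071 GPUs
per_server_gflops = LINPACK_PFLOPS * 1e6 / SERVERS
per_gpu_gflops = LINPACK_PFLOPS * 1e6 / total_gpus

print(f"GPUs in system:       {total_gpus}")
print(f"Sustained per server: {per_server_gflops:.0f} GFLOPS")
print(f"Sustained per GPU:    {per_gpu_gflops:.0f} GFLOPS (CPU share ignored)")
```

Run as written, this reports 4,071 GPUs and roughly 878 GFLOPS of sustained Linpack performance per server.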
"We needed to increase performance and efficiency thirtyfold, despite limits on available power and datacenter space," said Satoshi Matsuoka, project lead and professor, Tokyo Tech. "HP's expertise and technology coupled with our team's experience in large-scale deployments, cloud computing and graphics accelerator technology enabled us to push the frontier of energy and space-efficient supercomputing."
Georgia Tech's Keeneland Initial Delivery System edges into TOP500
HP enabled Georgia Institute of Technology and its partners, including Oak Ridge National Laboratory and the University of Tennessee at Knoxville, to develop the Keeneland Initial Delivery System.
Consisting of 120 HP ProLiant SL390s G7 servers and 360 NVIDIA Tesla M2070 accelerators, the Keeneland System delivers more than 64 teraFLOPS on the Linpack benchmark. This combination of HP ProLiant servers and NVIDIA processors accelerates computational science and data-intensive applications while addressing new challenges in energy efficiency. On Linpack, the initial Keeneland System rates at 677 megaFLOPS per watt. Final delivery of a much larger Keeneland System is scheduled for early 2012.
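Those two figures together imply a rough power envelope for the Linpack run. A minimal sketch, treating "more than 64 teraFLOPS" as exactly 64, so the result is approximate:

```python
# Implied Linpack power draw for the Keeneland Initial Delivery System,
# derived only from the two figures quoted above. Rough estimate:
# "more than 64 teraFLOPS" is treated as exactly 64.

LINPACK_TFLOPS = 64.0          # sustained Linpack performance
EFFICIENCY_MFLOPS_PER_W = 677  # quoted energy efficiency

power_watts = (LINPACK_TFLOPS * 1e12) / (EFFICIENCY_MFLOPS_PER_W * 1e6)
print(f"Implied power draw: {power_watts / 1e3:.1f} kW")   # ~94.5 kW
```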
MD Anderson Cancer Center places on TOP500 List
MD Anderson Cancer Center turned to HP for high-performance computing to address its research challenges, including analyzing, storing and managing the large volumes of data generated by genomics work. Today, MD Anderson runs an HP Converged Infrastructure that is more reliable and robust, as well as easier to maintain.
Upgrading its four-year-old cluster to a new HP Cluster Platform 4000 with 336 HP ProLiant BL465c G7 server blades, and integrating HP StorageWorks X9000 storage into the cluster, MD Anderson increased computing throughput tenfold. As a result, researchers now have more compute power to simulate proton radiation therapies, study the toxicity of radiation treatments, and better understand the genes responsible for cancer, with the eventual goal of providing better treatment for cancer patients. MD Anderson also installed several 32-core, large-memory HP ProLiant DL785 G6 servers to quickly pre-process genomic data for computational analysis.
"We needed high-performance clusters that could keep pace with our exploding volumes of data, but that could centralize and pool data resources to make them accessible across all research departments," said Lynn Vogel, Ph.D., vice president and chief information officer, MD Anderson Cancer Center. "HP's state-of-the-art solutions jump-started our research initiatives, permitting us to map genomes and run billion-particle data sets in a matter of hours instead of days. Our researchers have published over 120 papers in the past few years using our cluster as a resource."
About the rankings
The TOP500 ranking of supercomputers is released twice a year by researchers at the Universities of Tennessee and Mannheim, Germany, and at NERSC/Lawrence Berkeley National Laboratory. The list ranks supercomputers worldwide based on the Linpack benchmark, which solves a dense N-by-N system of linear equations and serves as a yardstick of sustained floating-point performance, reflecting both processor speed and scalability.
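For context on how a Linpack score is derived: HPL, the implementation used for the TOP500, times the solution of the dense system and divides the conventional operation count, 2N³/3 + 2N², by the wall-clock time. A small sketch follows; the problem size and runtime in the example are illustrative assumptions, not figures from any system on the list:

```python
# Linpack (HPL) rates a machine by timing the solution of a dense
# n-by-n linear system Ax = b and applying the standard operation
# count 2n^3/3 + 2n^2 used by the TOP500.

def linpack_flops(n: int, seconds: float) -> float:
    """Sustained FLOPS for an n-by-n Linpack run taking `seconds`."""
    operations = (2.0 / 3.0) * n**3 + 2.0 * n**2
    return operations / seconds

# Hypothetical illustration: an n = 2,500,000 problem solved in
# 8,700 seconds rates at roughly 1.2 petaFLOPS.
print(f"{linpack_flops(2_500_000, 8_700) / 1e15:.2f} petaFLOPS")
```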
More information about HP's high-performance computing solutions is available at www.hp.com/go/hpc.
HP (NYSE:HPQ) creates new possibilities for technology to have a meaningful impact on people, businesses, governments and society. The world's largest technology company, HP brings together a portfolio that spans printing, personal computing, software, services and IT infrastructure to solve customer problems. More information about HP is available at http://www.hp.com.
(1) Eightfold increase in performance is based on internal HP testing compared to previous generations.
(2) Calculation based on the HP Modular Cooling System cooling up to three times the heat load of a standard air-cooled rack.
Source: Hewlett-Packard Development Company, L.P.