June 07, 2012
SUNNYVALE, Calif., June 7 -- IBM again took the top spot in High Performance Computing (HPC) system vendors named by end users worldwide, according to the latest report by HPC industry analyst firm Intersect360 Research. IBM, Dell, HP, SGI, and Cray collectively captured 56% of system vendor mentions, according to the newly released HPC User Site Census: Systems report.
The report, part of Intersect360 Research's HPC Market Advisory service, provides a detailed examination of the computational systems installed at a broad sample of HPC user sites, including analysis of component technologies such as processors and accelerators. Through its partnership with Tabor Communications, Intersect360 Research surveyed the worldwide readership of HPCwire. Future years' Site Census surveys will leverage the newly created HPC500 group.
"Our goal in this report was to discover system-level trends within the HPC user communities by examining supplier penetration, architecture trends, and node configurations," said Dr. Christopher G. Willard, Ph.D., Chief Research Officer of Intersect360 Research. "As with previous years, we surveyed a broad range of users about their current computer system installations, storage systems, networks, middleware, and the applications software supporting these installations."
Additional findings of the report include the following:
IBM, followed by Dell, was the top named vendor for number of nodes installed when outliers (i.e., systems with 2,000 or more nodes) were excluded.
Two-processor nodes continue to dominate cluster installations at surveyed sites, with a 60% share. Four-processor nodes account for about 14% of the clusters. Both shares have remained relatively constant over the past five years.
Multi-core processors represent the majority of systems shipped since 2006. For recent installations and upgrades, single-core processor share is now in the very low single digits. Four-core processors hold the greatest share, followed closely by six-core processors.
Memory usage per node and per processor is growing at an exponential rate. Memory per core has remained relatively constant over the years; however, the dramatic increase in cores per processor is driving up memory requirements at the node level. This growth in memory requirements risks changing the cost equations for HPC nodes and affecting overall system design.
Companies mentioned in this report include: Ace Computers, Advanced Clustering, Advanced HPC, Amazon, Angstrom, Apple, Appro, Aspen Systems, Atipa, Bull, ClusterVision, Cray, D.E. Shaw Research, Dell, E4 Computer Engineering, Fujitsu, HP, HPC System, IBM, Intel, Isilon, Linux Networx, Megware, Microway Technology, NEC, Netezza, Nvidia, OmniTech, Oracle, Penguin Computing, PSSC Labs, R Associates, Rackspace, SGI, Silicon Mechanics, Supermicro, Tibco, T-Platforms, V3Gaming, GPU-Xpander, VA Linux, and Western Scientific.
An Executive Summary of this report is available for download at www.intersect360.com/industry/reports.
Other reports in this series include: HPC User Site Census: Processors; HPC User Site Census: Applications; HPC User Site Census: Interconnects/Networks; and HPC User Site Census: Storage.
About Intersect360 Research
Intersect360 Research is a market intelligence, research, and consulting advisory practice focused on suppliers, users, and policymakers across the High Performance Computing ecosystem. Intersect360 Research relies on both user-based and supplier-based research to form a complete perspective of HPC market dynamics, trends, and usage models, including both technical and business applications.
More information on Intersect360 Research can be found at: www.intersect360.com. More information on HPC500 can be found at: www.hpc500.com.