September 09, 2008
FRAMINGHAM, Mass., Sept. 9 -- Factory revenue for the HPC technical server market grew 10 percent(1) over the first quarter and 4 percent compared to the same period last year, reaching $2.5 billion in the second quarter of 2008 (2Q08), according to Jie Wu, IDC research manager for technical computing. Second-quarter shipments of server system units in the HPC market totaled 45,000, down 5 percent from the first quarter, while ASPs were up 16 percent compared to 1Q08 due to an increase in higher-end system sales and softness in lower-priced x86 servers. Wu said the second-quarter 2008 HPC server revenue leaders were HP with 37 percent market share, IBM with 27 percent, and Dell with 16 percent.
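The quarterly figures are internally consistent: average selling price is revenue divided by units shipped, so revenue up roughly 10 percent on roughly 5 percent fewer units implies about the 16 percent ASP increase cited. A minimal sketch of that check (Python, using only the percentages quoted above):

```python
# Consistency check: ASP = revenue / units, so the quarter-over-quarter
# change in ASP follows from the revenue and unit-shipment changes.
revenue_growth = 0.10   # 2Q08 revenue up 10% vs. 1Q08
unit_growth = -0.05     # 2Q08 unit shipments down 5% vs. 1Q08

asp_change = (1 + revenue_growth) / (1 + unit_growth) - 1
print(f"Implied ASP change: {asp_change:.1%}")  # ~15.8%, in line with the 16% reported
```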
The HPC portion of the overall server market follows a different pattern due to the nature of spending on R&D projects. This was a strong quarter for the mid and high end of the market, as shown by the increase in overall ASP (average selling price). Government and university buyers have longer-term budget cycles that are not immediately affected by economic slowdowns; economic shifts take more than a year to work their way into these budgets. In addition, IDC finds that many industrial buyers are still investing in R&D to find ways to stay competitive in a tighter economy. IDC is closely watching the market to see when the broader economic softness begins to affect HPC spending.
"After more than a year of cross-analysis involving IDC's technical computing and enterprise server teams, we found ways to make OEM reporting of HPC server revenue even more consistent and robust," said Vernon Turner, senior vice president of IDC's Enterprise Infrastructure, Consumer, and Telecom research. "Our enhanced methodology has a major impact on better accounting for HPC server, storage, software and service revenues by more accurately separating server revenue from the other HPC revenue categories."
Turner said IDC has also enhanced its HPC market tracking to include peak performance and price/performance metrics.
The enhancements incorporated into IDC's HPC tracking methodology include three major improvements.
"These changes reduce the 2007 overall HPC server market revenues, on a pro forma adjusted basis, to just over $10 billion. IDC projects that the HPC server market will grow at a compound annual growth rate (CAGR) of 9.2 percent to reach $15.6 billion in 2012. This rate is in line with IDC projections before the recent methodology enhancements. The continued strong growth of this market is being driven primarily by clusters and newer processor technologies," according to Earl Joseph, IDC program vice president for HPC.
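The projection quoted above can be checked arithmetically: compounding a 2007 base of just over $10 billion at 9.2 percent per year for five years lands at roughly $15.5 billion, consistent with the $15.6 billion figure. A sketch, assuming a $10 billion base for simplicity:

```python
# Sanity check on the quoted projection: compound a 2007 base of
# "just over $10 billion" at a 9.2% CAGR for five years (2007 -> 2012).
base_2007 = 10.0        # $ billions (assumed; the release says "just over $10 billion")
cagr = 0.092
years = 5

projection_2012 = base_2007 * (1 + cagr) ** years
print(f"Projected 2012 revenue: ${projection_2012:.1f}B")  # ~$15.5B, matching $15.6B from a slightly higher base
```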
"Powered by their price/performance advantage, clusters now dominate all segments of the HPC market. In addition, the HPC market is seeing a shift toward fatter nodes as multicore technology becomes pervasive. This is also driving requirements for larger and faster memories, along with improved interconnect technologies," Joseph said.
(1) The quarterly comparisons are based on new adjusted figures for the earlier periods (1Q08 and 2Q07). The adjustments reflect the enhancements IDC made to its industry-leading HPC tracking methodology in the second quarter (2Q08).
IDC is the premier global provider of market intelligence, advisory services, and events for the information technology, telecommunications, and consumer technology markets. IDC helps IT professionals, business executives, and the investment community make fact-based decisions on technology purchases and business strategy. More than 900 IDC analysts provide global, regional, and local expertise on technology and industry opportunities and trends in over 90 countries worldwide. For more than 43 years, IDC has provided strategic insights to help our clients achieve their key business objectives. IDC is a subsidiary of IDG, the world's leading technology media, research, and events company. You can learn more about IDC by visiting www.idc.com.
Large-scale, worldwide scientific initiatives rely on cloud-based systems both to coordinate work and to absorb peak computational demand that exceeds their combined in-house HPC resources. Last week at Google I/O, Brookhaven National Lab's Sergey Panitkin discussed the role of the Google Compute Engine in providing computational support to ATLAS, a detector of high-energy particles at the Large Hadron Collider (LHC).
The Xeon Phi coprocessor might be the new kid on the high performance block, but among the first-rate kickers of the Intel tires, the Texas Advanced Computing Center (TACC) got the first real jab with its new top-ten Stampede system. We talk with the center's Karl Schultz about the challenges of programming for Phi--but more specifically, the optimization...
Although Horst Simon was named Deputy Director of Lawrence Berkeley National Laboratory, he maintains his strong ties to the scientific computing community as an editor of the TOP500 list and as an invited speaker at conferences.
May 16, 2013
When it comes to cloud, long distances mean unacceptably high latencies. Researchers from the University of Bonn in Germany examined the latency issues of doing CFD modeling in the cloud by benchmarking a common CFD code on Amazon EC2 HPC instance types spanning both CPU and GPU cores.
May 15, 2013
Supercomputers at the Department of Energy's National Energy Research Scientific Computing Center (NERSC) have tackled important computational problems such as the collapse of the atomic state, the optimization of chemical catalysts, and now the modeling of popping bubbles.
May 10, 2013
Program provides cash awards of up to $10,000 for the best open-source end-user applications deployed on a 100G network.
May 09, 2013
The Japanese government has revealed its plans to best its previous K Computer effort with what it hopes will be the first exascale system...
05/10/2013 | Cleversafe, Cray, DDN, NetApp, & Panasas | From Wall Street to Hollywood, drug discovery to homeland security, companies and organizations of all sizes and stripes are coming face to face with the challenges (and opportunities) afforded by Big Data. Before anyone can utilize these extraordinary data repositories, however, they must first harness and manage their data stores, using technologies that emphasize affordability, security, and scalability.
04/15/2013 | Bull | "50% of HPC users say their largest jobs scale to 120 cores or less." How about yours? Are your codes ready to take advantage of today's and tomorrow's ultra-parallel HPC systems? Download this white paper by analyst firm Intersect360 Research to see what Bull and Intel's Center for Excellence in Parallel Programming can do for your codes.
In this demonstration of the SGI DMF ZeroWatt disk solution, Dr. Eng Lim Goh, SGI CTO, discusses how SGI DMF software reduces costs and power consumption in an exascale (Big Data) storage datacenter.
The Cray CS300-AC cluster supercomputer offers an energy-efficient, air-cooled design based on modular, industry-standard platforms, featuring the latest processor and network technologies and supporting a wide range of datacenter cooling requirements.