January 29, 2009
Well, the boom times for high performance computing couldn't last forever. The global recession has reached all the way into the HPC market. According to new reports released this month from analyst firms IDC and Tabor Research, HPC server revenue contracted in 2008, and 2009 doesn't look any better.
In this week's issue we get the scoop from the two premier HPC analyst groups: IDC and Tabor Research. Earl Joseph, IDC's program VP for HPC, discusses the revised HPC server numbers and how they're affecting the firm's five-year forecast. Addison Snell, GM for Tabor Research, talks about their new outlook and the market drivers behind it.
Despite rosier forecasts just a few months ago, the new data show that HPC server revenue declined in 2008. IDC estimates HPC server sales were $9.6 billion in 2008, a reduction of 4.2 percent from 2007 revenue. Tabor Research's number for 2008 is $7.83 billion, down a more modest 0.8 percent from its prior-year figure.
At this point IDC is forecasting negative server growth for 2009 as well. Beyond that, it sees modest growth in 2010, returning to its "normal" 9 percent-plus growth trajectory by 2011. Those three lost years mean IDC's five-year forecast has been scaled back significantly. Just four months ago, the analyst group was predicting that the HPC server market would hit $15.6 billion in 2012. Because of the economic slowdown, that number has been pared to $11.7 billion.
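For readers who want to sanity-check those figures, here is a rough back-of-the-envelope sketch (in Python) of the arithmetic behind the reported 2008 decline and the revised trajectory. The implied 2007 baseline and the year-by-year growth rates are assumptions inferred from the percentages quoted above, not IDC's actual model.

```python
# Back-of-the-envelope check of the IDC figures quoted above.
# The per-year growth rates below are illustrative assumptions,
# not IDC's actual forecast model.

rev_2008 = 9.6          # billions USD, IDC's 2008 estimate
decline_2008 = 0.042    # 4.2 percent drop versus 2007

# A 4.2 percent decline to $9.6B implies roughly $10.0B in 2007.
implied_2007 = rev_2008 / (1 - decline_2008)
print(f"Implied 2007 revenue: ${implied_2007:.1f}B")

# Assumed trajectory under the revised outlook: another dip in 2009,
# modest growth in 2010, then ~9 percent per year thereafter.
assumed_growth = {2009: -0.03, 2010: 0.04, 2011: 0.09, 2012: 0.09}

rev = rev_2008
for year, growth in assumed_growth.items():
    rev *= 1 + growth
    print(f"{year}: ${rev:.1f}B")

# Under these assumed rates, 2012 lands around $11.5B -- well short of
# the $15.6B originally forecast, and in the neighborhood of the
# pared-back $11.7B figure cited above.
```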
Tabor Research's Snell points out that servers are actually the weakest segment of HPC spending, since software, storage, network equipment and facilities costs represent a much larger slice of the overall budget. He says "only about a third of an HPC user's budget goes to servers, and this percentage is falling." That's actually a good thing for the broader HPC ecosystem: the closer a vendor gets to selling pure commodities (like x86 servers), the more susceptible it becomes to boom-bust cycles.
So will some HPC vendors be able to dodge the recession? The latest quarterly results from interconnect vendor Mellanox Technologies suggest some HPC companies may be better positioned to ride out the bad economy. Its Q4 results were nothing to write home about, but as a whole, the company grew its revenue by 28 percent in 2008 and is seeing a rapid uptake of its new 40 Gbps InfiniBand products.
But HPC overall is dipping. There's a fairly general consensus that the recession is causing users of all stripes to become more conservative with new IT spending, lengthening procurement cycles. IDC's Joseph notes that the automotive and financial services sectors are particularly stressed right now, to the point that even mission-critical capital expenditures are being slashed. Snell points out that university endowment funds are also suffering, which is likely to depress HPC spending in academia.
Public spending, at least at the federal level, is likely to help buoy the market as governments try to pump life into the economy. Both IDC and Tabor point to the Obama administration's plans to increase science and infrastructure funding as a possible source of new HPC revenue. The stimulus bill that will get this party started is still making its way through Congress, but it's almost assured that we'll see a big chunk of public money headed toward R&D and technology-related spending over the next two years.
At this point, it's a real struggle for analysts to forecast very far into the future. For now, HPC is at the mercy of an economy that is careening from crisis to crisis, where volatility is the only constant. IDC says it intends to update its numbers quarterly to keep its finger on the pulse of the industry. And I'm sure we'll be hearing more from Tabor Research as it collects new data in the months ahead.
Posted by Michael Feldman - January 29, 2009 @ 4:51 PM, Pacific Standard Time
Michael Feldman is the editor of HPCwire.