January 29, 2009
Well, the boom times for high performance computing couldn't last forever. The global recession has reached all the way into the HPC market. According to new reports released this month from analyst firms IDC and Tabor Research, HPC server revenue contracted in 2008, and 2009 doesn't look any better.
In this week's issue we get the scoop from the two premier HPC analyst groups: IDC and Tabor Research. Earl Joseph, IDC's program VP for HPC, discusses the firm's revised HPC server numbers and how they're affecting its five-year forecast. Addison Snell, GM for Tabor Research, talks about his group's new outlook and the market drivers behind it.
Despite rosier forecasts just a few months ago, the new data suggest the sector suffered server revenue declines in 2008. IDC estimates HPC server sales were $9.6 billion in 2008, a reduction of 4.2 percent from 2007 revenue. Tabor Research's number for 2008 is $7.83 billion, down a more modest 0.8 percent from its prior-year figure.
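For readers who want to back out what those percentages imply about 2007, here's a minimal Python sketch; the only inputs are the figures quoted above, and the variable names are mine.

# Back out the implied 2007 baselines from the 2008 revenue figures
# and the stated year-over-year declines (dollar amounts in billions).
idc_2008, idc_decline = 9.6, 0.042        # IDC: down 4.2% vs. 2007
tabor_2008, tabor_decline = 7.83, 0.008   # Tabor: down 0.8% vs. 2007

idc_2007 = idc_2008 / (1 - idc_decline)
tabor_2007 = tabor_2008 / (1 - tabor_decline)

print(f"IDC implied 2007 revenue:   ${idc_2007:.2f} billion")    # ~$10.02B
print(f"Tabor implied 2007 revenue: ${tabor_2007:.2f} billion")  # ~$7.89B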
At this point IDC is forecasting negative server growth for 2009 as well. Beyond that, it sees modest growth in 2010, with a return to its "normal" 9 percent-plus growth trajectory by 2011. Those three lost years mean IDC's five-year forecast has been scaled back significantly. Just four months ago, the analyst group was predicting the HPC server market would hit $15.6 billion in 2012. Because of the economic slowdown, that number has been pared to $11.7 billion.
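To get a feel for how those lost years compound, here's a rough back-of-the-envelope projection in Python. Only the 2008 base and the two 2012 endpoints come from IDC; the per-year growth rates are my own illustrative assumptions, since no year-by-year breakdown was published.

# Rough projection of IDC's revised HPC server forecast. Only the
# 2008 base ($9.6B) and the 2012 endpoints ($11.7B revised, $15.6B
# prior) come from the article; the yearly rates below are assumed,
# chosen to match the described shape: a down 2009, modest growth
# in 2010, and a return to 9-percent-plus growth by 2011.
revenue = 9.6  # IDC's 2008 estimate, in billions of dollars
assumed_growth = {2009: -0.03, 2010: 0.04, 2011: 0.095, 2012: 0.095}

for year in sorted(assumed_growth):
    revenue *= 1 + assumed_growth[year]
    print(f"{year}: ${revenue:.2f} billion")

Under these assumed rates, the 2012 figure lands near $11.6 billion, in the neighborhood of IDC's revised $11.7 billion and a long way from the $15.6 billion forecast of four months earlier.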
Tabor Research's Snell points out that servers are actually the weakest segment of HPC spending, since software, storage, network equipment and facilities costs together represent a much larger slice of the overall budget. He says "only about a third of an HPC user's budget goes to servers, and this percentage is falling." That's actually a good thing for the market's resilience: the closer a vendor gets to selling pure commodities (like x86 servers), the more susceptible it becomes to boom-bust cycles.
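Snell's one-third figure also implies that total HPC spending runs roughly three times the server number. A quick sketch makes the point; note that the one-third share is his approximation, and applying it to Tabor's server revenue estimate is my own rough extrapolation.

# If servers are only about a third of a typical HPC budget, Tabor's
# 2008 server figure implies a much larger total market.
server_revenue_2008 = 7.83   # billions of dollars, Tabor Research
server_share = 1 / 3         # Snell's "about a third" approximation

implied_total = server_revenue_2008 / server_share
print(f"Implied total 2008 HPC spend: ~${implied_total:.1f} billion")  # ~$23.5B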
So will some HPC vendors be able to dodge the recession? The latest quarterly results from interconnect vendor Mellanox Technologies suggest some HPC companies may be better positioned to ride out the bad economy. Its Q4 results were nothing to write home about, but as a whole, the company grew its revenue by 28 percent in 2008 and is seeing a rapid uptake of its new 40 Gbps InfiniBand products.
But HPC overall is dipping. There's a fairly general consensus that the recession is causing users of all stripes to become more conservative with new IT spending, lengthening procurement cycles. IDC's Joseph notes that the automotive and financial services sectors are particularly stressed right now, to the point that even mission-critical capital expenditures are being slashed. Snell points out that university endowment funds are suffering as well, which is likely to depress academic HPC spending.
Public spending, at least at the federal level, is likely to help buoy the market as governments try to pump life into the economy. Both IDC and Tabor point to the Obama administration's plans to increase science and infrastructure funding as a possible source of new HPC revenue. The stimulus bill that will get this party started is still making its way through Congress, but it's almost assured that we'll see some big chunk of public money headed for R&D and technology-related spending over the next two years.
At this point, it's a real struggle for analysts to forecast very far into the future. For now, HPC is at the mercy of an economy that is careening from crisis to crisis, and volatility is the only constant. IDC says it intends to update its numbers quarterly to keep its finger on the industry's pulse. And I'm sure we'll be hearing more from Tabor Research as it collects new data in the months ahead.
Posted by Michael Feldman - January 29, 2009 @ 4:51 PM, Pacific Standard Time
Michael Feldman is the editor of HPCwire.