IDC: HPC Will Resume Growth After Dipping in 2009

By Nicole Hemsoth

January 29, 2009

‘Tis the season for IDC’s annual HPC market forecast, only this time around it needs to consider a global economic recession. In this exclusive interview, HPCwire quizzes Earl Joseph, IDC’s program vice president for HPC, about what’s in store for 2009.

HPCwire: IDC recently revised its forecast for the HPC market. In a nutshell, what is the new forecast?

Earl Joseph: Based on actual numbers for the first three quarters of 2008 and modeled fourth-quarter numbers, IDC estimates that full-year 2008 HPC server revenue will come in at around $9.6 billion. That’s down 4.2 percent from 2007. Our new forecast predicts HPC server revenue will dip about 5.4 percent in 2009, resume modest growth in 2010, and rebound to 9-percent-plus growth, eventually reaching $11.7 billion in revenue by 2012.
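
For readers who want to sanity-check these figures, here is a minimal arithmetic sketch (not IDC's forecasting model) that applies the quoted percentages to the 2008 estimate and backs out the compound growth rate implied by the $11.7 billion 2012 figure. The smooth growth path after 2009 is an illustrative assumption only.

```python
# Illustrative arithmetic only -- not IDC's forecasting model.
# Applies the percentages quoted in the interview to the 2008 estimate.

revenue_2008 = 9.6                           # billions USD, IDC's modeled full-year 2008 figure
revenue_2009 = revenue_2008 * (1 - 0.054)    # forecast 5.4 percent dip in 2009

# IDC projects a rebound to $11.7B by 2012; the compound annual growth
# rate needed from the 2009 trough over the three years 2010-2012:
target_2012 = 11.7
implied_cagr = (target_2012 / revenue_2009) ** (1 / 3) - 1

print(f"2009 estimate: ${revenue_2009:.2f}B")
print(f"Implied 2010-2012 CAGR: {implied_cagr:.1%}")   # roughly 9 percent, matching the quote
```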

HPCwire: Why did you revise the forecast?

Joseph: We normally do a new five-year HPC market forecast at this time of year. The third-quarter data on HPC servers that we received from the hardware vendors in late 2008 showed that the global economic recession was already throttling down revenue in HPC. That greatly affected our new forecast.

HPCwire: How did you come up with the revised forecast?

Joseph: Each year we go through a specific, careful process to come up with a new five-year forecast. Our HPC team, which includes Jie Wu, Steve Conway, Richard Walsh, and myself, gets together for two full days to go through this process. We looked carefully at the actual data for the prior five years, especially the most recent quarters. We analyzed IDC’s assumptions and projections for the global IT market and the whole server market. From there, we first created a fourth-quarter forecast, which also gives us a full-year 2008 forecast, and then constructed a table of assumptions about the factors most likely to influence the HPC server market in 2009 and beyond. With this foundation, we created our five-year forecast for the HPC server market, the competitive price band segments of the market, and so on.

HPCwire: How confident are you in the forecast?

Joseph: We’re fairly confident about the major assumptions and the general trends in the predictions. We all wish we had a crystal ball that would tell us what is going to happen in the overall world economy. It is always harder to forecast during periods of major ups and downs, and this was one of the hardest times for creating forecasts. Our forecasts are intentionally on the conservative side, and in recent years the HPC market has consistently beaten our forecasts.

Another thing that gives us confidence is that in late 2008, when the impact of the economic downturn was already starting to be felt, we conducted an extensive worldwide, in-depth study of 110 HPC sites of all sizes and in all major sectors. These were two- to three-hour interviews that produced more data points than would fit into Excel. The topics ranged from HPC systems to processors, storage, interconnects, system software, budgets, TCO, and application workloads, and we’re churning out separate reports on these topics right now. We asked the sites not just about what they’re doing today, but also about their requirements for the next round of HPC purchasing, including the attributes that would command premium pricing. Not one of the 110 government, industrial, and academic sites planned to reduce HPC use in 2009, although we expect them to be more conservative about new spending. We received a good general sense, directly from the HPC buyers, of where budgets and spending are headed. That gave us some additional confidence when we put together our forecast.

HPCwire: How does HPC compare with the whole IT market?

Joseph: If our baseline assumptions about the HPC market are right, HPC will come out of the recession the same way it went in, as a bright spot in the IT sector. As I mentioned earlier, we expect HPC to start growing again in 2010 and to be on a robust growth path again by the end of our forecast period in 2012.

HPCwire: What will the main effects of the global economic recession be, where HPC is concerned?

Joseph: We expect users to become more conservative about new spending, but most existing plans will go forward, though there may be delays of a quarter or more in some cases. There will be more focus on cost-effectiveness and this will favor clusters and other standards-based solutions. Competition will heat up for new business, and some weaker vendors may close their doors while stronger ones tighten their belts. More sites will apply simulation and analysis to their existing datacenter designs to grow performance with minimal impact on power, cooling, and facility space. And with tighter controls on capital spending, some increases are expected in CAPEX-free HPC cycles delivered via service-oriented grids, or maybe cloud computing in some cases. This will become more appealing to new users and for periodic, overflow work.

HPCwire: Which HPC segments will be most affected by the global economic recession?

Joseph: Some automotive and financial services firms are so hard pressed that we see them shrinking CAPEX even in mission-critical areas, including HPC. North American automakers will generally take more drastic steps than their Japanese and European counterparts in these reductions.

In sharp contrast, HPC is deeply embedded in the R&D process of oil and gas companies, and most of these companies are in good shape financially even with lower energy prices. We don’t expect much in the way of budget cuts and we see HPC growth plans being carried out, although some purchases may be delayed.

Government and academia together make up over 65 percent of the HPC server market. They’ll probably follow historical patterns and react less quickly and less deeply to the economic downturn than the private sector does.

HPCwire: How much of the HPC market does government spending make up? What impact does IDC expect from the Obama Administration?

Joseph: The U.S. Government is the world’s largest HPC customer, and the new U.S. Administration has said it plans to boost spending on science and technology, so that’s a hopeful sign. HPC should also be critical for the alternative energy research that is one of President Obama’s top priorities. But in the U.S. and around the world, HPC will compete for funding with other urgent priorities and not all new HPC initiatives will get funded, or funded fully, in 2009. And in the U.S., the change of Administration could delay funding for new initiatives and for the expansion of existing HPC-related science and technology programs. We expect that some weapons-based HPC work will likely be redirected and that some procurements may be delayed to free up funding for higher priority projects.

HPCwire: Will IDC revisit this forecast?

Joseph: We plan to fully update the forecast once a quarter for a while, after we receive and analyze the results from vendor quarterly sales. We are hopeful that the HPC market will recover faster than we are currently projecting, but it will depend heavily on how long the economic slowdown lasts.

HPCwire: What other major developments do you expect in the HPC market in 2009?

Joseph: Overall, we expect 2009 to be another year of evolutionary change in the HPC market. Incremental advances will ease the pain of dealing with the massive increase in core counts, but they won’t cure the big issues of highly parallel programming, power and cooling costs, software licensing costs, ease of use, and so on. There will likely be a number of exciting new petascale installations in 2009.

The HPC storage market will stay stronger than the server market through the recession period. We expect that “ease-of-everything” solutions will continue to grow in the low-end workgroup segment and start spreading to systems at higher price points. The research we did for the Council on Competitiveness showed that HPC use is already a metric for industrial competitiveness in tier 1 firms. In 2009, we think HPC use in these firms’ supply chains will start to become a competitiveness metric.

Standards-based clusters will gain market share in the price-sensitive economy, but more and more HPC sites will experience retrograde performance on some key codes. In a survey we did in the second half of 2008, half of the HPC sites said that within the next 12 months they expected some of their codes to run more slowly on their newest HPC system than on the previous one. That’s a disturbing new trend that’s being driven by escalating core counts and the inability to move data in and out of each core fast enough to keep the cores busy. It’s exacerbated by energy-saving, tuned-down processor speeds that reduce single-threaded performance. Most HPC sites would rather see faster clock rates.
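
The data-movement argument can be made concrete with a simple roofline-style estimate. The sketch below is a hypothetical illustration, not drawn from the article or the IDC survey: it assumes a fixed memory bandwidth shared across a socket and shows how attainable per-core performance on a memory-bound code falls as core counts rise and clock rates are tuned down, even while total socket throughput goes up.

```python
# Hypothetical roofline-style illustration of the trend described above:
# more cores sharing the same memory bandwidth, at lower clock rates,
# can mean slower per-core throughput on bandwidth-bound codes.

def attainable_gflops_per_core(cores, clock_ghz, flops_per_cycle,
                               socket_bw_gbs, arithmetic_intensity):
    """Per-core performance limited by either compute or shared memory bandwidth."""
    compute_roof = clock_ghz * flops_per_cycle                      # GFLOP/s per core
    bandwidth_roof = (socket_bw_gbs / cores) * arithmetic_intensity  # share of socket bandwidth
    return min(compute_roof, bandwidth_roof)

# Assumed, illustrative numbers only (roughly circa-2009 commodity parts):
old_system = attainable_gflops_per_core(cores=2, clock_ghz=3.0, flops_per_cycle=4,
                                        socket_bw_gbs=10.0, arithmetic_intensity=0.5)
new_system = attainable_gflops_per_core(cores=8, clock_ghz=2.3, flops_per_cycle=4,
                                        socket_bw_gbs=12.0, arithmetic_intensity=0.5)

print(f"Old 2-core socket: {old_system:.2f} GFLOP/s per core")
print(f"New 8-core socket: {new_system:.2f} GFLOP/s per core")
# A single-threaded or poorly scaling code sees lower per-core performance
# on the newer part, even though total socket throughput is higher.
```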
