June 21, 2011
After a day of flying from California to Hamburg and working off the jet lag, there is nothing more exciting than waking up on the first day of ISC for the 7:30 AM breakfast meeting to go over the annual IDC industry and market share figures for HPC. Seriously, I couldn’t wait!
A few dear friends of mine (130 of them!) and I joined Dr. Earl Joseph and Steve Conway of IDC to hear their take on the state of the HPC market. And since ISC is held annually in Europe, there was a special report, commissioned by the EU, on the state of HPC in Europe. This part was a bit gloomy, but more on that later.
This truly is the way to kick off a major global HPC industry event. Before we emissaries from the world of HPC dive into the inner workings of high performance computing and related technologies, we should most definitely get some perspective on how we’re doing as an industry and review the major trends in HPC from the consummate industry data source – IDC.
Earl and Steve did an outstanding job of presenting real, quantitative figures and market circumstances in impressive detail – far more information than I could possibly cover in this short blog post. Therefore, at the end of this piece you will find a link where you can request a copy of the full PowerPoint deck from their talk. It has it all – overall growth figures, figures for just servers, figures for just supercomputers, market share by industry, market share by geography, factors driving buying decisions, pain points for adoption, the GPGPU trend, petaflops, exascale, and more.
[Chart: HPC Vendor Revenue Share, 2010]
As a slight teaser, here are some of the highlights:
The broader HPC market is nearly $19B.
The Top Trends in HPC
Why HPC Is Projected To Grow
IDC’s Top 10 HPC Predictions for 2011
Finally, as promised, a quick note about Europe. IDC did a special study assessing the primary vision for the EU’s HPC leadership. It recommended that the EU and its member nations make HPC a higher priority and step up to either the “full leadership” level or at least the “funding to reach major goals” level. Europe needs to invest in and support a robust HPC industry with hardware, software, and more. Accomplishing this would require net new investment reaching 600 million euros a year (approximately $860 million) within five years. That an investment of this magnitude is needed says a great deal about how low Europe’s HPC capability is today.
If you find any of this of interest, I strongly suggest you download the PowerPoint deck. To receive the presentation, you'll be asked to fill out a quick survey, after which the download is instant.
This is another way in which we will continue to bring value to the extended community with news and information relevant to the world of high performance computing. Enjoy!
Posted by Tom Tabor - June 21, 2011 @ 12:51 PM, Pacific Daylight Time
Tom is the publisher of HPC in the Cloud. He has over 30 years of experience in business-to-business publishing, with the last 22 years focused primarily on High Performance Computing (HPC) technologies.