November 24, 2006
During its analyst update breakfast meeting at SC06 last week, IDC unveiled a five-year revenue forecast for the HPC industry, projecting compound annual growth of about 9 percent, from $9.2 billion in 2005 to $14.3 billion in 2010. This compares with industry-wide revenue of $5.9 billion in 2000.
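As a rough arithmetic check on these figures (a sketch assuming standard compound annual growth between year-end totals; IDC's exact methodology may differ):

```python
# Rough check of the growth figures quoted above, assuming standard
# compound annual growth between year-end revenue totals (in $ billions).

def cagr(start, end, years):
    """Compound annual growth rate implied by two revenue totals."""
    return (end / start) ** (1 / years) - 1

print(f"2005-2010 forecast: {cagr(9.2, 14.3, 5):.1%}")  # ~9.2% per year
print(f"2000-2005 history:  {cagr(5.9, 9.2, 5):.1%}")   # ~9.3% per year
```

Both periods work out to roughly the 9 percent annual growth IDC cites.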
IDC's five-year projection predicts slight growth for the capability segment and continued strong growth for all capacity segments, especially the departmental and workgroup markets. The forecast, delivered by IDC's Addison Snell, included revenue breakdowns by global geographies and application domains.
Snell also reported that HPC market revenue for the first half of 2006 was up 10 percent over the first half of 2005. New HPC users are contributing to the growth.
Key trends include:

- Leveraged architectures increasingly dominate the HPC market, with more than 95 percent of overall revenue generated by systems based on R&D done primarily for non-HPC markets.
- Linux is also increasingly dominant, accounting for 65 percent of revenue by operating system in the first half of 2006. Linux may be moving toward proprietary versions, analogous to what happened earlier with UNIX.
- HP and IBM were the revenue leaders, with Dell in third place.
- University/academic research was the leading vertical segment for revenue, followed by the biosciences and government labs.
Jei Wu discussed IDC's first study on HPC in China, which looked at industrial end-users. Key findings include:

- Chinese industrial end-users considered foreign HPC offerings superior in quality but higher in cost than domestic products.
- The number of vendors and options makes purchasing a complex, lengthy process, and it is difficult to balance hardware and application considerations.
- As in the rest of the world, the main purchasing criteria are price, performance, service and the price/performance ratio.
China is one of the fastest-growing markets for overall IT and for HPC. Vendors should understand the market before entering it, and should have long-term strategies. Additional IDC studies in China are under way.
Jei Wu also discussed grids in technical computing, based on IDC studies of the cluster and grid market and of entry-level HPC users. IDC defines a grid as a set of independent computers combined into a unified system through software and networking; grids are virtual systems.
The IDC study of entry-level HPC users found that 45 percent of these users employ grids today. Only 5 percent plan to purchase utility computing.
Vendors with grid offerings need to help minimize the cultural changes that are slowing grid adoption today, and should push for more industry-wide standards and better security.
Chris Willard gave an overview of HPC technical clusters. He noted that cluster revenue has doubled in the past three years, providing essentially all of the growth in the HPC market during this period. Clusters can now perform many types of jobs, and users are looking for premium features. Node-level performance is key. Most clusters are departmental systems; systems in the capability segment tend to be MPPs, while those in the workgroup segment tend to be SMPs.
Willard presented highlights of IDC's January 2006 cluster end-user study, which found that the three primary buying criteria are price/performance, system throughput, and total cost of ownership. The top challenges are facilities issues (e.g., power and cooling) and system management capability. When expanding their systems, about equal numbers of users buy new clusters or add more nodes. The mean number of nodes was 180, the mean number of CPUs/cores 360, and the mean number of sockets 256. Nearly half of all sites (47 percent) used in-house codes, while 45 percent used third-party codes and 10 percent used open source codes.
IDC predicts that technical cluster revenue will grow at 16.1 percent annually, reaching $9 billion in 2010. According to Willard, blades are gaining momentum but will move into the market slowly. Storage revenue is growing at a 50 percent-plus annual rate.
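If the 16.1 percent figure is a compound annual rate (a reading the phrasing suggests but the article does not state outright, and the 2005 cluster base is not given), working backward gives a rough sense of the starting point:

```python
# Working back from the $9B 2010 cluster forecast at an assumed 16.1%
# compound annual rate; the 2005 base is not stated in the article.
implied_2005_base = 9.0 / (1.161 ** 5)
print(f"Implied 2005 cluster revenue: ${implied_2005_base:.1f}B")  # ~$4.3B
```

That implied base of roughly $4.3 billion would put clusters at close to half of the $9.2 billion 2005 HPC market, consistent with Willard's note that clusters drove essentially all recent growth.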
Users will pay a premium for superior system designs, integration, ease of use and support. Vendors should not abandon technical excellence; price/performance alone will no longer be enough to win.
Earl Joseph discussed IDC's research directions, which include evaluating petascale options, ISV/middleware scaling issues, storage and data management, processor options and multi-core issues, and clusters and grids. IDC will ramp up research on China and India, and will begin country-level data tracking. IDC is especially interested in end-user success stories and best practices; new partnerships; entry-level users and what limits their HPC usage; and alternative business models, especially for providing R&D funding for custom systems.
Through meetings, studies and papers, IDC's HPC User Forum will begin exploring why the world needs capability computing and why this segment is shrinking. Joseph said IDC welcomes input for developing a taxonomy for capability computing. Drivers for supercomputing include economic factors, advanced innovation and insight, safety and security, and other areas.
Joseph invited the HPC community to participate in upcoming HPC User Forum meetings in India (February 28-March 2, 2007) and Coeur d'Alene, Idaho (April 9-11, 2007), as well as the fall meetings in Santa Fe, New Mexico, and in Germany. IDC will hold the first HPC User Forum meeting in China in 2008.