November 24, 2006
During its analyst update breakfast meeting at SC06 last week, IDC unveiled a five-year revenue forecast for the HPC industry, projecting compound annual growth of about 9 percent, to $14.3 billion in 2010 from the 2005 total of $9.2 billion. This compares with industry-wide revenue of $5.9 billion in 2000.
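The forecast figures above are internally consistent, as a quick sketch shows (the dollar amounts and years are taken from the article; the helper function name is my own):

```python
# Check the consistency of IDC's forecast: $9.2B in 2005
# growing to a projected $14.3B in 2010.

def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate implied by start and end values."""
    return (end / start) ** (1 / years) - 1

rate = cagr(9.2, 14.3, 5)           # 2005 -> 2010, five years of growth
print(f"Implied CAGR: {rate:.1%}")  # roughly 9 percent, matching the article
```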
IDC's five-year projection predicts slight growth for the capability segment and continued strong growth for all capacity segments, especially the departmental and workgroup markets. The forecast, delivered by IDC's Addison Snell, included revenue breakdowns by global geographies and application domains.
Snell also reported that HPC market revenue for the first half of 2006 was up 10 percent over the first half of 2005. New HPC users are contributing to the growth.
Key trends include:

- Leveraged architectures increasingly dominate the HPC market: more than 95 percent of overall revenue is generated by systems based on R&D done primarily for non-HPC markets.
- Linux is also increasingly dominant, representing 65 percent of revenue by operating system in the first half of 2006. Linux may be moving toward proprietary versions, analogous to what happened earlier with UNIX.
- HP and IBM were the revenue leaders, with Dell in third place.
- University/academic research was the leading vertical segment for revenue, followed by bio sciences and government labs.
Jei Wu discussed IDC's first study on HPC in China. The study looked at industrial end-users. Key findings include:
The Chinese industrial end-users considered foreign HPC offerings superior in quality but higher in cost than domestic products. The number of vendors and options makes purchasing a complex, lengthy process, and it is difficult to balance hardware and application considerations. As in the rest of the world, the main purchasing criteria are price, performance, service and the price/performance ratio.
China is one of the fastest-growing markets for overall IT and for HPC. Vendors should understand the market before entering it, and should have long-term strategies. Additional IDC studies in China are under way.
Jei Wu also discussed grids in technical computing, based on IDC studies of the cluster and grid market, and of entry-level HPC users. IDC defines a grid as a set of independent computers that are combined into a unified system through software and networking. Grids are virtual systems.
The IDC study of entry-level HPC users found that 45 percent of these users employ grids today. Only 5 percent plan to purchase utility computing.
Vendors with grid offerings need to help minimize the cultural changes that are slowing grid adoption today, and to push for more industry-wide standards and better security.
Chris Willard gave an overview of HPC technical clusters. He noted that cluster revenue has doubled in the past three years, providing essentially all of the growth in the HPC market during this period. Clusters can now perform many types of jobs, and users are looking for premium features. Node-level performance is key. Most clusters are departmental systems. Systems in the capability segment tend to be MPPs, and in the workgroup segment they tend to be SMPs.
Willard presented highlights of IDC's January 2006 cluster end-user study, which found that the three primary buying criteria are price/performance, system throughput, and total cost of ownership. The top challenges are facilities issues (e.g., power and cooling) and system management capability. When expanding capacity, about equal numbers of users buy new clusters or add nodes to existing ones. The mean cluster size was 180 nodes, 360 CPUs/cores, and 256 sockets. Nearly half of all sites (47 percent) used in-house codes, while 45 percent used third-party codes and 10 percent used open source codes.
IDC predicts that technical cluster revenue will grow 16.1 percent annually, reaching $9 billion in 2010. According to Willard, blades are gaining momentum but will move slowly into the market. Storage revenue is growing at a 50 percent-plus annual rate.
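The cluster forecast can be sanity-checked the same way. Assuming the 16.1 percent figure is a compound annual rate (as the phrasing suggests; the article does not state the 2005 base), the $9 billion 2010 figure implies a 2005 starting point of roughly $4.3 billion, a plausible share of the $9.2 billion overall 2005 HPC market:

```python
# Back out the 2005 technical-cluster revenue implied by the forecast:
# $9B in 2010 at an assumed 16.1% compound annual growth rate.

forecast_2010 = 9.0   # billions of dollars (from the article)
annual_rate = 0.161   # assumed to be a CAGR, not total five-year growth
years = 5             # 2005 -> 2010

implied_2005 = forecast_2010 / (1 + annual_rate) ** years
print(f"Implied 2005 cluster revenue: ${implied_2005:.1f}B")
```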
Users will pay a premium for superior system designs, integration, ease of use and support. Vendors should not abandon technical excellence; price/performance alone will no longer be enough to win.
Earl Joseph discussed IDC research directions, which include evaluating petascale options, ISV/middleware scaling issues, storage and data management, processor options and multi-core issues, and clusters and grids. IDC will ramp up research on China and India and will begin country-level data tracking. IDC is especially interested in end-user success stories and best practices; new partnerships; entry-level users and what limits their HPC usage; and alternative business models, especially for providing R&D funding for custom systems.
Through meetings, studies and papers, IDC's HPC User Forum will begin exploring why the world needs capability computing and why this segment is shrinking. Joseph said IDC welcomes input for developing a taxonomy for capability computing. Drivers for supercomputing include economic factors, advanced innovation and insight, safety and security, and other areas.
Joseph invited the HPC community to participate in upcoming HPC User Forum meetings in India (February 28-March 2, 2007) and Coeur d'Alene, Idaho (April 9-11, 2007), as well as the fall meetings in Santa Fe, New Mexico and in Germany. IDC will hold the first HPC User Forum meeting in China in 2008.