June 03, 2010
Returning to ISC after a hiatus of several years, and viewing the event from the vantage point of an industry analyst, I found the show has made a quantum leap in the size and sophistication of the exhibit and in the degree and intensity of business activity. The exhibit hall, while not on the scale of the SC event, included all the major suppliers to the industry, a large array of smaller middle-tier suppliers, plus strong entries from the user community. Clearly, suppliers attended to do business and not simply "show the flag," and they generally reported good results.
The event also struck me as being more European than in past years. There are several possible factors here: increased interest in HPC within western Europe, expansion of eastern European economies and computing efforts, reduced travel budgets in the US, and so on.
One trend that was apparent at the ISC exhibition was the growing role of technology providers in the industry. Companies such as AMD, Intel, Mellanox, LSI, Supermicro and STEC were prominent at the show and, from this observer's perspective, were doing a good trade.
Intel's New Processors
Intel announced its plans for a multicore coprocessor family: a current software development version codenamed Knights Ferry and a future production version codenamed Knights Corner based on 22 nm process technology. This line of multicore processor boards is a direct challenge to NVIDIA's CUDA-based dominance of the HPC accelerator market. The primary keys to success or failure for this effort are, first, Intel's ability to deliver its software development environment for the Knights family, which the company indicates will be compatible with its current tools suite; and second, NVIDIA's ability to capture as much market and mind share as possible for its GPU-based Tesla and Fermi product line before Intel can begin to deliver production-level products. This should provide NVIDIA with about a two-year head start to meet its most significant competitive challenge to date. In the meantime, Intel will need to propagate Knights Ferry programs as widely as possible and work to convince the market that the wait for Knights Corner will be worth it.
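To make the porting-effort question concrete, here is a minimal, illustrative sketch (my own example, not Intel's or NVIDIA's sample code) of a simple scaled vector update written as a CUDA kernel, with the equivalent standard OpenMP loop shown in a comment. The directive-annotated loop is the style of code Intel suggests its existing tools suite will continue to support on the Knights family; the CUDA version is what the Tesla and Fermi line requires today, with explicit kernels and host-to-device data staging.

// Illustrative sketch only: SAXPY as a CUDA kernel versus a standard OpenMP loop.
#include <cstdio>
#include <cuda_runtime.h>

// CUDA version: the loop body becomes a kernel, and one thread handles one element.
__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    const float a = 2.0f;
    float *x = new float[n], *y = new float[n];
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    // The same computation on a conventional multicore processor is a one-line
    // annotation on existing code, which standard compilers already handle:
    //   #pragma omp parallel for
    //   for (int i = 0; i < n; ++i) y[i] = a * x[i] + y[i];

    // The CUDA path requires explicit device allocation and data movement.
    float *dx, *dy;
    cudaMalloc(&dx, n * sizeof(float));
    cudaMalloc(&dy, n * sizeof(float));
    cudaMemcpy(dx, x, n * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(dy, y, n * sizeof(float), cudaMemcpyHostToDevice);

    saxpy<<<(n + 255) / 256, 256>>>(n, a, dx, dy);

    cudaMemcpy(y, dy, n * sizeof(float), cudaMemcpyDeviceToHost);
    printf("y[0] = %f\n", y[0]);  // expect 4.0

    cudaFree(dx); cudaFree(dy);
    delete[] x; delete[] y;
    return 0;
}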
On the TOP500 List
I have a love/hate relationship with the TOP500 list. On the love side, the computer science behind the list is excellent. The folks who maintain the list do a great job of explaining what the data means and how it is relevant (although few people seem to listen). The list provides good data on high-end system technologies and historic trends, and, like a top 25 college sports team list, it is fun to track, especially if your college is on the list.
On the hate side, outside of the HPC technical community the computer science behind the TOP500 is largely ignored, and the list reduces one of the most complex technologies and markets in the world to a few dozen statistics. This is like sending your child to medical school based on which institution has the highest-rated basketball team, or like assuming that one can understand chemistry by examining the bottom rows of the periodic table. (But I digress...)
That said, Cray, with its long history in the TOP500 race, seemed philosophical about being upstaged by the number 2 system from Dawning. It is little surprise that China, with its program to develop its technical infrastructure, appeared well up on the list. Perhaps more important than gaining the number 2 slot was China's placing a second system in the top 10 -- an NUDT system in the 7th slot -- which is indicative of a general effort to provide computing capabilities to a broad group of scientists and engineers within the country. The oddsmakers are favoring China to gain the top slot on the list within the next publishing cycle or two.
It sometimes seems that entry into the TOP500 list is a rite of passage for companies, institutions and countries.
The major winners from this year's TOP500 competition may be the exascale system advocates. In addition to capturing headlines, China's success will certainly raise political concerns, which may open up additional funding for next-generation supercomputing efforts. I am a great believer in government support for IT technology and infrastructure in general, and HPC in particular. If the TOP500 list can be used to lever more resources into HPC programs, it may ultimately be a good thing. However, there is a real risk that such programs will be overly focused on a single requirement while shorting requirements in other areas (e.g., applications software development, education, support for industrial users, particularly at the midrange and low end, and so on). I sometimes get the feeling that organizations are striving to produce the most powerful locomotives in the world without training engineers, building the passenger and freight cars, train stations and assembly yards, or even laying adequate rail lines.
Posted by Chris Willard - June 03, 2010 @ 5:27 PM, Pacific Daylight Time
Christopher Willard, Ph.D., is Chief Research Officer for Intersect360 Research.