November 02, 2007
Last Thursday, NEC announced its sixth generation vector supercomputer, the SX-9, which the company is touting as the "world's fastest vector supercomputer." The company says the new machine will be twice as energy-efficient as the SX-8R generation. The SX-9 is based on a new 100-gigaflop vector processor, sixteen of which are placed in each node. In addition to the new vector processor, the SX-9 supports up to one terabyte of shared memory per node and an internode interconnect of up to 128 GB/second. At its maximum configuration of 512 nodes, the SX-9 would deliver a peak vector performance of 839 teraflops.
Before I go any further, I should point out that to the best of my knowledge, no such machine is being built -- or ever will be. According to Thomas Schoenemeyer, HPC Presales Manager at NEC GmbH, nothing near the size of an 839 teraflop system is in the pipeline. NEC has orders for two systems in Europe. One is headed to the German Weather Service (DWD); the other to Meteo France. The German system, which will deliver 39 teraflops and, coincidentally, cost 39 million euros (72 million dollars), is scheduled to be fully operational in 2010. The Meteo France system is also expected to be a sub-100 teraflop machine. This week, the company also announced an order from Japan's Tohoku University for a 26 teraflop system. NEC plans to ship bigger SX-9 systems down the road, but the company doesn't expect to be challenging petaflop supercomputers in the foreseeable future.
"We are not going to be on the top of the TOP500 list with this system," admits Schoenemeyer. "Our focus is the productivity of the customer."
An 839 teraflop SX-9 would probably cost in the neighborhood of a billion dollars. So despite what you might have read elsewhere, the top systems from Cray and IBM are unlikely to be challenged by a maxed-out SX-9 machine anytime soon. The last NEC machine to achieve TOP500 notoriety was the 36 teraflop Earth Simulator, an SX-6 generation system that was ranked the most powerful machine in the world from 2002 to 2004, before IBM's Blue Gene/L overtook it.
Like its forebears, the SX-9 is targeted at weather forecasting facilities, climate research centers, and other government science centers. NEC has sold over 1000 SX systems over the past two decades -- the vast majority in Japan and Europe, although there are some outliers in Australia, South Africa, and Brazil. There are virtually none in North America.
The way NEC is happily churning out vector supercomputers, one might get the impression that weather and climate modeling is a growth industry. While global warming is certainly a big topic these days, such research is unlikely to propel SX-9 production into the double-digit growth rates enjoyed by the overall HPC market.
But unlike in North America, Japan and Europe have a decent-sized installed base of vector machines and the vast majority of them are NEC supers. Although most of the 1000-plus NEC vector machines sold over the last two decades have been retired, a lot of Japanese and European Earth science centers still run on SX systems. NEC is hoping many of these organizations will upgrade to the SX-9 at some point and keep the legacy going.
SX-8 applications are binary compatible with the SX-9, so the software upgrade path should be painless. NEC maintains its own compilers for the vector processors, as well as its Super-UX Unix OS, to enable applications to fully exploit the large flat memory architecture and powerful processors. Both OpenMP and MPI parallelism are supported. It's this kind of end-to-end support that has allowed NEC to maintain, and even grow, its customer base for more than two decades.
In the recent past, Cray has had some success with its X1 and X1E vector machines (Warsaw University, Spain's National Institute of Meteorology, Korea Meteorological Administration). But today the company is penetrating the European market with its Opteron-based XT4 systems. Cray's future strategy for its vector computing offerings will become more apparent next week.
Dedicated vector machines used to be all the rage in supercomputing, starting with the first commercial system in 1974, the CDC STAR-100. Cray soon followed with the Cray-1 in 1976. Later, NEC, Fujitsu and Hitachi each developed their own architectures. But vector supercomputing is a tough sell these days. The market share of these types of machines has been declining for some time, replaced by more general-purpose systems -- both tightly coupled supercomputers and computer clusters -- based on superscalar CPUs.
While HPC applications that rely heavily on matrix arithmetic, like computational fluid dynamics (CFD) codes, are well suited to vector processors, in practice, multicore superscalar chips have proved to be the better overall technology. This is mainly because as HPC applications evolve, they become more complex, employing a greater variety of algorithms to get their job done. This complexity manifests itself in diverse computing requirements: some parts of the code require high levels of single-threaded performance, other parts require many threads, and still others benefit from lots of data parallelism. Systems based on scalar processors tend to be very good at the first two and pretty good at the third. Vector-based machines are really only good at data parallelism (and, in fact, only a subset of that). Even weather modeling applications, the vector machine's raison d'être, require scalar processing for optimal performance.
More commodity-based vector processing solutions already exist and more are on the way. Short-vector SIMD on CPUs, like PowerPC AltiVec and x86 SSE, is a step in the direction of integrated vector capabilities. Mixing vector and scalar engines on the same die, as has been done with the Cell BE processor, is another approach to making vector processing more mainstream. And as I wrote last week, coprocessor accelerators, like GPUs, FPGAs, and SIMD ASICs (ClearSpeed), are providing similar capabilities at a much more attractive price.
In the end, economics will choose how vector computing gets done. But the purveyors of proprietary solutions are on the wrong side of history. General-purpose commodity computing is not just here to stay, it's here to dominate.
As always, comments about HPCwire are welcomed and encouraged. Write to me, Michael Feldman, at firstname.lastname@example.org.
Posted by Michael Feldman - November 01, 2007 @ 9:00 PM, Pacific Daylight Time
Michael Feldman is the editor of HPCwire.