Shortly after my brief diatribe about the future of proprietary vector systems (NEC Does Some Vector Addition), NEC offered to “educate” me face-to-face at SC07 about some of the advantages of their new SX-9 machines that I somehow neglected to mention in my commentary. They also promised to give me an overview of the company’s overall HPC strategy. How could I resist?
One of the points of contention from my original piece was my estimate for a fully tricked out 839 teraflop SX-9, which I had suggested would cost about a billion dollars. That estimate was based on a 39 teraflop system that was purchased by the German Weather Service (DWD) for 72 million dollars. According to NEC, the Germans actually purchased two such systems for that price — one for production and one for backup and research. The price tag also included a 20 percent European VAT (value added tax), a petabyte of storage and some scalar systems to manage the storage network. Based on that information, I’d estimate an 839 teraflop SX-9 would probably cost less than 500 million dollars. Yet even at that price, NEC is unlikely to be selling any fully configured SX-9 machines in the near future.
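For the curious, the back-of-the-envelope math behind that revision can be sketched roughly as follows. The two-system split and 20 percent VAT come from NEC's account; the allowance for the storage and scalar hardware is a hypothetical figure I've plugged in purely for illustration:

```python
# Rough estimate for a fully configured 839-teraflop SX-9, scaled from
# the DWD purchase described above. The 10-million-dollar allowance for
# storage and scalar front-end systems is an assumed figure.

def sx9_estimate_musd(dwd_price=72.0, systems=2, vat=0.20,
                      storage_and_scalar=10.0,
                      dwd_tflops=39.0, target_tflops=839.0):
    per_system = dwd_price / systems            # two systems in the deal
    ex_vat = per_system / (1.0 + vat)           # strip the 20 percent VAT
    compute_only = ex_vat - storage_and_scalar  # remove non-SX hardware
    return compute_only * (target_tflops / dwd_tflops)

print(round(sx9_estimate_musd()))  # roughly 430 -- i.e., under 500 million
```

Even with generous assumptions, the estimate lands comfortably below the half-billion mark.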
Since Cray recently abandoned its vector product line in favor of the scalar/vector/FPGA hybrid approach of its new XT5h, NEC remains the lone vendor producing standalone vector machines. The SX-9 architecture currently boasts the highest memory bandwidth on the market — 4 terabytes per second per 16-CPU node. That’s 256 GB/sec for each 100 gigaflop CPU. With a maximum memory configuration of 1 terabyte per node, users have access to a lot of very fast, flat memory. This is one of the main features of SX-9 systems — and vector architectures in general — that makes them so attractive for certain types of workloads when compared to scalar architectures. For codes that are naturally suited to vector computing, it’s hard to beat these machines for pure sustained performance and user productivity.
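The per-CPU figures follow directly from the node numbers; here's the quick sanity check (treating a terabyte as 1024 GB, which is how the quoted 256 GB/sec works out):

```python
# Per-CPU arithmetic behind the SX-9 bandwidth figures quoted above.

node_bw_gb_s = 4 * 1024   # 4 TB/s of node memory bandwidth, in GB/s
cpus_per_node = 16
cpu_gflops = 100          # peak performance per CPU, in gigaflops

per_cpu_bw = node_bw_gb_s / cpus_per_node  # GB/s available to each CPU
bytes_per_flop = per_cpu_bw / cpu_gflops   # memory bytes per peak flop

print(per_cpu_bw, bytes_per_flop)  # 256.0 GB/s and 2.56 bytes per flop
```

That 2.56 bytes per flop is the crux of the appeal: bandwidth-bound codes can keep the pipelines fed instead of stalling on memory.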
However, as I alluded to in my previous commentary on this subject, that may not always be the case. Vector computing is in the process of becoming commoditized as SIMD units on CPUs and discrete GPUs are evolving toward a general-purpose data parallel capability. Right now, high-end vector computing is where high-end visualization was about 10-15 years ago, when SGI’s proprietary Onyx machines represented the state of the art. As graphics processing became common in personal computing, users realized that visualization was not a specialized requirement, but a general-purpose one, requiring standard commodity solutions. Vector computing is going through that process today.
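To make the commoditization point concrete: the bread-and-butter operation of a vector machine — an elementwise sweep over long arrays — is now routinely expressed on commodity hardware through libraries like NumPy, which hand loops of this shape to the CPU's SIMD units. A minimal illustration:

```python
import numpy as np

# An elementwise vector operation across a million elements -- the kind
# of data-parallel work vector machines were built for. NumPy dispatches
# this to the CPU's SIMD hardware rather than running a scalar loop.

a = np.arange(1_000_000, dtype=np.float64)
b = np.ones(1_000_000, dtype=np.float64)

c = 2.0 * a + b  # scale-and-add over the whole array at once

print(c[:3])  # [1. 3. 5.]
```

Nothing exotic here — which is exactly the point.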
For the time being, though, NEC remains fully committed to its vector supercomputing strategy. The company says it expects to continue the SX roadmap, and even plans to deliver the next-generation SX-10 machine at some point. The scalar systems that NEC sells into the HPC space are its own Itanium-based enterprise servers and Xeon-based clusters, typically delivered alongside the SX machines to provide access to HPC storage. But unlike in Cray’s new XT5h systems, where vector and scalar computing are integrated as peers, NEC still sees its SX nodes as the stars of the show. As long as its European and Asian customers keep buying them, NEC has little reason to believe otherwise.
North America is a different story. Although NEC claims it has nine SX-6 systems on this side of the pond, I’ve yet to hear of one in operation. Today, the company has no illusions about selling its vector platform into Cray’s backyard. A 1996 court case involving the sale of four NEC SX-4 systems to the National Center for Atmospheric Research (NCAR) still casts a shadow on Japanese supercomputer imports. At the time, Cray charged that NEC was “dumping” its machines, using artificially low pricing to gain a foothold in the American market. The U.S. International Trade Commission upheld the charges, and the Department of Commerce imposed stiff punitive tariffs on such imports, effectively killing competition in the domestic vector supercomputing market. Ironically, in 2001 Cray briefly decided to resell NEC SX-5 machines, withdrawing the anti-dumping petition it had filed five years earlier.
Today NEC has a new North American HPC strategy, based on a partnership with Sun Microsystems. The two companies’ recent history together was most evident in their collaboration on the Tokyo Tech TSUBAME supercomputer, where Sun provided most of the server and storage hardware and NEC acted as the system integrator. The two also worked together to build a supercomputer for Brazil’s National Institute of Space Research – Center of Weather Forecast and Climate Studies (CPTEC/INPE).
On Sept. 14, the two companies signed an agreement that “allows NEC to do work on behalf of Sun for Sun clients and Sun to promote and sell NEC professional services.” The agreement takes advantage of NEC’s software expertise in oil & gas, government research, manufacturing, and weather/climate applications, and leverages Sun’s reach into the American HPC market.
Down the road, NEC may be thinking of selling its SX machines alongside Sun clusters to North American customers. That could offer an interesting alternative to Cray’s integrated vector-scalar hybrid approach. Such an arrangement doesn’t seem too far-fetched when you consider Sun’s recent enthusiasm for the supercomputing market. Maybe vector systems have a few good years left in them after all.
As always, comments about HPCwire are welcomed and encouraged. Write to me, Michael Feldman, at [email protected].