November 30, 2007
Shortly after my brief diatribe about the future of proprietary vector systems (NEC Does Some Vector Addition), NEC offered to "educate" me face-to-face at SC07 about some of the advantages of their new SX-9 machines that I somehow neglected to mention in my commentary. They also promised to give me an overview of the company's overall HPC strategy. How could I resist?
One of the points of contention from my original piece was my estimate for a fully tricked-out 839 teraflop SX-9, which I had suggested would cost about a billion dollars. That estimate was based on a 39 teraflop system purchased by the German Weather Service (DWD) for 72 million dollars. According to NEC, the Germans actually purchased two such systems for that price -- one for production and one for backup and research. The price tag also included a 20 percent European VAT (value added tax), a petabyte of storage and some scalar systems to manage the storage network. Based on that information, I'd estimate an 839 teraflop SX-9 would probably cost less than 500 million dollars. Yet even at that price, NEC is unlikely to be selling any fully configured SX-9 machines in the near future.
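For the curious, here's a rough back-of-the-envelope sketch of how that revised number shakes out. The line-item split below is my own guesswork -- NEC didn't provide a breakdown -- and it assumes price scales roughly linearly with peak flops:

    # Rough estimate for a fully configured 839 teraflop SX-9, based on
    # the DWD deal described above. The line-item split is hypothetical.

    dwd_contract = 72e6     # dollars, total DWD contract
    systems = 2             # one production, one backup/research system
    vat = 0.20              # 20 percent European VAT

    pre_vat = dwd_contract / (1 + vat)      # ~60 million with VAT removed
    per_system = pre_vat / systems          # ~30 million per 39 TF system

    # Assume (my guess) the petabyte of storage and the scalar servers
    # account for roughly 5 million per system, leaving ~25 million
    # for the SX-9 hardware itself.
    sx9_only = 25e6

    scale = 839 / 39                        # ~21.5x more peak flops
    naive_estimate = sx9_only * scale       # ~540 million, no volume discount
    print(f"Naive scaled price: ${naive_estimate/1e6:.0f}M")

With even a modest volume discount on a deal that size, a sub-500-million price tag looks plausible.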
Since Cray recently abandoned its vector product line in favor of the scalar/vector/FPGA hybrid approach of its new XT5h, NEC remains the lone vendor producing standalone vector machines. The SX-9 architecture currently boasts the highest memory bandwidth on the market -- 4 terabytes per second per 16-CPU node. That's 256 GB/sec for each 100 gigaflop CPU. With a maximum memory configuration of 1 terabyte per node, users have access to a lot of very fast, flat memory. This is one of the main features of SX-9 systems -- and vector architectures in general -- that makes them so attractive for certain types of workloads when compared to scalar architectures. For codes that are naturally suited to vector computing, it's hard to beat these machines for pure sustained performance and user productivity.
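As a quick sanity check on those figures (note they only square up if you read "terabyte" with binary prefixes, i.e., 1 TB = 1024 GB):

    # Per-CPU bandwidth and bytes-per-flop balance for an SX-9 node,
    # computed from the figures quoted above. Binary prefixes assumed,
    # since 4 TB/s over 16 CPUs yields 256 GB/s only if 1 TB = 1024 GB.

    node_bandwidth_gbs = 4 * 1024    # 4 TB/s per node, in GB/s
    cpus_per_node = 16
    cpu_peak_gflops = 100            # 100 gigaflops per CPU

    per_cpu_bandwidth = node_bandwidth_gbs / cpus_per_node   # 256 GB/s
    bytes_per_flop = per_cpu_bandwidth / cpu_peak_gflops     # 2.56 bytes/flop

    print(f"{per_cpu_bandwidth:.0f} GB/s per CPU, {bytes_per_flop:.2f} bytes/flop")

A balance of more than 2.5 bytes of memory traffic per flop is what lets bandwidth-hungry codes sustain a large fraction of peak; commodity scalar processors of this era typically sit well below one byte per flop.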
However, as I alluded to in my previous commentary on this subject, that may not always be the case. Vector computing is in the process of becoming commoditized as SIMD units on CPUs and discrete GPUs evolve toward a general-purpose data-parallel capability. Right now, high-end vector computing is where high-end visualization was about 10-15 years ago, when SGI's proprietary Onyx machines represented the state of the art. As graphics processing became common in personal computing, users came to see visualization not as a specialized requirement but as a general-purpose one, best served by standard commodity solutions. Vector computing is going through that process today.
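To make the commoditization point concrete, here's a minimal sketch of the kind of data-parallel kernel that vector machines were built for, written with NumPy. Under the hood this dispatches to compiled loops that modern compilers map onto commodity SIMD units -- no proprietary vector hardware required. The array size is an arbitrary choice of mine:

    # The classic "triad" kernel (y = a + scalar * b), a staple of memory
    # bandwidth benchmarks like STREAM: one multiply-add per element.
    # On an SX machine this runs on vector pipes; on a commodity CPU,
    # NumPy's compiled inner loop exercises the chip's SIMD units.
    import numpy as np

    n = 1_000_000                 # arbitrary array length
    a = np.random.rand(n)
    b = np.random.rand(n)
    scalar = 3.0

    y = a + scalar * b            # element-wise, data-parallel operation

Whether the hardware underneath is a vector pipeline or a SIMD unit, the programming model is the same: express the computation over whole arrays and let the machine stream through memory.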
For the time being, though, NEC remains fully committed to its vector supercomputing strategy. According to the company, the SX roadmap will continue; NEC even plans to deliver a next-generation SX-10 machine at some point. The scalar systems NEC sells into the HPC space are its own Itanium-based enterprise servers and Xeon-based clusters, typically delivered alongside the SX machines to provide access to HPC storage. But unlike Cray's new XT5h systems, where vector and scalar computing are integrated as peers, NEC still sees its SX nodes as the stars of the show. As long as its European and Asian customers keep buying them, NEC has little reason to believe otherwise.
North America is a different story. Although NEC claims to have nine SX-6 systems on this side of the pond, I've yet to hear of one in operation. Today, the company has no illusions about selling its vector platform into Cray's backyard. A 1996 trade case involving the sale of four NEC SX-4 systems to the National Center for Atmospheric Research (NCAR) still casts a shadow on Japanese supercomputer imports. At the time, Cray charged that NEC was "dumping" its machines, using artificially low pricing to gain a foothold in the American market. The U.S. International Trade Commission upheld the charges and the Department of Commerce imposed stiff punitive tariffs on such imports, effectively killing competition in the domestic vector supercomputing market. Ironically, in 2001 Cray reversed course and began reselling NEC SX-5 machines for a time, withdrawing its anti-dumping petition of five years earlier.
Today NEC has a new North American HPC strategy, based on a partnership with Sun Microsystems. The two companies' recent history together was most evident in their collaboration on the Tokyo Tech TSUBAME supercomputer, where Sun provided most of the server and storage hardware and NEC acted as the system integrator. The two also worked together to build a supercomputer for Brazil's National Institute of Space Research - Center of Weather Forecast and Climate Studies (CPTEC/INPE).
On Sept. 14, the two companies signed an agreement that "allows NEC to do work on behalf of Sun for Sun clients and Sun to promote and sell NEC professional services." The agreement takes advantage of NEC's software expertise in oil & gas, government research, manufacturing, and weather/climate applications, and leverages Sun's reach into the American HPC market.
Down the road, NEC may be thinking of selling its SX machines alongside Sun clusters to North American customers. That could offer an interesting alternative to Cray's integrated vector-scalar hybrid approach. Such an arrangement doesn't seem too far-fetched when you consider Sun's recent enthusiasm for the supercomputing market. Maybe vector systems have a few good years left in them after all.
As always, comments about HPCwire are welcomed and encouraged. Write to me, Michael Feldman, at email@example.com.
Posted by Michael Feldman - November 29, 2007 @ 9:00 PM, Pacific Standard Time
Michael Feldman is the editor of HPCwire.