In a press event Friday afternoon local time in Japan, Tokyo Institute of Technology (Tokyo Tech) announced its plans for the TSUBAME3.0 supercomputer, which will be Japan’s “fastest AI supercomputer” when it comes online this summer (2017). Projections are that it will deliver 12.2 double-precision petaflops and 47.2 half-precision petaflops (peak figures).
Nvidia was the first vendor to publicly share the news in the US. We know that Nvidia will be supplying Pascal-generation Tesla P100 GPUs, but the big surprise here is the system vendor. The Nvidia blog did not specifically mention HPE or SGI, but it did include this photo with a caption identifying the system as TSUBAME3.0:
That is most certainly an HPE rebrand of the SGI ICE XA supercomputer, which would make this the first SGI system win since the supercomputer maker was brought into the HPE fold. For fun, here’s a photo of the University of Tokyo’s “supercomputer system B,” an SGI ICE XA/UV hybrid system:
TSUBAME3.0 is on track to deliver more than two times the performance of its predecessor, TSUBAME2.5, which ranks 40th on the latest Top500 list (Nov. 2016) with a LINPACK score of 2.8 petaflops (peak: 5.6 petaflops). When TSUBAME was upgraded from 2.0 to 2.5 in the fall of 2013, the HP ProLiant SL390s hardware stayed the same, but the GPU was switched from the Nvidia (Fermi) Tesla M2050 to the (Kepler) Tesla K20X.
Increasingly, we’re seeing Nvidia refer to half-precision floating point capability as “AI computation.” Half-precision is suitable for many AI training workloads (but by no means all) and it’s usually sufficient for inferencing tasks.
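To see why half precision suits training but not every workload, consider what FP16 gives up. A minimal sketch using NumPy’s `float16` type (illustrative values, not TSUBAME specs): each FP16 value takes half the memory of FP32, which roughly doubles effective bandwidth and arithmetic throughput on hardware like the P100 that supports it natively, but small updates can round away entirely.

```python
# Sketch: the FP16 ("half precision") trade-off -- speed and memory vs. accuracy.
# Values are illustrative only; nothing here is a TSUBAME measurement.
import numpy as np

# FP32 can represent a small additive update; FP16 rounds it away,
# because float16 carries only ~3 significant decimal digits.
x32 = np.float32(1.0) + np.float32(1e-4)   # update survives
x16 = np.float16(1.0) + np.float16(1e-4)   # update is lost: result is exactly 1.0

print(x32)  # slightly greater than 1.0
print(x16)  # 1.0

# The payoff: half the bytes per value, so twice the values per unit of
# memory bandwidth -- the source of the "2x flops" half-precision figures.
print(np.dtype(np.float16).itemsize, np.dtype(np.float32).itemsize)  # 2 4
```

This lost-update behavior is why some training runs keep a master copy of weights in FP32 even when the bulk of the arithmetic runs in FP16, and why inference, which only reads trained weights, tolerates FP16 more readily.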
With this rubric in mind, Nvidia says TSUBAME3.0 is expected to deliver more than 47 petaflops of “AI horsepower.” When operated in tandem with TSUBAME2.5, the combined peak rises to 64.3 petaflops, which would give it the distinction of being Japan’s highest-performing AI supercomputer.
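The 64.3-petaflop figure can be sanity-checked with back-of-the-envelope arithmetic. A minimal sketch, assuming Nvidia is adding TSUBAME3.0’s roughly 47.2 petaflops of half-precision peak to TSUBAME2.5’s roughly 17.1 petaflops of single-precision peak (Kepler-era K20X GPUs have no accelerated FP16 mode, so single precision is that system’s best “AI” rate). These per-system inputs are our reading of the numbers, not figures stated in the release:

```python
# Back-of-the-envelope check of the combined "AI horsepower" claim.
# Per-system inputs are assumptions; the release states only the 64.3 total.
tsubame3_fp16 = 47.2    # TSUBAME3.0 half-precision peak, petaflops (assumed)
tsubame25_fp32 = 17.1   # TSUBAME2.5 single-precision peak, petaflops (assumed)

combined = round(tsubame3_fp16 + tsubame25_fp32, 1)
print(combined)  # 64.3
```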
According to a press release issued in Japan, DDN will be supplying the storage infrastructure for TSUBAME3.0. The high-end storage vendor is providing a combination of high-speed in-node NVMe SSDs and its Lustre-based EXAScaler parallel file system, consisting of three racks of DDN’s high-end ES14KX appliance with a capacity of 15.9 petabytes and a peak performance of 150 GB/sec.
TSUBAME3.0 is expected to be up and running this summer. The Nvidia release notes, “It will be used for education and high-technology research at Tokyo Tech, and be accessible to outside researchers in the private sector. It will also serve as an information infrastructure center for leading Japanese universities.”
“NVIDIA’s broad AI ecosystem, including thousands of deep learning and inference applications, will enable Tokyo Tech to begin training TSUBAME3.0 immediately to help us more quickly solve some of the world’s once unsolvable problems,” said Tokyo Tech Professor Satoshi Matsuoka, who has been leading the TSUBAME program since it began.
“Artificial intelligence is rapidly becoming a key application for supercomputing,” said Ian Buck, vice president and general manager of Accelerated Computing at NVIDIA. “NVIDIA’s GPU computing platform merges AI with HPC, accelerating computation so that scientists and researchers can drive life-changing advances in such fields as healthcare, energy and transportation.”
This story is still breaking, but we wanted to share what we know at this point. We’ll add further details as they become available.