May 26, 2010
Japan's first world-class 2.4-petaflops system, "Petakon," will begin operation in the fall
May 25 -- The Tokyo Institute of Technology (Tokyo Tech) announced that TSUBAME 2.0, a green, cloud-based supercomputer system, will begin operation in the fall. NEC Corporation (NEC) and Hewlett-Packard (HP) have been selected to design and build the system.
TSUBAME 1.0, TSUBAME 2.0's predecessor, has supported a variety of industrial and academic research projects in Japan and abroad for over four years. In preparation for its successor, the Tokyo Institute of Technology's Global Scientific Information and Computing Center (GSIC) spent nearly two years studying domestic and foreign computer systems, drawing on its experience operating the TSUBAME 1.0 HPC system.
The procurement process concluded on May 25, when the NEC-HP partnership's winning bid was announced. The system's theoretical peak performance of 2.4 petaflops places it among the world's fastest at the time of the announcement, a 30-fold improvement over TSUBAME 1.0. The new supercomputer will also be roughly 12 times faster than Japan's current fastest system, operated by the Japan Atomic Energy Agency.
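As a quick sanity check on the figures above, the quoted 30-fold speedup implies a theoretical peak for TSUBAME 1.0 of roughly 80 teraflops. A minimal sketch (variable names are illustrative, not from the release):

```python
# Back-of-the-envelope check of the performance figures quoted above.
tsubame2_peak_pflops = 2.4      # theoretical peak of TSUBAME 2.0
speedup_vs_tsubame1 = 30        # quoted improvement factor

# Implied peak of the predecessor, in teraflops (1 PF = 1000 TF).
tsubame1_peak_tflops = tsubame2_peak_pflops * 1000 / speedup_vs_tsubame1
print(f"Implied TSUBAME 1.0 peak: {tsubame1_peak_tflops:.0f} TFLOPS")  # 80
```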
TSUBAME 2.0 will use GPGPU computing and large solid-state drive (SSD) storage. The system is expected to achieve a top ranking on the TOP500 list and, more importantly, a high placement on the DARPA HPC Challenge benchmark and the Green500 list. It is also expected to contend for the ACM Gordon Bell Prize, which recognizes groups achieving outstanding performance in scientific applications.
The Tokyo Institute of Technology plans to use the system to provide advanced research training to its students, as well as limited computing power to a small number of external users. TSUBAME 1.0 provided world-class supercomputing power to many users in the industrial and academic communities, advancing science and technology goals.
The TSUBAME 2.0 supercomputer is equipped with cutting-edge technologies: the latest Intel Westmere-EP and Nehalem-EX scalar processors, alongside approximately 4,200 NVIDIA Fermi GPUs serving as vector-style accelerators. This mixed scalar-vector architecture is expected to deliver world-class computing performance.
The system comprises more than 1,400 compute nodes connected by Voltaire's QDR InfiniBand network. Combining the latest SSD technology with high-density DataDirect Networks storage, it targets the world's fastest total data I/O performance at 0.66 terabytes per second.
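To put the aggregate figure in perspective, dividing the quoted 0.66 TB/s evenly across the quoted node count gives a rough per-node share. This is purely illustrative (real I/O traffic is never spread perfectly evenly):

```python
# Rough per-node share of the aggregate I/O bandwidth (illustrative only;
# actual I/O load is not evenly distributed across compute nodes).
total_io_tb_per_s = 0.66   # quoted aggregate I/O performance
num_nodes = 1400           # "more than 1,400 compute nodes"

per_node_gb_per_s = total_io_tb_per_s * 1000 / num_nodes
print(f"~{per_node_gb_per_s:.2f} GB/s per node")  # ~0.47 GB/s
```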
The operating system will be a mix of Linux and Microsoft Windows HPC Server, and the system will also use virtual machine technology to take advantage of the flexibility of cloud hosting services.
NEC and HP collaborated with the university on the design of the system. Thanks to high-density packaging technology, only about 200 square meters of floor space will be necessary. The Strategic Creative Research Promotion Project (CREST), which participates in the latest ultra-low-power HPC research, contributed power-saving technology, fine-grained temperature monitoring and control, and advanced cooling, giving the system a projected power usage effectiveness (PUE) of 1.277 for green supercomputing.
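PUE (power usage effectiveness) is the ratio of total facility power to the power delivered to IT equipment, so the quoted PUE of 1.277 means roughly 27.7% overhead for cooling and power distribution. A minimal sketch of the metric (the kilowatt figures below are hypothetical, chosen only to match the quoted ratio):

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power."""
    return total_facility_kw / it_equipment_kw

# Hypothetical figures consistent with the quoted PUE of 1.277:
print(pue(total_facility_kw=1277.0, it_equipment_kw=1000.0))  # 1.277
```

A PUE of 1.0 would mean every watt drawn by the facility goes to computation; typical data centers of that era ran well above 1.5, which is why 1.277 was cited as a green-supercomputing figure.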
On June 16 at 11:00 a.m., the Tokyo Institute of Technology's Global Scientific Information Center will hold a press conference in conference room 2F to announce further technical details about the TSUBAME 2.0 supercomputer.
This announcement is based on a translation of a press release provided by GSIC; the original was published in Japanese.
Source: the Global Scientific Information and Computing Center (GSIC) at the Tokyo Institute of Technology