October 06, 2006
An undeclared race towards petaflop computing is in progress between the United States and Japan -- a race which is being closely watched by the global HPC community. Right now the scales lean towards the U.S., which leads with its latest IBM Blue Gene/L computer, a 280 teraflops (sustained) system. The IBM machine took the number one spot from Japan's Earth Simulator in 2004, which had dominated the supercomputing charts since 2002.
Experts expect the first petaflop system within the next couple of years, and the betting is that it will be a follow-on to the IBM design. However, Japan is not to be discounted. As the first and only country to have designated supercomputers a "Key Technology of National Importance," Japan is aiming to become the world leader in simulation capabilities in areas spanning nano-science, life science, climate/geo-science, physical science and engineering. Unburdened by the responsibility for nuclear stockpile stewardship, it can focus its research and funding on delivering a petaflop platform for real-world applications.
These efforts are coordinated by the RIKEN institute, which, together with leading companies and universities, has set up an organization targeting the development of a 10 petaflop system within the next six years. RIKEN officially declared these intentions in a press release issued on September 19th and announced them at a seminar held on September 19th and 20th. Back in April 2006, a research collaboration was launched in Japan to define the best possible architecture for such a system, based on a benchmark suite of 21 real-world applications. Using these benchmarks, two candidate architectures have now been selected for further design evaluation: one put forward by Fujitsu Ltd. and one by a team formed by NEC Corporation and Hitachi, Ltd. The results of this final evaluation will be available at the end of the fiscal year and will become the basis of the implementation.
Taking advantage of a visit to Bonn, Germany, where he gave a keynote lecture at a scientific conference, Dr. Mitsuyasu Hanamura, who heads the applications software group within the RIKEN Next-Generation Supercomputer R&D Center, took part in a press briefing organized by the NEC Europe Computing & Communication Research lab in St. Augustin, Germany. There he gave a technical summary of the project.
The Next-Generation Supercomputer Project, as it is called within Japan, is tasked with supporting six distinct goals.
To reach these goals, the new machine will be made accessible to researchers and industry through the cyber science infrastructure framework of the National Research Grid Initiative (NAREGI), a project initiated by the National Institute of Informatics (NII).
According to Dr. Hanamura, prohibitive power consumption means this new class of supercomputers will require technology breakthroughs. Based on reasonable projections through 2010 for per-CPU compute power, efficiency factors and power consumption, as well as the need to support existing codes, he gave the following estimates for a hypothetical one petaflop (sustained) system:
CPU Type          Peak Perf.   Efficiency   Est. Power   SW Support
---------------   ----------   ----------   ----------   ----------
Vector            63 GF/CPU    0.3          47 MW        good
Scalar            30 GF/CPU    0.1          40 MW        good
Special-purpose   n.a.         0.5          ~0.5 MW      poor
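As a quick back-of-the-envelope check (the arithmetic below is ours, not from the briefing), the number of CPUs needed for one petaflop sustained follows directly from the per-CPU peak and the efficiency factor:

    # Back-of-the-envelope check of the projections above. The peak and
    # efficiency figures come from the table; the script itself is only
    # illustrative and not part of the RIKEN study.

    SUSTAINED_TARGET = 1e15  # 1 petaflop sustained, in flop/s

    projections = {
        # CPU type: (peak flop/s per CPU, efficiency, est. total power in MW)
        "Vector": (63e9, 0.3, 47.0),
        "Scalar": (30e9, 0.1, 40.0),
    }

    for cpu_type, (peak, efficiency, power_mw) in projections.items():
        cpus_needed = SUSTAINED_TARGET / (peak * efficiency)
        kw_per_tf = power_mw * 1000 / (SUSTAINED_TARGET / 1e12)
        print(f"{cpu_type}: ~{cpus_needed:,.0f} CPUs, "
              f"~{kw_per_tf:.0f} kW per sustained teraflop")

Roughly 53,000 vector CPUs or 333,000 scalar CPUs would be required, which makes clear why power consumption, rather than peak speed, dominates the design space.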
This data clearly points towards a mixed hardware environment in order to reach both high performance and support for existing application code. As an example of special-purpose hardware, he pointed to RIKEN's MD-GRAPE3 machine, a special-purpose computer geared for molecular dynamics and multi-body calculations; in May 2006, a system based on this chip already achieved a performance level of over one petaflop. Dr. Hanamura therefore foresees an architecture that combines scalar nodes, vector computers and special-purpose computers into a single system. Since multi-scale simulations often need to consider both particle-based and domain-based effects, which lend themselves naturally to different computing models, such an architecture should be well suited to them.
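A minimal sketch of what this division of labor could look like, assuming a hypothetical scheduler that routes each kernel to the node class matching its computational pattern (the kernel names and node classes below are invented for illustration):

    # Hypothetical illustration of partitioning a multi-scale simulation
    # across the three node classes of the proposed hybrid architecture.
    # Nothing here reflects RIKEN's actual software design.

    NODE_CLASS_FOR_PATTERN = {
        "particle": "special-purpose",  # many-body forces, MD-GRAPE3-style
        "domain": "vector",             # regular grid/field sweeps
        "irregular": "scalar",          # control-heavy, sparse code paths
    }

    def assign_node(pattern):
        """Pick the node class best suited to a kernel's compute pattern."""
        return NODE_CLASS_FOR_PATTERN[pattern]

    kernels = [
        ("short-range particle forces", "particle"),
        ("fluid field update", "domain"),
        ("mesh bookkeeping", "irregular"),
    ]

    for name, pattern in kernels:
        print(f"{name} -> {assign_node(pattern)} nodes")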
A tentative schedule for the project was also outlined.