December 10, 2008
PRIMERGY-based system to run on next-generation Intel Xeon processors
TOKYO, Dec. 8 -- Fujitsu Limited today announced that it has received an order from Japan's Institute of Physical and Chemical Research, known as Riken, for a new supercomputer system with a peak theoretical performance of 108 teraflops, approximately 9 times the performance of the existing system.
The supercomputer will be a complex system comprising three compute sub-systems. At its core will be a massively parallel cluster system consisting of 1,024 of the latest Fujitsu PRIMERGY compute nodes running on the next-generation Intel Xeon processors, code-named Nehalem. It is scheduled to begin operations in the middle of fiscal 2009.
Background on New Supercomputer System
As Japan's premier research institution in the natural sciences, Riken is pursuing research in a wide variety of areas including physics, engineering, chemistry, neuroscience, and life sciences. To support the progress of R&D in these fields, Riken has always utilized the optimum cutting-edge computing systems available.
The current system combines a Linux operating system, grid and Web technologies, and PC clusters, which have traditionally been used on a smaller scale within individual work divisions, to create a large-scale computing center regarded both in and outside Japan as a model for next-generation computing centers. To support the enhancement of its research and development activities, Riken has decided to upgrade this system.
The new system will be an advanced version of the existing system and will deliver the computational performance required for research applications in fields such as life sciences and physics, where computing needs continue to grow. In addition, the new system will feature greater user-friendliness and higher operating efficiency.
The supercomputer will be a complex system comprising three separate compute sub-systems, each with its own specific purpose: massively parallel calculations, large-scale memory calculations, and multipurpose calculations. Each compute sub-system will be connected to a common front-end system and to shared disk and tape systems. As the core system, the massively parallel cluster will consist of 1,024 of the latest Fujitsu PRIMERGY compute nodes (2,048 CPUs, 8,192 cores).
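The stated totals are internally consistent and can be cross-checked with simple arithmetic. A minimal sketch, assuming dual-socket, quad-core nodes (consistent with the release's totals) and, purely for illustration, a hypothetical split in which about 100 of the 108 peak teraflops come from this cluster (the release does not break the figure down by sub-system):

```python
# Back-of-the-envelope check of the cluster's scale.
# Assumed layout: 2 sockets per node, 4 cores per socket
# (matches the stated totals of 2,048 CPUs and 8,192 cores).
nodes = 1024
cpus_per_node = 2
cores_per_cpu = 4

total_cpus = nodes * cpus_per_node
total_cores = total_cpus * cores_per_cpu
print(total_cpus, total_cores)  # 2048 8192

# Hypothetical split: if ~100 of the 108 peak teraflops came from
# this cluster, each core would need on the order of:
gflops_per_core = 100_000 / total_cores
print(round(gflops_per_core, 1))  # 12.2
```

At roughly four double-precision floating-point operations per cycle per core, as Nehalem-class processors provide, that per-core figure would correspond to a clock rate on the order of 3 GHz, a plausible value for the processor family.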
The supercomputer's operating environment will integrate Parallelnavi, Fujitsu's HPC middleware, to create a seamless, one-system environment in which users are unaware of which compute sub-system they are accessing. A robust security environment will be designed to facilitate access to the system via the Internet.
1. Next-generation Intel Xeon processor to deliver high-speed performance
The massively parallel Fujitsu PRIMERGY cluster system serving as the core computational system will deliver high-speed performance through the adoption of the next-generation Intel Xeon processor, code-named Nehalem. Built into the processor is the new Intel QuickPath Interconnect technology, which provides high-speed connections between microprocessors and external memory, and between microprocessors and the I/O hub, thereby greatly enhancing overall system performance. Further, the power-saving Fujitsu PRIMERGY compute nodes will reduce the cluster system's energy consumption per teraflop to one-seventh that of the current system.
2. Fujitsu's Parallelnavi HPC middleware to enable high operability
Fujitsu's Parallelnavi is comprehensive HPC middleware providing all the software needed for a supercomputing environment, from the application development tools required to build scientific computing programs, to job execution tools (program processing units) and operation management software.
In addition, to enhance the operability of the complex system, Parallelnavi maintains file data uniformity and consistency under simultaneous high-volume access requests from the separate sub-systems. The file system's theoretical throughput is 12.8 GB/s, eight times that of the current system.
Comment from Riken's Ryutaro Himeno, Group Director, Research and Development Group:
"The existing Riken Super Combined Cluster was an R&D project in itself. I am very grateful for the proactive approach demonstrated by the staff from Fujitsu, which has resulted in high praise of the system."
"For the new system, we took the design concept of the existing one, expanded and upgraded it to handle the predicted increase in demand from life sciences research as well as an increase in data from high-energy physics experiments. I am counting on the high reliability of Fujitsu's hardware and software and the continuing support of Fujitsu's staff. We look forward to contributing to further progress in science and technology and the development of human resources through this information infrastructure center."
Research success using Riken's supercomputer system
Riken has supported a wide range of research activities in the natural sciences using its Fujitsu supercomputer systems. Yoichiro Nambu, co-recipient of the 2008 Nobel Prize in Physics, used the system in April 2008 to continue research on his groundbreaking principle of symmetry breaking.
Fujitsu is a leading provider of IT-based business solutions for the global marketplace. With approximately 160,000 employees supporting customers in 70 countries, Fujitsu combines a worldwide corps of systems and services experts with highly reliable computing and communications products and advanced microelectronics to deliver added value to customers. Headquartered in Tokyo, Fujitsu Limited (TSE:6702) reported consolidated revenues of 5.3 trillion yen (US$53 billion) for the fiscal year ended March 31, 2008. For more information, see http://www.fujitsu.com/.
Source: Fujitsu Limited