December 10, 2008
PRIMERGY-based system to run on next-generation Intel Xeon processors
TOKYO, Dec. 8 -- Fujitsu Limited today announced that it has received an order from Japan's Institute of Physical and Chemical Research, known as Riken, for a new supercomputer system with a peak theoretical performance of 108 teraflops, approximately 9 times the performance of the existing system.
The supercomputer will be a complex system composed of three compute sub-systems. At its core will be a massively parallel cluster system consisting of 1,024 of the latest Fujitsu PRIMERGY compute nodes running on the next-generation Intel Xeon processors, code-named Nehalem. It is scheduled to begin operations in the middle of fiscal 2009.
Background on New Supercomputer System
As Japan's premier research institution in the natural sciences, Riken is pursuing research in a wide variety of areas including physics, engineering, chemistry, neuroscience, and life sciences. To support the progress of R&D in these fields, Riken has always utilized the optimum cutting-edge computing systems available.
The current system employs a Linux operating system, grid and Web technologies, as well as PC clusters, which have been traditionally used on a smaller scale in individual work divisions, to create a large-scale computing center considered both in and outside Japan as a model for next-generation computing centers. Riken has decided to upgrade its supercomputer system to support the enhancement of its research and development activities.
The new system will be an advanced version of the existing system and will deliver the computational performance required for research applications in fields such as life sciences and physics, where computing needs continue to grow. In addition, the new system will feature greater user-friendliness and higher operating efficiency.
The supercomputer will be a complex system composed of three separate compute sub-systems, each with its own specific purpose: massively parallel calculations, large-scale memory calculations, and multipurpose calculations. In addition, each compute sub-system will be connected to a common front-end system, and to disk and tape systems. As the core system, the massively parallel Fujitsu PRIMERGY cluster system will consist of 1,024 of the latest Fujitsu PRIMERGY compute nodes (2,048 CPUs, 8,192 cores).
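The node, CPU, and core counts above are internally consistent with the stated 108-teraflop peak. As a back-of-the-envelope check (the per-core figure and the size of the existing system are derived here from the announced totals and the "approximately 9 times" claim, not taken from a Fujitsu spec sheet):

```python
# Sanity check of the figures stated in the announcement.
PEAK_TFLOPS = 108   # stated peak theoretical performance
NODES = 1024        # PRIMERGY compute nodes
CPUS = 2048         # implies 2 sockets per node
CORES = 8192        # implies 4 cores per CPU (quad-core Nehalem)

sockets_per_node = CPUS // NODES                  # 2
cores_per_cpu = CORES // CPUS                     # 4
gflops_per_core = PEAK_TFLOPS * 1000 / CORES      # ~13.2 GF per core

# "approximately 9 times the performance of the existing system"
existing_tflops = PEAK_TFLOPS / 9                 # ~12 TF

print(sockets_per_node, cores_per_cpu,
      round(gflops_per_core, 1), round(existing_tflops, 1))
```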
The supercomputer's operating environment will integrate Parallelnavi, Fujitsu's HPC middleware, to create a seamless, one-system environment in which users are unaware of which compute sub-system they are accessing. A robust security environment will be designed to facilitate access to the system via the Internet.
1. Next-generation Intel Xeon processor to deliver high-speed performance
The massively parallel Fujitsu PRIMERGY cluster system serving as the core computational system will deliver high-speed performance through the adoption of the next-generation Intel Xeon processor, code-named Nehalem. Built into the processor is the new Intel QuickPath Interconnect technology that provides high-speed connections between microprocessors and external memory, and between microprocessors and the I/O hub, thereby greatly enhancing overall system performance. Further, the power-saving Fujitsu PRIMERGY compute nodes will reduce the energy consumption of the cluster system to one-seventh the current system in terms of consumption per teraflop.
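To put the one-seventh figure in context, a quick illustrative calculation. Note that the baseline power draw below is a placeholder assumption for the arithmetic only; the announcement gives the ratio, not absolute wattage:

```python
# Illustrative only: the 7 kW/TF baseline is a hypothetical value,
# chosen so the ratio stated in the announcement is easy to follow.
current_kw_per_tf = 7.0                   # assumed baseline, kW per teraflop
new_kw_per_tf = current_kw_per_tf / 7     # one-seventh, per the announcement

new_system_tflops = 108
total_kw = new_kw_per_tf * new_system_tflops   # power under the assumed baseline
print(new_kw_per_tf, total_kw)
```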
2. Fujitsu's Parallelnavi HPC middleware to enable high operability
Fujitsu's Parallelnavi is comprehensive HPC middleware that provides all the software needed for a supercomputing environment, from the application development tools required to build scientific computing programs, to job execution tools (program processing units) and operation management software.
To enhance the operability of the complex system, Parallelnavi also maintains file data uniformity and consistency under simultaneous high-volume access requests from the separate sub-systems. Theoretical file I/O performance is 12.8 GB/s, eight times faster than the current system.
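The stated 8x speedup implies a throughput of about 1.6 GB/s for the current system, a figure derived here from the announcement rather than stated in it:

```python
# Derive the current system's implied I/O throughput from the stated figures.
new_gbps = 12.8          # stated theoretical performance, GB/s
speedup = 8              # "eight times faster than the current system"
current_gbps = new_gbps / speedup
print(current_gbps)
```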
Comment from Riken's Ryutaro Himeno, Group Director, Research and Development Group:
"The existing Riken Super Combined Cluster was an R&D project in itself. I am very grateful for the proactive approach demonstrated by the staff from Fujitsu, which has resulted in high praise of the system."
"For the new system, we took the design concept of the existing one, expanded and upgraded it to handle the predicted increase in demand from life sciences research as well as an increase in data from high-energy physics experiments. I am counting on the high reliability of Fujitsu's hardware and software and the continuing support of Fujitsu's staff. We look forward to contributing to further progress in science and technology and the development of human resources through this information infrastructure center."
Research success using Riken's supercomputer system
Riken has supported a wide range of research activities in the natural sciences using its Fujitsu supercomputer systems. Yoichiro Nambu, a co-recipient of the 2008 Nobel Prize in Physics, used the system in April 2008 to continue research on his groundbreaking principle of symmetry breaking.
Fujitsu is a leading provider of IT-based business solutions for the global marketplace. With approximately 160,000 employees supporting customers in 70 countries, Fujitsu combines a worldwide corps of systems and services experts with highly reliable computing and communications products and advanced microelectronics to deliver added value to customers. Headquartered in Tokyo, Fujitsu Limited (TSE:6702) reported consolidated revenues of 5.3 trillion yen (US$53 billion) for the fiscal year ended March 31, 2008. For more information, see http://www.fujitsu.com/.
Source: Fujitsu Limited