April 04, 2012
TOKYO, April 4 -- Fujitsu today announced that the Research Institute for Information Technology at Kyushu University has placed orders for a new system, consisting of a Supercomputer System and a High-performance Computational Server System.
The Supercomputer System will use a configuration of Fujitsu's PRIMEHPC FX10 nodes, and the High-performance Computational Server System will employ a cluster configuration of PRIMERGY CX400 x86 servers. Combined, both systems will achieve a total theoretical peak performance of 691.7 teraflops, making the new supercomputer system the largest-scale system in the Kyushu region of Japan.
The new supercomputer system will begin operations in July 2012. It will be used to support the Research Institute for Information Technology's advanced research and educational activities in a variety of fields of science and technology. It is also expected to be used by corporations.
Background to the Deployment of the New System
Kyushu University, the largest national university in Japan's Kyushu region, focuses its activities on education and research. The Research Institute for Information Technology is a shared facility available to university faculty, graduate students, and other researchers from across Japan for their academic research. Since 2007, the Research Institute for Information Technology has operated a supercomputer system employing Fujitsu's PRIMEQUEST and PRIMERGY servers. In light of recent industry trends, however, in which world-class massively parallel computers have been deployed in Japan, the university has been planning to upgrade its systems and application development environments to enable calculations of even greater scale.
The Research Institute for Information Technology chose Fujitsu's supercomputer system for its superior computing performance, energy efficiency, application execution performance, and availability. It can also be employed to develop and optimize applications for use with the K computer.
Overview of the New System
The calculation nodes of the new system will use a configuration of 768 PRIMEHPC FX10 nodes and 1,476 PRIMERGY CX400 nodes, thereby achieving a total theoretical peak performance of 691.7 teraflops. As a result, the new supercomputer system is anticipated to be the largest-scale system in Kyushu and one of only a handful of systems of this scale in the country.
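As a quick sanity check on the announced figures, the combined peak is simply the sum of the two subsystems' peaks, and dividing each subsystem's peak by its node count yields the implied per-node performance. The short Python sketch below reproduces this arithmetic; the per-node figures are inferences from the announced totals, not numbers quoted in the release.

    # Sanity check of the announced peak-performance figures.
    fx10_peak_tf = 181.6    # PRIMEHPC FX10 subsystem, teraflops
    cx400_peak_tf = 510.1   # PRIMERGY CX400 subsystem, teraflops
    fx10_nodes, cx400_nodes = 768, 1476

    print(f"Combined peak: {fx10_peak_tf + cx400_peak_tf:.1f} teraflops")        # 691.7, as announced
    # Implied per-node peaks (inferred from the totals above):
    print(f"FX10:  {fx10_peak_tf / fx10_nodes * 1000:.1f} gigaflops per node")   # ~236.5
    print(f"CX400: {cx400_peak_tf / cx400_nodes * 1000:.1f} gigaflops per node") # ~345.6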
Combining high performance, scalability, and reliability with superior energy efficiency, PRIMEHPC FX10 further enhances the technology Fujitsu developed for the K computer, which achieved the world's top-ranked performance. PRIMERGY CX400 is a high-density server that can support 84 nodes per rack, roughly twice the number of conventional 1U rack servers (a standard 42U rack holds 42 of them), making it an ideal x86 server for high-performance computing.
Main Configuration of the New Supercomputer System
PRIMEHPC FX10 Compute Nodes
Number of racks: 8
Nodes (CPUs): 768 (768)
Theoretical peak performance: 181.6 teraflops
Log-in nodes / management servers: 22 PRIMERGY RX300 S7 servers
Management servers' shared disks: 2 ETERNUS DX80 S2 storage units
Local file system: 25 ETERNUS DX80 S2 storage units
Shared file system: 17 ETERNUS DX80 S2 storage units
High-performance Computational Server System
PRIMERGY CX400 Compute Nodes
Nodes (CPUs): 1,476 (2,952)
Theoretical peak performance: 510.1 teraflops
Memory capacity: 184.5 TB
Log-in nodes / management servers: 44 PRIMERGY RX300 S7 servers
Management servers' shared disks: 2 ETERNUS DX80 S2 storage units
Shared file system: 74 ETERNUS DX80 S2 storage units
File system: FEFS
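The per-node peaks implied by the tables can be reproduced with the standard formula: peak = sockets x cores per socket x clock x floating-point operations per cycle. The sketch below assumes a single 16-core 1.848 GHz SPARC64 IXfx per FX10 node (the processor PRIMEHPC FX10 shipped with) and, as an assumption not stated in the release, dual 8-core 2.7 GHz x86 CPUs with 8 double-precision flops per cycle per CX400 node; both assumptions are consistent with the announced totals.

    # Theoretical peak per node: sockets * cores * GHz * flops-per-cycle.
    def peak_gflops(sockets, cores, ghz, flops_per_cycle):
        return sockets * cores * ghz * flops_per_cycle

    # PRIMEHPC FX10: one SPARC64 IXfx per node (16 cores, 1.848 GHz, 8 flops/cycle).
    print(peak_gflops(1, 16, 1.848, 8))  # ~236.5 GF; x 768 nodes ~= 181.6 TF
    # PRIMERGY CX400: assumed dual 8-core 2.7 GHz x86 CPUs with AVX (8 flops/cycle).
    print(peak_gflops(2, 8, 2.7, 8))     # 345.6 GF; x 1,476 nodes ~= 510.1 TF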
For its HPC middleware, the system will deploy Fujitsu's Technical Computing Suite for petascale systems, together with 66 PRIMERGY series servers as login nodes. ETERNUS storage systems, with a combined capacity of 4.6 petabytes, will be deployed for storage. The system's file system will be constructed using FEFS, a high-capacity, high-performance, and highly reliable distributed file system.
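The storage and server counts follow from the configuration tables above: 22 + 44 login-node servers and 2 + 25 + 17 + 2 + 74 ETERNUS units. The small Python sketch below totals them; the per-unit capacity is an average inferred from the announced 4.6 petabytes, since per-unit configurations are not given in the release.

    # Totals implied by the configuration tables.
    login_servers = 22 + 44                # PRIMERGY RX300 S7 servers -> 66
    eternus_units = 2 + 25 + 17 + 2 + 74   # ETERNUS DX80 S2 units -> 120
    total_capacity_tb = 4.6e3              # 4.6 petabytes, as announced
    print(total_capacity_tb / eternus_units)  # ~38 TB average per unit (inference)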
Comment from Mutsumi Aoyagi, Director, Research Institute for Information Technology
Many of our center's users are among Japan's top researchers, and many of them also use the K computer. As an organization providing resources for the High-Performance Computing Infrastructure (HPCI) initiative, which began full-fledged operations this fiscal year, we hope to contribute to the further development of Japan's computational science capabilities by deploying PRIMEHPC FX10, which is highly compatible with the K computer, and the highly energy-efficient, high-density PRIMERGY CX400.
Later this year, we plan to equip the system with high-performance GPGPUs, which we anticipate will enable a dramatic improvement in the performance of applications in areas such as computational fluid dynamics and molecular science. Moreover, the High-performance Computational Server System will include a visualization server equipped with remote screen sharing functionality and high-capacity memory, as well as a variety of visualization tools. This will make it possible to perform pre/post-processing of computations for massive volumes of data.
About Fujitsu Limited
Fujitsu is a leading provider of information and communication technology (ICT)-based business solutions for the global marketplace. With approximately 170,000 employees supporting customers in over 100 countries, Fujitsu combines a worldwide corps of systems and services experts with highly reliable computing and communications products and advanced microelectronics to deliver added value to customers. Headquartered in Tokyo, Fujitsu Limited reported consolidated revenues of 4.5 trillion yen (US$55 billion) for the fiscal year ended March 31, 2011. For more information, please visit www.fujitsu.com.