June 04, 2009
Moving towards large-scale vector cloud computing
TOKYO, June 2 -- The Cyberscience Center, Tohoku University, the Cybermedia Center, Osaka University, National Institute of Informatics (NII) and NEC Corporation jointly announced today the successful demonstration of one of the world's fastest vector supercomputing environments by creating a single virtual system through the connection of two remotely located vector supercomputers on NAREGI (National Research Grid Initiative) middleware developed by NII.
Vector computers are well suited to large-scale scientific computing, such as fluid dynamics, structural dynamics, new material research and climate simulations, carried out with high computational efficiency, and they are an important base for cutting-edge R&D and product design. Tohoku University has deployed 16 nodes of NEC's SX-9 supercomputer (maximum theoretical vector performance: 26.2 TFLOPS) and Osaka University has deployed 10 nodes of the SX-9 (maximum theoretical vector performance: 16.4 TFLOPS), each with a high-speed connection to the other through SINET3 (Science Information NETwork 3).
NAREGI middleware enables large-scale computing resources at research and development centers scattered over a wide area to be closely interconnected through high-speed networks. The interconnected resources can then operate as a single massive virtual computer that efficiently runs large-scale parallel simulations, which were formerly beyond the reach of individually isolated computer systems.
A new grid middleware component, the "GridVM for the SX Vector Computer," was developed by enhancing the existing capabilities of the NAREGI middleware, such as job management, information provision and resource usage control. The enhanced GridVM maintains high compatibility with the local job scheduler (NQS) on the SX-9, which enables the efficient use of vector computing resources even in the grid environment. Moreover, it permits the co-existence of conventional (non-grid) jobs and grid jobs, allowing the computing center to provide a pioneering new cloud-computing service.
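The co-existence described above hinges on the grid layer funneling its jobs into the same local queue that conventional users submit to, so a single scheduler enforces one resource-usage policy. A minimal sketch of that idea follows; the command name, options, and queue names are illustrative assumptions in the style of an NQS/PBS-like `qsub`, not the actual GridVM or NQS interface.

```python
# Hypothetical sketch of a GridVM-like shim: grid jobs are translated
# into ordinary local-queue submissions, so grid and non-grid jobs
# share one scheduler. Command-line flags here are illustrative only.

from typing import List, Optional

def build_submit_cmd(script: str, queue: str,
                     grid_job_id: Optional[str] = None) -> List[str]:
    """Build a qsub-style command line for a job script.

    Conventional jobs pass no grid_job_id; grid jobs are tagged with a
    recognizable name so the middleware can track them in the queue.
    """
    cmd = ["qsub", "-q", queue]
    if grid_job_id is not None:
        cmd += ["-N", f"grid-{grid_job_id}"]
    cmd.append(script)
    return cmd

# A conventional (non-grid) submission and a grid submission end up in
# the same queue, differing only in the tracking tag:
print(build_submit_cmd("sim.sh", "vector"))
print(build_submit_cmd("sim.sh", "vector", grid_job_id="42"))
```

Because both paths produce ordinary local submissions, the local scheduler remains the single point of accounting and resource control, which is the property the enhanced GridVM preserves.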
In this demonstration experiment, a parallelized electromagnetic field simulation program was run by interconnecting the SX-9 systems at Tohoku University and Osaka University, using parallel programming libraries for shared memory and distributed memory. As the first step in establishing the cloud computing environment, the computing resources at both centers were virtualized by integrating the newly developed GridVM for the SX-9 into the NAREGI middleware. The experiment further demonstrated that jobs can be allocated automatically and selectively between the two supercomputers according to the load status of each system, maximizing resource utilization across the entire grid.
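The load-based allocation described above can be sketched as a simple policy: among the sites with enough free nodes for a job, pick the least-loaded one. The sketch below is an assumption-laden illustration, not the NAREGI scheduler; the site names, node counts, and load metric are hypothetical (the node counts merely echo the 16- and 10-node SX-9 deployments mentioned earlier).

```python
# Hypothetical sketch of load-aware job allocation across two grid
# sites. Names, node counts, and the load metric are illustrative.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Site:
    name: str
    total_nodes: int
    busy_nodes: int

    @property
    def load(self) -> float:
        # Fraction of nodes currently occupied at this site.
        return self.busy_nodes / self.total_nodes

def allocate(job_nodes: int, sites: List[Site]) -> Optional[Site]:
    """Place a job on the least-loaded site with enough free nodes.

    Returns the chosen site (with its load updated), or None if no
    site can currently accommodate the job.
    """
    candidates = [s for s in sites
                  if s.total_nodes - s.busy_nodes >= job_nodes]
    if not candidates:
        return None
    best = min(candidates, key=lambda s: s.load)
    best.busy_nodes += job_nodes
    return best

sites = [Site("tohoku-sx9", 16, 12), Site("osaka-sx9", 10, 4)]
chosen = allocate(3, sites)
print(chosen.name)  # osaka-sx9: load 0.40 beats tohoku's 0.75
```

Even this toy policy exhibits the property the demonstration reported: jobs flow to whichever system is less loaded at submission time, so neither site sits idle while the other queues work.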
Looking forward, these organizations will continue their efforts to realize a vector-based cloud computing environment as a new academic information infrastructure that allows the overall application software to run efficiently with enhanced usability and reduced cost through cooperation with many organizations that possess vector computers.
As a result, an advanced scientific computing environment offering new services is expected to be established.
This research has been conducted as part of the establishment of the Cyber Science Infrastructure promoted by the National Institute of Informatics.
About the NAREGI Middleware System
NAREGI provides fundamental building blocks of the Cyber Science Infrastructure (CSI), and its goal is to provide a large-scale computing environment for widely distributed, advanced research and education (the Science Grid). NAREGI, the National Research Grid Initiative, was launched in 2003 by the Ministry of Education, Culture, Sports, Science and Technology (MEXT). From 2006 through 2007, research and development continued under the "Science Grid NAREGI" program of the "Development and Application of Advanced High-performance Supercomputer" project promoted by MEXT. NAREGI Grid Middleware Ver. 1.0 was released in 2008. NII continues to build out the grid infrastructure through ongoing software maintenance and user support services.
About Tohoku University
Tohoku University has been committed to its "Research First" principle and "Open-Door" policy since its foundation, and is internationally recognized for its outstanding standards in education and research. The university contributes to world peace and equity by devoting itself to research that addresses societal problems and to educating people for leadership roles.
About Osaka University
Osaka University has evolved from its former incarnation since the semi-privatization of Japan's national universities, stepping forward in a new direction. From the beginning, the university has inherited the spirit of the citizens of Osaka embodied in its founding institutions, Kaitokudo and Tekijuku, which were deeply rooted in the city. With this spirit, Osaka University has, across generations, responded to the needs and issues of society under the axiom "Live Locally, Grow Globally." As the world enters a new period in history, Osaka University will use the opportunity afforded by semi-privatization to reaffirm the principles on which it firmly stands while looking forward to a future of substantial development.
About the National Institute of Informatics
As Japan's only general academic research institution seeking to create future value in the new discipline of informatics, the National Institute of Informatics (NII) seeks to advance integrated research and development activities in information-related fields, including networking, software and content. These activities range from theoretical and methodological work to applications. As an inter-university research institute, NII promotes the creation of a state-of-the-art academic-information infrastructure (the Cyber Science Infrastructure, or CSI) that is essential to research and education within the broader academic community, with a focus on partnerships and other joint efforts with universities and research institutions throughout Japan, as well as industries and civilian organizations.
About NEC Corporation
NEC Corporation is one of the world's leading providers of Internet, broadband network and enterprise business solutions dedicated to meeting the specialized needs of a diversified global base of customers. NEC delivers tailored solutions in the key fields of computer, networking and electron devices, by integrating its technical strengths in IT and Networks, and by providing advanced semiconductor solutions through NEC Electronics Corporation. The NEC Group employs more than 150,000 people worldwide. For additional information, visit the NEC Web site at http://www.nec.com.
Source: NEC Corp.