November 09, 2010
Output to be employed in maximising the performance of the Next-Generation Supercomputer (the "K computer")
TOKYO, Nov. 9 -- Fujitsu Limited and Fujitsu Laboratories of Europe Ltd today announced the launch of the Open Petascale Libraries (OPL) project, a global collaboration initiative to develop a mathematical library that will serve as a development platform for applications running on petascale-class supercomputers. Initially involving ten partners, including universities and research institutions, the project will make the developed code publicly available in open-source form, thereby contributing to the computational science community as a whole. In addition, the output from the OPL project will be applied to help accelerate application development for the Next-Generation Supercomputer (the "K computer"), which is scheduled to begin operation in fiscal 2012. As a result, this project is expected to make an important contribution to a range of fields, such as the life sciences, the development of new materials and sources of energy, disaster prevention and mitigation, manufacturing technologies and basic research into the origins of matter and the universe.
The launch of the OPL project is scheduled to coincide with SC10, a conference bringing together supercomputer professionals from around the world, with the project's inaugural workshop to be held on November 14 in New Orleans, La.
Comment from Dr Kimihiko Hirao, director of the RIKEN Advanced Institute for Computational Science:
"Science in the 21st century needs to contribute to the sustainability of human society and produce technologies that support individuals. Supercomputing today is an invaluable foundation for advancing science and technology, and the scientific and technological achievements and knowledge gained through supercomputing will benefit humanity on many fronts. International collaboration is also increasingly important. This project follows this direction, and we aim to participate actively and produce meaningful results."
Comment from Professor Jack Dongarra of the University of Tennessee:
"The OPL project is an important step in the right direction. Open software initiatives like this succeed at developing high-quality, standardised software and building new partnerships. Fujitsu's initiatives should be recognised as a significant advancement in the development process of petascale software and, more importantly, in building the collaborative communities that facilitate this development."
Open Petascale Libraries Project -- Aims and Objectives
The aim of the OPL project is to develop a mathematical library that will play an important role in each of the representative application areas for petascale supercomputers. Target systems for the library are the Next-Generation Supercomputer and x86 HPC clusters, the industry-standard platforms used as supercomputers today. The library's parallelisation will adopt a hybrid parallel programming model, which is effective for today's multicore supercomputers. By using the code generated through this project, application developers will be able to maximise the performance of petascale supercomputers.
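The hybrid parallel programming model referred to above combines coarse-grained distribution of work across nodes (typically via MPI) with finer-grained shared-memory parallelism across the cores within each node (typically via OpenMP). The press release does not specify OPL's implementation, so the following is an illustrative sketch only: a real petascale library would use MPI and OpenMP in C or Fortran, whereas here the same two-level decomposition is mimicked in portable Python, with all function names invented for the example.

```python
# Conceptual sketch of a hybrid parallel programming model:
# an outer, coarse-grained level that splits a problem across
# nodes (MPI ranks on a real machine) and an inner level that
# splits each node's share across its cores (OpenMP threads on
# a real machine). For portability this sketch uses thread pools
# at both levels; the two-level decomposition is the point.
from concurrent.futures import ThreadPoolExecutor

def split(seq, parts):
    """Divide seq into roughly equal contiguous chunks."""
    step = max(1, len(seq) // parts)
    return [seq[i:i + step] for i in range(0, len(seq), step)]

def node_partial_sum(chunk, cores=4):
    """Inner level: one 'node' sums its chunk across its 'cores'."""
    with ThreadPoolExecutor(max_workers=cores) as pool:
        return sum(pool.map(sum, split(chunk, cores)))

def hybrid_sum(data, nodes=4):
    """Outer level: distribute across 'nodes', then reduce the
    partial results, as an MPI reduction would on a real system."""
    with ThreadPoolExecutor(max_workers=nodes) as pool:
        return sum(pool.map(node_partial_sum, split(data, nodes)))

print(hybrid_sum(list(range(1000))))  # 499500
```

The appeal of the hybrid model on multicore machines is that message passing is confined to the (relatively few) nodes, while the many cores inside a node share memory directly, avoiding the overhead of one message-passing rank per core.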
Within the OPL project, the mathematical library will be developed through collaboration with computational scientists and application developers as open-source software. Fujitsu and Fujitsu Laboratories of Europe Ltd, which possess an intimate knowledge of petascale supercomputers, will provide organisations participating in the project with technical information and a development environment. Furthermore, by making the code developed as part of the project publicly available in open-source form, it is expected to be widely employed in the broad range of fields in which petascale supercomputers are utilised.
The OPL project is being established by ten initial participating organisations, including universities and research institutions from Europe, the US, Asia and Oceania: The Society of Scientific Systems (Japan), The Australian National University (Australia), Imperial College London (UK), The Innovative Computing Laboratory at The University of Tennessee (US), The Numerical Algorithms Group (NAG) (UK), Oxford e-Research Centre (UK), The Science and Technology Facilities Council (UK), University College London (UK), Fujitsu Limited, and Fujitsu Laboratories of Europe Ltd. RIKEN (Japan) and The National Institute of Informatics (NII) (Japan) are expected to join in the near future. The OPL project is expected to attract the participation of additional organisations that agree with its goals and can contribute to achieving them.
An advisory panel has been established to provide guidance on the project's overall activities from both a technological and strategic standpoint. A number of key experts will participate in the panel, including Dr Kimihiko Hirao, director of the RIKEN Advanced Institute for Computational Science, a newly formed international centre of excellence which aims to generate advanced scientific achievement and technical breakthroughs through use of the Next-Generation Supercomputer; Professor Jack Dongarra (of the University of Tennessee), one of the authors of the LINPACK, LAPACK, and ScaLAPACK mathematical libraries and one of the creators of the TOP500 supercomputer list; Professor Bill Gropp (of the University of Illinois), the creator of the PETSc parallel numerical library and one of the driving forces behind the development of MPI; and Professor Anne Trefethen (of the University of Oxford), who is known for her contributions to both academic and commercial mathematical library design and development.
Background and Technological Challenges
Petascale supercomputers, as exemplified by the Next-Generation Supercomputer, are capable of quickly performing large-scale and advanced computations that are difficult to solve using normal computers. As such, they are vital tools for solving important issues facing society, including the development of new medicines and improved healthcare, the development of new materials and sources of energy, and strategies for disaster prevention and mitigation; for improving manufacturing technologies; and for basic scientific research including the origins of matter and the universe. In order to maximise the performance of petascale supercomputers -- which perform massive-scale parallel computations by linking tens of thousands of processors, each featuring many computational cores -- it is necessary to develop applications that can efficiently coordinate hundreds of thousands of computational cores and smoothly perform these parallel computations. As a result, developing such applications has become a significant challenge for computational scientists.
A powerful approach to overcoming this challenge is to develop a common mathematical library that can be employed by applications in each area to fully realise the potential performance of petascale supercomputers, and such a library is highly anticipated by many application developers. Developing a mathematical library for petascale supercomputer applications requires far deeper knowledge of computer architecture and applications compared to the mathematical libraries used in existing supercomputer applications, underlining the significance of the OPL project's collaborative approach.
OPL Project Website: http://www.openpetascale.org/
The Society of Scientific Systems: http://www.ssken.gr.jp/MAINSITE/ (Japanese only)
The National Institute of Informatics: http://www.nii.ac.jp/en/
The Australian National University: http://www.anu.edu.au/
Imperial College London: http://www3.imperial.ac.uk/
The Innovative Computing Laboratory at The University of Tennessee: http://icl.cs.utk.edu/
The Numerical Algorithms Group (NAG): http://www.nag.co.uk/
Oxford e-Research Centre: http://www.oerc.ox.ac.uk/
The Science and Technology Facilities Council: http://www.stfc.ac.uk/
University College London: http://www.ucl.ac.uk/
About Japan's Next-Generation Supercomputer
Japan's Next-Generation Supercomputer, the "K computer": Under the Ministry of Education, Culture, Sports, Science and Technology's (MEXT) High Performance Computing Infrastructure initiative, Fujitsu has worked with RIKEN to develop a scalar parallel supercomputer. "K" is the nickname for the Next-Generation Supercomputer, chosen and announced by RIKEN in July 2010. "K" here draws upon the Japanese word "Kei" for 10^16, representing the system's performance goal of 10 petaflops. In its original sense, "Kei" expresses a large gateway in Japanese, and it is hoped that the system will be a new gateway to computational science.
About Fujitsu
Fujitsu is a leading provider of ICT-based business solutions for the global marketplace. With approximately 170,000 employees supporting customers in 70 countries, Fujitsu combines a worldwide corps of systems and services experts with highly reliable computing and communications products and advanced microelectronics to deliver added value to customers. Headquartered in Tokyo, Fujitsu Limited (TSE:6702) reported consolidated revenues of 4.6 trillion yen (US$50 billion) for the fiscal year ended March 31, 2010. For more information, see www.fujitsu.com.
About Fujitsu Laboratories of Europe Limited
A leading research organisation, Fujitsu Laboratories of Europe is part of Fujitsu's global R&D network, with a dedicated division focused on high performance computing. Based in the UK, it acts as an important portal between technology and business, working to shorten the overall R&D cycle, en route to transforming future technologies into business realities. Fujitsu's technology roadmap is based on consistent R&D activity, in areas ranging from materials and devices, to networks, IT systems and solutions.