February 03, 2010
BRUSSELS, Feb. 3 -- IDC, working with supercomputing experts from Teratec, France, Daresbury Laboratory, UK, Leibniz-Rechenzentrum, Germany, and Forschungszentrum Jülich, Germany, has been awarded a contract by the European Commission to develop a strategic agenda for high performance computing (HPC) in Europe.
The study will provide the research and analysis needed to increase the HPC capabilities available for the advancement of open science and to increase the competitiveness of the European Union in the supply and use of HPC systems. The study has a mandate to look at the key strategic developments in HPC through to 2020, as well as examining the investments, structures, and coordination needed to develop supercomputing e-infrastructures across Europe.
Commenting on the study's purpose and scope, Chris Ingle, associate vice president of Consulting at IDC, said, "This is a critical moment for high performance computing leadership. Europe has been a leader in this field in the past and, with the right investments, can continue to develop a strong HPC industry and benefit from the use of HPC in science and throughout society."
Gabriella Cattaneo, director of Competitiveness & Innovation Policies & Strategies, Europe, at IDC Government Insights, added, "A policy agenda for HPC will help support the European Commission's goal to develop EU ICT infrastructures for e-science, strengthening the European scientific research and high-tech capabilities."
The research contract requires a detailed comparative analysis of HPC investments and funding structures globally, as well as the impact of HPC on scientific and industrial leadership. Earl Joseph, program vice president of IDC's Technical Computing group, noted that "HPC investment, and the associated productivity gains and the resulting research leadership from that investment, has become critical in many countries. Although the U.S. and Japan have vied for supercomputing performance leadership over the past few years, other countries are quickly developing their own HPC industries and capabilities in order to increase their economic competitiveness and scientific leadership."
Developing a view of the technologies that are needed for a successful HPC strategy is critical to this project. Steve Conway, research vice president in IDC's Technical Computing group, added, "The link has been firmly established between HPC and scientific and economic advancement. The investments needed for the next generation of HPC systems will be substantial. Deciding on the optimal areas of investment -- systems, storage, software, and people skills -- that are most valuable to European HPC users, and the wider economy, is critical to the EU's success in developing its HPC agenda. Many countries are installing multiple petascale supercomputers today and are laying the foundation for using exascale systems in a few years."
The study is scheduled to run for seven months and will provide policymakers with an analysis of the HPC industry from 2010 to 2020, a view of the technology requirements of the HPC industry in 2020, and a strategic agenda for HPC in Europe.
IDC is the premier global provider of market intelligence, advisory services, and events for the information technology, telecommunications, and consumer technology markets. IDC helps IT professionals, business executives, and the investment community make fact-based decisions on technology purchases and business strategy. More than 1,000 IDC analysts provide global, regional, and local expertise on technology and industry opportunities and trends in over 110 countries. For more than 46 years, IDC has provided strategic insights to help our clients achieve their key business objectives. IDC is a subsidiary of IDG, the world's leading technology media, research, and events company. You can learn more about IDC by visiting www.idc.com.