April 23, 2012
April 23 -- National High Performance Computing (HPC) organisations of Denmark, Norway, Sweden and Iceland have pooled resources and powered up an innovative joint supercomputer in Iceland. It is innovative not so much for its technology, but for its concept, placement and operations.
The computer is part of a pilot initiative to test remote hosting, in which computing is brought to the energy source rather than the reverse, as is the norm, thereby yielding substantial savings. Further aims are to understand the political, organizational and technical aspects of joint ownership, administration and operation of such expensive and strategic infrastructure. Due to growing power consumption, supercomputing costs are an increasing economic burden for researchers and their universities. Iceland is an attractive location, with powerful natural resources providing very low-cost electricity and cost-efficient cooling solutions.
Supercomputing – Expensive but necessary for science
High Performance Computing (HPC) enables advanced scientific calculation, simulation and modelling, which, to an increasing extent, are preconditions for the research and innovation that underpin today’s knowledge-driven economy. The Scandinavian countries spend millions of Euros every year on supercomputers and their electricity consumption. “Supercomputing has become fundamental for science and innovation, yet when the cost for hosting and operations is becoming comparable to the costs of hardware, and investments are increasing, we need to look into cost efficient solutions”, says Jacko Koster, director of UNINETT Sigma.
Added to this economic incentive is of course the environmental one. Supercomputers entail a large CO2 footprint when fossil energy sources are used. In Iceland, energy is produced not only at low cost but also from CO2-neutral, renewable hydro and geothermal energy sources. Due to Iceland’s geographical location, it is not feasible to transfer electricity to Europe. Hardware, however, can be moved, and so can data, via the trans-Atlantic fibre-optic data-network infrastructure.
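The economics behind "bring computing to the energy" can be made concrete with a rough back-of-the-envelope comparison. All figures below are illustrative assumptions (electricity prices, cluster load, and PUE values are not NHPC project numbers):

```python
# Rough annual electricity-cost comparison for hosting the same cluster
# at two locations. All numbers are illustrative assumptions, not
# figures published by the NHPC project.

HOURS_PER_YEAR = 24 * 365  # 8760

def annual_cost(it_load_kw, price_eur_per_kwh, pue):
    """Annual electricity cost in EUR.

    PUE (power usage effectiveness) scales the IT load to account for
    cooling and other facility overhead; free cooling lowers it.
    """
    return it_load_kw * pue * price_eur_per_kwh * HOURS_PER_YEAR

cluster_kw = 100  # assumed IT load of a mid-size cluster

# Assumed: continental-Europe price with conventional cooling
continental = annual_cost(cluster_kw, price_eur_per_kwh=0.12, pue=1.8)
# Assumed: cheap geothermal/hydro power with free cooling in Iceland
iceland = annual_cost(cluster_kw, price_eur_per_kwh=0.04, pue=1.2)

print(f"Continental Europe: ~EUR {continental:,.0f}/year")
print(f"Iceland:            ~EUR {iceland:,.0f}/year")
print(f"Savings:            ~EUR {continental - iceland:,.0f}/year")
```

Under these assumed inputs the gap is several-fold per year, which is the kind of recurring saving that, as Koster notes, becomes decisive once operating costs rival hardware costs.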
Cooperation – Joint investment and sharing infrastructure
In the long term, joint large-scale procurements and energy-efficient placement of supercomputers will be increasingly advantageous for the Scandinavian countries as well as for Iceland. This increases value for money and opens the possibility of developing new advanced competencies in the shared operation of remote computing. “We need to constantly develop our understanding of advanced computing and how to operate it in increasingly complex ways”, says Ebba Þóra Hvannberg, director of the project and of Icelandic Supercomputing.
“We must continuously push the total cost of ownership down and increase the value for money”, adds Rene Belso, director of Danish Supercomputing, continuing: “Indeed, we Nordics need to be first movers in all such areas, since we only seem to be able to make a national business case out of the most complex organisation and advanced technology implementations”. As with many other technology fields, e.g. environmental technologies, early public piloting can make the Nordics world leaders in related commercial fields.
Innovative for some; Controversial for others
The project is the result of collaboration between the Danish Center for Scientific Computing (DCSC), the Swedish National Infrastructure for Computing (SNIC), UNINETT Sigma and the University of Iceland. The compute facility will be hosted by Thor Data Center, now part of the Advania family of enterprises. Jacko Koster says that “If the pilot project is successful, successor projects may be defined in the coming years, e.g., for the joint procurement of larger supercomputers or specialized systems, which one country cannot afford alone. Possibly, such Nordic infrastructure can also be a joint contribution to European Infrastructure, like that of the Partnership for Advanced Computing in Europe (PRACE)”.
Such ideas are, understandably, not always shared by the university computer centres presently hosting the supercomputers. Indeed, they often argue the necessity of having the hardware close by, even at higher operating costs. “We do understand the concern of traditional computer centres, but maintain that they also regularly need to review their operations strategy, and think of cost efficiency. There will be a continuous need for complex supercomputing requiring close attention, experimentation and constant tweaking. Therefore, Nordic computer centres should focus on advanced operations and user support, not on hardware maintenance”, says Rene Belso. Sverker Holmgren adds, “Eventually, the aim is that national infrastructures for computational science in the participating countries can further increase focus on delivering high quality services and access to computational infrastructures for their users, whereas the more elementary aspects of the infrastructure (e.g., hosting of equipment) could be handed over to parties that can implement this in a more cost efficient manner, without compromising quality of service”.
The Supercomputer – HP BL280c G6 Servers
“Knowing that the project already involves many complexities of a political, organizational and administrative nature, we aimed for a robust, standard supercomputer architecture, useful to most researchers”, says Ebba Þóra Hvannberg. The system is being delivered by HP, via Opin Kerfi. It is based on a cluster of 288 HP ProLiant BL280c G6 servers with 3456 compute cores, achieving 35 TeraFLOPS of peak performance. Additionally, it includes a 72-terabyte HP IBRIX X9320 storage system. Project management, installation, implementation and testing of the equipment were the responsibility of Opin Kerfi. The solution fulfilled all the requirements set out in the project scope. Read more at http://nhpc.hi.is.
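The quoted figures are internally consistent, which can be verified with a quick sanity check. The per-server core count, clock speed and FLOPs-per-cycle below are our assumptions for the Xeon generation used in BL280c G6 blades, not published project specifications:

```python
# Sanity check of the quoted system figures: 288 servers, 3456 cores,
# ~35 TFLOPS peak. Clock speed and FLOPs/cycle are assumptions
# consistent with the Xeon generation of the BL280c G6, not specs
# published by the NHPC project.

servers = 288
cores_per_server = 12   # assumed: two 6-core Xeons per blade
clock_ghz = 2.53        # assumed clock speed
flops_per_cycle = 4     # double precision with SSE: 2 adds + 2 muls

cores = servers * cores_per_server
peak_tflops = cores * clock_ghz * flops_per_cycle / 1000

print(cores)                    # 3456, matching the article
print(round(peak_tflops, 2))   # close to the quoted 35 TFLOPS
```

Peak figures like this assume every core issues its maximum of floating-point operations every cycle; sustained application performance is typically well below it.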
The Danish Center for Scientific Computing (DCSC) is a national research infrastructure under the Danish Ministry of Science, Innovation and Higher Education, providing Scientific or High Performance Computing as well as Distributed Computing infrastructure to Danish researchers who work with scientific calculations, simulations and modelling. Web: www.dcsc.dk; Mail: Rene Belso, firstname.lastname@example.org
The Swedish National Infrastructure for Computing (SNIC) is a national metacentre for high‐performance computing under the Swedish Research Council. SNIC is responsible for providing a balanced and cost‐efficient ecosystem of large‐scale computing and data storage resources for Swedish research. SNIC also participates in several international initiatives and projects on different aspects of computing and data storage. Web: http://www.snic.vr.se, Mail: Sverker Holmgren, Sverker.Holmgren@it.uu.se
About UNINETT Sigma
UNINETT Sigma coordinates the Norwegian procurement and operation of national equipment for advanced scientific computing for the Research Council of Norway, in collaboration with four universities in Oslo, Bergen, Tromsø and Trondheim. Its responsibilities include ensuring long‐term development of the infrastructure, including storage of scientific data. In addition, the company coordinates the Norwegian effort within grid infrastructure and represents Norway in international infrastructures and initiatives. Web: http://www.uninett.no/sigma; Mail: Jacko Koster, email@example.com
About The University of Iceland
The University of Iceland provides local representation in Iceland for the Nordic HPC members with regard to liaison with the Icelandic government and hosting providers in Iceland. This is done in close cooperation with DCSC, SNIC and UNINETT Sigma. The Computing Services of the University of Iceland supervise the university’s computer systems. Web: www.rhi.hi.is/en; Mail: Ebba Þóra Hvannberg, firstname.lastname@example.org
About The Advania Thor Data Center
The Advania Thor Data Center is a new, Tier 3, highly secure and modular data center, located 10 minutes from Reykjavík city center, that specializes in flexible and high-density hosting solutions. Taking advantage of Iceland's unique climate and natural resources, this 28,000-square-foot facility is extremely energy efficient and completely emission free, making it an attractive option for companies in Europe and the US that are looking for an environmentally friendly data center facility with reliable and cost-effective hosting services. Web: http://www.thordc.com Mail: Benedikt Gröndal, email@example.com
About The Icelandic Ministry of Education, Science and Culture
The Icelandic Ministry of Education, Science and Culture, has a strategic policy interest in the project, and has contributed with coordination, liaison as well as funding. Web: http://eng.menntamalaraduneyti.is/ Mail: Hellen M. Gunnarsdóttir, Hellen.Gunnarsdottir@mrn.is
About Opin Kerfi
Opin Kerfi is a veteran system integrator that has consistently provided innovative and efficient services to clients, focusing on consultation, integration, operations and solutions within the IT, communications and data centre sectors. Opin Kerfi is an HP Gold Partner, the sole distribution and support centre for HP in Iceland, a Cisco Silver Partner, a Microsoft Distribution Partner and a Microsoft Gold Partner. Web: http://www.ok.is/english; Mail: firstname.lastname@example.org.
Source: NHPC Project