March 26, 2012
EUDAT is a pan-European data project, bringing together a unique consortium of research communities and national data and high performance computing centers, aiming to contribute to the production of a collaborative data infrastructure (CDI) to support Europe’s scientific and research data requirements.
In Barcelona from March 7-8, EUDAT held its first user forum, providing an opportunity for 18 research communities across Europe to discuss their specific data requirements and expectations. At this forum, EUDAT unveiled a set of cross-disciplinary data services, designed to service all European research communities. The deployment of each service is being coordinated by multi-disciplinary task forces comprising representatives from user communities and data centers. EUDAT aims to deliver pilot services in 2012, with full services available to all research communities by the end of 2014.
But what exactly are these services, and what benefits can user communities expect from them?
Although research communities from different disciplines have different ambitions and approaches, particularly with respect to data organization and content, they also share basic service requirements. This commonality makes it possible for EUDAT to establish shared pan-European data services, designed to support multiple research communities.
“The way data is organized differs from one community to the next,” says EUDAT Scientific Coordinator Peter Wittenburg, from the Max Planck Institute for Psycholinguistics at Nijmegen, the Netherlands. “EUDAT must acknowledge this heterogeneity as a starting point, while looking at the same time for some degree of integration through common solutions and services. For the CDI to succeed, an abstract architecture is required, allowing users’ pre-existing data solutions to be integrated with data centers that support common data services.”
There is strong demand among research communities for data replication services associated with better access to computing power. This demand underpins two of EUDAT’s common data services – safe data replication, and the ability to move data to and from HPC facilities. When combined, these services will constitute a fundamental component of the CDI:
The ‘safe replication’ service will enable data replication from one site to another, for example, from a scientifically oriented community center to a data center.
“The service will be flexible as well as secure,” explains Mark van de Sanden, who supervises this work for EUDAT from the SARA computing center in the Netherlands. “It will allow, for example, users to ask for the creation of M replications of a data set, to be stored at different data centers for N years, with the possibility of excluding centers X to Z from the replication scheme. EUDAT has access to huge data storage facilities, provided by national data centers, and can use these to support research communities who lack a robust data infrastructure or who want multiple copies of data sets in geographically dispersed locations.”
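The kind of policy van de Sanden describes can be captured in a small data structure. The sketch below is purely illustrative, not EUDAT software: the names `ReplicationPolicy` and `select_targets` are hypothetical, and it simply picks M eligible centers while honoring the user's exclusion list.

```python
from dataclasses import dataclass, field

@dataclass
class ReplicationPolicy:
    """Hypothetical policy: keep `copies` replicas for `retention_years`,
    never placing a replica at an excluded center."""
    copies: int                      # M replicas requested by the user
    retention_years: int             # N years the replicas must be kept
    excluded_centers: set = field(default_factory=set)

def select_targets(policy, available_centers):
    """Choose replica locations satisfying the policy, or raise."""
    candidates = [c for c in available_centers
                  if c not in policy.excluded_centers]
    if len(candidates) < policy.copies:
        raise ValueError("not enough eligible centers for requested replicas")
    return candidates[:policy.copies]

policy = ReplicationPolicy(copies=2, retention_years=10,
                           excluded_centers={"center_X"})
targets = select_targets(policy, ["center_X", "center_Y", "center_Z"])
print(targets)  # ['center_Y', 'center_Z']
```

A real deployment would also track replica health and re-replicate on failure; the point here is only that "M copies, N years, excluding X to Z" is a policy the infrastructure can enforce mechanically.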
Another strength of the EUDAT consortium is the massive amount of computing power available at European HPC centers, most of which are members of PRACE and among the most advanced supercomputing centers in the world. EUDAT will leverage the experience gained in DEISA and PRACE to build an infrastructure that can provide access to this computing power.
“Once users have their data replicated on the EUDAT infrastructure, we expect they will also want to use neighboring computing capacities to analyze that data,” says van de Sanden. “We are working on ways to move data between the EUDAT infrastructure and the HPC workspace.”
These services will be enormously beneficial to research communities, providing a storage solution coupled with access to the most powerful computing machines in Europe. Large-scale research infrastructures (e.g., those arising from the ESFRI roadmap) will be able to use the EUDAT infrastructure to complement their own solutions, and smaller research communities will be able to rely on the EUDAT infrastructure for their data services, removing the need for large-scale capital investment in infrastructure development.
Complex problems or ‘grand challenges’ increasingly require a trans-disciplinary approach and rely on data from multiple research fields. In this context, making data from various disciplines available in one collaborative infrastructure is extremely beneficial. Thus there is widespread recognition, among communities that use data and those that fund e-infrastructures, that data federation must be improved. Improved data federation leads to better data preservation, optimized data access and increased usability, and such improvements facilitate data reuse in new contexts, across different communities and between disciplines.
To achieve these goals, data stored on the EUDAT infrastructure must be visible, readable, understandable, and easily accessible by all, especially researchers from disciplines other than the one that created the original data.
Part of the challenge resides in the understanding of the data sets and finding good metadata solutions that allow data from different communities to be integrated in easily searchable collections. To this end, one of EUDAT’s tasks is to create a catalog that allows users to search stored data. User communities need to be heavily involved with this task, since they are the ultimate providers of metadata.
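At its core, a cross-community catalog of this kind is an index from metadata terms to data records. The toy sketch below is not EUDAT software (the record ids and `build_index` function are invented for illustration); it only shows how records carrying community-supplied keywords could be gathered into one searchable index spanning disciplines.

```python
from collections import defaultdict

def build_index(records):
    """Map each lowercase metadata keyword to the ids of matching records."""
    index = defaultdict(set)
    for rec in records:
        for kw in rec["keywords"]:
            index[kw.lower()].add(rec["id"])
    return index

# Illustrative records from three hypothetical community archives.
records = [
    {"id": "ling-001",    "keywords": ["linguistics", "corpus"]},
    {"id": "climate-042", "keywords": ["climate", "simulation"]},
    {"id": "bio-007",     "keywords": ["physiology", "simulation"]},
]
index = build_index(records)
print(sorted(index["simulation"]))  # ['bio-007', 'climate-042']
```

Note that a query for "simulation" crosses community boundaries, which is exactly why the metadata must come from the communities themselves: only they can supply terms that make their data findable by outsiders.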
In collaboration with EPIC, EUDAT will also deploy persistent identification services, providing robust, highly available and high-performing systems that release persistent identifiers (PIDs) that in turn can be used within research communities, and the EUDAT CDI, to regulate data movement and search and query.
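Conceptually, a PID service maintains a mapping from a stable identifier to the current locations of a data object, so replicas can move between centers without breaking references. The minimal sketch below uses hypothetical names (`PidRegistry`, `mint`, `resolve`); real EPIC PIDs are Handle-system identifiers with a much richer API, and the numeric prefix here is purely illustrative.

```python
import itertools

class PidRegistry:
    """Toy registry: mints identifiers and resolves them to replica URLs."""
    def __init__(self, prefix="11111"):   # Handle-style prefix (illustrative)
        self.prefix = prefix
        self._counter = itertools.count(1)
        self._records = {}

    def mint(self, locations):
        """Issue a new persistent identifier for a set of replica locations."""
        pid = f"{self.prefix}/{next(self._counter):08d}"
        self._records[pid] = list(locations)
        return pid

    def resolve(self, pid):
        """Return the current replica locations for a PID."""
        return self._records[pid]

    def update(self, pid, locations):
        """Replicas moved: update locations, but the PID stays the same."""
        self._records[pid] = list(locations)

reg = PidRegistry()
pid = reg.mint(["https://data.center-a.example/obj1"])
reg.update(pid, ["https://data.center-b.example/obj1"])
print(pid, "->", reg.resolve(pid))
```

The indirection is the whole point: publications and workflows cite the PID, while the infrastructure is free to replicate and relocate the underlying bits.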
EUDAT’s prime objectives are to build services that are shared across disciplines, and to support cross-disciplinary data-intensive science. Despite this emphasis on commonality, some services can be tailored to a smaller subset of communities or even to individual researchers. EUDAT will host ‘community services,’ allowing user communities to use EUDAT resources to deploy and run specific services on the EUDAT infrastructure. Individual researchers will also be catered to, with a ‘simple store’ service that allows the storage and sharing of ‘small’ data that are not part of official data sets or collections, but are equally important for the advancement of research.
“If EUDAT is to stimulate cross-disciplinary research, it must become a major portal for scientific data. It must offer state-of-the-art services, not only to research institutions, but also to individual researchers, since they are the ultimate users of the infrastructure,” says Dr. Kimmo Koski, CSC Managing Director and EUDAT Project Coordinator. “Services developed as part of the CDI must be user-driven, which means intense collaboration with users is absolutely crucial. We know users have high expectations from EUDAT, and we are looking forward to meeting these expectations. There will be challenges along the way, but the path becomes much clearer thanks to these strong links with user communities.”