Paving the Way for Accelerated Data Sharing: An Interview with Francine Berman

February 27, 2015

How can we create more effective treatments for Alzheimer’s? Can we increase food security across the globe? Is there a way to more accurately predict natural disasters? Solutions to these and other critical challenges are being advanced through the sharing and exchange of research data, and the efforts of the Research Data Alliance (RDA). In this Q&A, Dr. Francine Berman, chair of the Research Data Alliance/United States, comments on the organization’s past, present and future as a prelude to its 5th Plenary Meeting, taking place next week in San Diego, California.

Briefly tell us about the Research Data Alliance. Why was it created?

Dr. Francine Berman: The Research Data Alliance (RDA) was founded in 2013 as an international organization whose purpose is to facilitate the development, coordination and use of infrastructure that supports data sharing and exchange. This is tremendously important for the research community, where technical and social infrastructure — interoperability frameworks, registries, policies, practices and community standards — is needed to utilize and coordinate today’s wealth of data to address complex science and societal challenges.

How do RDA members collaborate?

RDA’s members collaborate through working and interest groups. Interest groups are open-ended and focus on issues around the development and use of data infrastructure. Working groups are short-term (12-18 months) and come together to develop and implement specific tools, policies, practices and products that are adopted and used by projects, organizations, and communities. Interest groups often spawn one or more working groups once individual pieces of infrastructure that need to be developed are identified.

To stay in sync, RDA members meet face to face through Plenaries – three-day meetings held twice a year in various locations worldwide. These are both working meetings, where members advance the efforts of interest and working groups, and community meetings, with speakers of interest and updates from funding agencies and policy makers.

With the Research Data Alliance celebrating its second anniversary as an international organization, has the organization’s mission changed or remained the same?

RDA’s mission remains the same and the RDA community is even more interested in making an impact through the development of infrastructure. Last fall, RDA’s first working groups produced the organization’s first set of RDA deliverables; more are set to be delivered this year. Our first deliverables are giving the RDA community an opportunity to assess what is needed for adoption beyond the users embedded in the working groups. We are really looking forward to our first “Adoption Day” for the RDA community hosted by SDSC on March 8.

What type of growth has the RDA experienced since 2013?

RDA had its first “pre-meeting” in Washington, D.C. in the fall of 2012. About 100 people showed up. Today, RDA has more than 2,600 individual members from over 90 countries and in all sectors. A number of organizations have become organizational members as well. Many RDA members meet both at the international plenaries and as RDA “regional” communities. Regional groups in Europe (RDA/EU), Australia (RDA/AU) and the U.S. (RDA/US) are all active and new regional groups are also coming on board. As RDA celebrates its second birthday, the organization is working closely with a broad set of countries, communities, and agencies to expand both the RDA community and organizational infrastructure to include new participants and partners in Japan, Brazil, Canada, South Africa, and other countries.

What is the U.S. RDA region doing?

In the last year, RDA/US has been focusing on three pilot initiatives: outreach, adoption, and student and early career professional engagement. In the outreach area, RDA/US members have been working with organizations in a variety of domains to bring their issues to existing RDA working and interest groups or to create new ones around their infrastructure needs. In particular, the outreach effort is focusing on helping other groups utilize RDA as a vehicle to develop data sharing infrastructure within the U.S.

In the adoption area, RDA/US has developed the Adoption Day program and assisted specific groups in enhancing their own infrastructure through the adoption of RDA deliverables. In the student and early career professional area, RDA/US has funded students to work with RDA interest and working groups, broadening their own technical knowledge and professional networks during the process.

All of these initiatives have been funded by the National Science Foundation and have been important pilots for the community. The student and early career effort was just funded by the Sloan Foundation as a larger program; Beth Plale, Inna Kouper and Kathy Fontaine are now heading up that activity. Under the leadership of Larry Lannom, RDA/US has also co-sponsored workshops and developed partnerships with CENDI, the National Data Service, the Sustaining Digital Repositories group, and others. These collaborations are creating important linkages within the U.S. data community that can advance the development of needed infrastructure and help us address U.S. data challenges.

Can you describe some of RDA’s deliverables and users?

Absolutely. RDA’s working groups were conceptualized as “tiger teams” that combine members who can build infrastructure with users who need the infrastructure to share data and get their work done. The purpose of RDA infrastructure deliverables is to enable impact.

One of RDA’s first working groups was the Data Type Registries group. Formed at the first RDA Plenary in early 2013, this group’s objective was to make it easier to create machine-readable and researcher-accessible registries of data types that support the accurate use of data, since unclear typing can leave data open to misinterpretation and limit its usefulness. For more than a year, this working group collaborated to develop its model and an implementation. The group’s infrastructure products are being adopted by the European Data Infrastructure (EUDAT), the National Institute of Standards and Technology in the U.S., and additional groups who are applying them to their own research activities.

In contrast, RDA’s Wheat Data Interoperability working group is still in process. This group’s objective is to build an integrated wheat information system for the international wheat community of researchers, growers, breeders, etc. Such a system is critical to advance and sustain wheat data information sharing, reusability and interoperability. The working group includes members from the French National Institute for Agricultural Research, the International Maize and Wheat Improvement Center, and other agriculture-related organizations.

What can we expect at RDA’s Plenary in San Diego?

RDA/US will be hosting the fifth RDA Plenary in San Diego on March 9-11, with Adoption Day on March 8. We’ll welcome a worldwide community of members as well as organizational partners, funders, students, and local colleagues from San Diego. We will have a welcome by Jim Kurose, the new Assistant Director of NSF’s Computer and Information Science and Engineering Directorate, and three keynotes that span the data landscape: Margaret Leinen, Director of the Scripps Institution of Oceanography, will speak about ocean data; Stephen Friend, head of Sage Bionetworks, will talk about open data commons and patient engagement; and Nao Tsunematsu will talk about data policy in Japan.

The meeting will also be a working meeting with much time spent in open working groups and interest groups. This is a great time for new members to experience the RDA discussions, join a group, talk to RDA members about starting a new group to focus on their own issues of interest, or explore organizational membership. We’ll also have panels with funders and plenary sessions around the digital humanities and other topics. It’s a full schedule but we still left time for a beach party.

What’s next for RDA?

The importance of data sharing to support innovation is increasing, and RDA will continue to focus on the development, coordination and use of the infrastructure that enables it. RDA has also emerged as a neutral “town square” in which organizations can come together to develop common agendas – each meeting seems to attract other meetings that can benefit from co-location with the RDA community. In Washington at Plenary 2, the data citation community came together. During the San Diego meeting at Plenary 5, the Preservation and Archiving Special Interest Group will convene to discuss trends and issues related to digital preservation.

Most important is that RDA continues to provide a vehicle for getting things done and accelerating data sharing. The first two years have created a great culture for doing this. Our hope is that the next years accelerate and improve RDA’s usefulness and impact.

About Francine Berman

Francine Berman is Chair of Research Data Alliance / United States and co-Chair of the RDA Council. She is the Edward P. Hamilton Distinguished Professor of Computer Science at Rensselaer Polytechnic Institute (RPI).
