XSEDE14 Workshop Wrestles with Reproducibility

By Faith Singer-Villalobos

August 19, 2014

Imagine that you are trying to create a new sauce for a special dish, or the perfect adhesive for a new aircraft, or you’re flying a helicopter looking for victims of a natural disaster — and you succeed at each of these. This is wonderful news for your dinner guests, or the company that will use the new adhesive, and especially for the victims of the natural disaster. But the question is — Could you do it again and get the same results? Or, did you just get lucky the first time?

At the XSEDE14 conference in Atlanta, a roomful of computational veterans from inside and outside the NSF Extreme Science and Engineering Discovery Environment (XSEDE) participated in a full-day workshop on the topic of reproducibility, and clearly, there is a lot at stake.

“There is a growing awareness in the computational research community that this question of ‘can we do it again’ is becoming important for us in new ways, and the stakes are high — computational research is helping to save lives, answering policy questions, and making an impact on the world,” said Doug James, an HPC researcher at the Texas Advanced Computing Center, in his opening remarks for the workshop.

People have been thinking about reproducibility for a long time. It is one thing to reproduce a small-scale lab experiment or a computation on your desktop, but it is an entirely different matter to reproduce, for example, something the Hubble Space Telescope did over five years at a cost of hundreds of millions of dollars.

So, what is reproducibility? One working definition: the ability to repeat an experiment to the degree necessary to assess the correctness and importance of the results. Practices that promote reproducibility include anything that makes a researcher more organized, provides a better audit trail, and allows a researcher to track source code and know what data sources were used.
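In practice, even a small amount of automation helps. The sketch below is illustrative only (it is not a tool discussed at the workshop, and the file names are hypothetical): it shows one way a researcher might leave an audit trail, recording the exact source-code version, checksums of the input data, and the runtime environment alongside the results. It assumes the project lives in a git checkout.

    # provenance.py: a minimal audit-trail sketch (illustrative only).
    import hashlib
    import json
    import platform
    import subprocess
    import sys
    from datetime import datetime, timezone

    def sha256(path):
        # Checksum an input file so the exact data used can be verified later.
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def record_provenance(input_files, outfile="provenance.json"):
        # Capture which code, which data, and which environment produced a result.
        record = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "git_commit": subprocess.run(  # assumes the project is a git checkout
                ["git", "rev-parse", "HEAD"],
                capture_output=True, text=True, check=True,
            ).stdout.strip(),
            "python_version": sys.version,
            "platform": platform.platform(),
            "inputs": {path: sha256(path) for path in input_files},
        }
        with open(outfile, "w") as f:
            json.dump(record, f, indent=2)
        return record

Calling record_provenance(["observations.csv"]) at the start of a run leaves a JSON record that can be archived together with the published results.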

Victoria Stodden of Columbia University, who led a roundtable on the topic of reproducibility in 2009 and an ICERM workshop on Reproducibility in Computational and Experimental Mathematics in 2012, gave the keynote address at the XSEDE14 workshop. She raised the issue of a credibility crisis.

“Reproducibility has hit the popular press over the last several months,” Stodden said, citing recent coverage by The Economist (October 2013) and editorials in Nature and Science. Issues around the importance of reproducibility were catalyzed by the clinical-trials scandal in computational genomics at Duke University, where mistakes in the research were uncovered in 2010 in The Cancer Letter.

“This really goes to the heart of how important reproducibility issues are, and how we need to reconstruct the pipeline of thinking, reasoning and observation that a scientist does, but for the computational aspects, too, where many of these decisions are being manifest.”

Stodden also touched on separate discussions underway about other aspects of reproducibility, such as statistical reproducibility, which questions the research decisions about statistics and data analysis, and empirical reproducibility, which focuses on reporting standards for the physical experiment rather than the computational steps.

Everyone in the room agreed that computational research is now in a position where complexity and mission criticality take on new import, and the community needs to develop confidence in the results of that research. But what should our priorities be? Training? Better tools? New steps in proposals and submissions?

NCSA Director Ed Seidel shared his view that there are three levels where things have to happen to get momentum moving in the right direction: 1) the campus level; 2) the national level; and 3) the publisher level.

Seidel said that local campuses have to think about how they can begin to support local data services, not just repositories, so there is a local structure. “This is a policy issue that vice chancellors for research and provosts need to take seriously…and there are organizations in place like Internet2 and Educause that span the research universities across the country that can help,” Seidel said. “It’s important to frame it not just as data but more around reproducibility; scope the problem beyond data and the data infrastructure.”

In addition, Seidel cited the XSEDE initiative as a good organization for aiding the reproducibility process. XSEDE was instrumental in starting the National Data Service Consortium, which aims to organize a number of individual data-services efforts around tools for creating data collections, associating Digital Object Identifiers (DOIs) with them, and providing linking services to publishers. While typically thought of as pointers to data collections, DOIs can also attach to code, a crucial part of reproducibility.

Professional societies and journals can play a part as well. Many are starting to require links to the data referenced in a publication. But reproducible practices must start in the research group.

Victoria Stodden, Assistant Professor, Department of Statistics, Columbia University, and Lorena Barba, George Washington University

Lorena Barba of George Washington University, a leading advocate of reproducible science, said, “Conducting research reproducibly doesn’t mean someone else will reproduce the results, but that you are doing it as if someone would do this. By providing full documentation, access to input data and source code, the community will have confidence in your results and will label them as reproducible even if they are, in fact, not reproduced.”

Many other people added to the conversation, including Mark Fahey of the National Institute for Computational Sciences. According to Fahey, the centers need to step up and take some responsibility for providing documentation about how users build and run their codes. Fahey said, “Centers can automatically collect information for each code built and each run of the code, and this information can be made available back to the researcher for publications if desired. There are already two prototypes (ALTD and Lariat) at a variety of computing centers around the world that collect a good portion of this information, and a new improved infrastructure is in development called XALT funded by NSF.”
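To make the idea concrete, here is a deliberately simplified sketch of run tracking in the spirit of those tools. It is not how ALTD, Lariat, or XALT actually work (they hook into the linker and job launcher at the system level), and the log location and record fields here are assumptions.

    # runlog.py: wrap a job launch so every run of a code is recorded
    # (illustrative only; real center tools hook the linker and launcher).
    import json
    import os
    import subprocess
    import sys
    from datetime import datetime, timezone

    LOG_PATH = os.path.expanduser("~/.runlog.jsonl")  # hypothetical location

    def logged_run(argv):
        # Execute the command, then append one JSON line describing the run.
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "user": os.environ.get("USER", "unknown"),
            "cwd": os.getcwd(),
            "command": argv,
        }
        result = subprocess.run(argv)
        entry["exit_code"] = result.returncode
        with open(LOG_PATH, "a") as f:
            f.write(json.dumps(entry) + "\n")
        return result.returncode

    if __name__ == "__main__":
        # Example: python runlog.py ./my_simulation --input data.nc
        sys.exit(logged_run(sys.argv[1:]))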

Recommendations

At the outset of the workshop, the group committed to a key deliverable: recommendations in the form of priorities and initiatives for organizations and communities.

“It’s been implicit that ‘of course, this is what people do’: system administrators and researchers check to ensure that codes get the same results after system upgrades and when porting to new platforms. But reproducibility has never been a formal enterprise,” said Nancy Wilkins-Diehr of the San Diego Supercomputer Center, who summarized the workshop and helped facilitate suggestions for moving forward.
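The check Wilkins-Diehr describes can be as modest as a regression test against archived reference output. The sketch below is illustrative (the file names and tolerances are assumptions, not a tool from the workshop): it compares a fresh run's results to a stored baseline within a floating-point tolerance, since bitwise identity is rarely achievable across compilers and platforms.

    # regression_check.py: compare new output to an archived baseline
    # (illustrative only; file names and tolerances are assumptions).
    import numpy as np

    def check_against_baseline(new_file, baseline_file, rtol=1e-10, atol=1e-12):
        # Bitwise identity is rarely preserved across compilers and platforms,
        # so test agreement within a floating-point tolerance instead.
        new = np.loadtxt(new_file)
        baseline = np.loadtxt(baseline_file)
        return new.shape == baseline.shape and np.allclose(
            new, baseline, rtol=rtol, atol=atol
        )

    if __name__ == "__main__":
        ok = check_against_baseline("results_new.txt", "results_baseline.txt")
        print("PASS" if ok else "FAIL: results differ beyond tolerance")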

“This is a good time to do this. Computational science is a respected contributor of the scientific knowledge base. Important decisions are now based on simulation. While this is gratifying, it has very real implications for our responsibilities as well,” she said.

The participants intend to move forward with humility, however. “The vision for the recommendations is to honor the reality of a diverse set of viewpoints and include ideas that might be outside of the box,” James concluded. Everyone agreed that there is a need to promote confidence-building tools and methodologies that do not adversely affect performance.

Recommendations will be ready in the September 2014 timeframe — please refer to xsede.org/reproducibility to read them. In addition, you can send comments and suggestions to [email protected]; the Help Desk will forward any and all inquiries to the XSEDE team working on this initiative.
