Center for Open Science Launches Challenge to Advance Automated Assessment of Research Claims

Mar. 5, 2025

Charlottesville, VA — The Center for Open Science (COS) has announced the launch of an innovative challenge aimed at advancing automated methods for evaluating research credibility.

Assessing the validity and trustworthiness of research claims is a central, ongoing, and labor-intensive part of the scientific process. Supported by the Robert Wood Johnson Foundation, the Predicting Replicability Challenge seeks to accelerate the development of methods that could dramatically reduce the time and resources needed to assess research credibility.

The challenge invites teams to develop algorithmic approaches that predict the likelihood of research claims being successfully replicated. Participants will have access to training data drawn from the Framework for Open and Reproducible Research Training (FORRT) replication database, which documents more than 3,000 replication effects. New research claims will then be used to test each approach's ability to predict replication outcomes.

"The Predicting Replicability Challenge is a pivotal step toward scaling our capacity to evaluate research claims efficiently and systematically," said Tim Errington, Senior Director of Research at COS. "By uniting expertise across disciplines, we aim to develop innovative tools that complement human judgment in assessing research credibility. This initiative could fundamentally transform how evaluation is conducted, enabling assessments of research claims at a scale that was previously unimaginable."

The initiative encourages innovation and interdisciplinary collaboration, including partnerships between AI/ML experts and domain specialists in the social-behavioral sciences. The challenge is open to participants from academic and non-academic sectors worldwide.

“The FORRT community is honored to see its Replication Hub and downloadable replication data via our R-package contribute to the Predicting Replicability Challenge, reflecting our commitment to advancing the credibility and reproducibility of scientific research,” said team members Flavio Azevedo, Lukas Wallrich, and Lukas Roseler.

“By providing access to the largest databases of replication studies in social sciences, with the support of COS, FORRT hopes to inspire innovative solutions that integrate human expertise with machine learning to scale the evaluation of research claims,” they continued. “This collaboration represents a meaningful step toward fostering trust and integrity in science, and we are excited to see the impact it will have on the broader research community.”

To participate, teams must submit a statement of interest by August 18, 2025. The first evaluation round begins in August 2025, with subsequent rounds to follow through March 2026. Cash prizes ranging from $3,375 to $15,000 will be awarded to the top-performing teams in each round. Further details, including the timeline and eligibility criteria, are available on the challenge website.

The Predicting Replicability Challenge directly aligns with COS's mission to increase the openness, integrity, and reproducibility of scientific research. The resulting automated methods have the potential to revolutionize how researchers, reviewers, funders, and policymakers evaluate scientific claims, ultimately fostering greater public confidence in scientific research processes and findings.

###

About the Center for Open Science
Founded in 2013, the Center for Open Science (COS) is a nonprofit culture change organization with a mission to increase openness, integrity, and reproducibility of scientific research. COS pursues this mission by building communities around open science practices, supporting metascience research, and developing and maintaining free, open source software tools, including the Open Science Framework (OSF).

Media Contact: pr@cos.io
