SCORE

Systematizing Confidence in Open Research and Evidence

Assessing the credibility of research claims is a central and continuous part of the scientific process. However, current assessment strategies often require substantial time and effort. To accelerate research progress, the Center for Open Science (COS) partnered with the Defense Advanced Research Projects Agency (DARPA) in 2019 on its Systematizing Confidence in Open Research and Evidence (SCORE) program, working to develop and deploy automated tools that provide rapid, scalable, and accurate confidence scores for research claims.

Since then, COS has completed the manual extraction of scientific claims from a stratified sample of social-behavioral science papers. In total, 7,066 claims were extracted, enabling confidence scores to be assigned by human forecasters and algorithms. Concurrently, COS worked with hundreds of researchers to conduct replications and reproductions of a subset of these extracted claims. The team leveraged the OSF for this large-scale collaboration so that materials from the replication and reproduction efforts could be made openly available.

Data collection for this project has now concluded, and COS is working with close collaborators to compile and enhance the project’s data, continue coding core project outcomes, and draft reports on what was learned about replicability, reproducibility, and robustness, as well as the potential for experts, forecasters, and algorithms to contribute scalable tools for research assessment.

These reports should be publicly available in 2025. Please contact Tim Errington (tim@cos.io) or Andrew Tyner (andrewtyner@cos.io) for more information.