Can machines determine the credibility of research claims? The Center for Open Science joins a new DARPA program to find out.

Feb. 5, 2019

The Center for Open Science (COS) has been selected to participate in DARPA’s new program, Systematizing Confidence in Open Research and Evidence (SCORE), under a three-year cooperative agreement potentially totaling more than $7.6 million. The program represents an investment by DARPA in assessing and improving the credibility of social and behavioral science research.

DARPA states that the purpose of SCORE is “to develop and deploy automated tools to assign ‘confidence scores’ to different SBS [social and behavioral science] research results and claims. Confidence scores are quantitative measures that should enable a DoD consumer of SBS research to understand the degree to which a particular claim or result is likely to be reproducible or replicable.” If successful, consumers of scientific evidence (researchers, funders, policymakers, and others) would have readily available information about the uncertainty associated with that evidence.

“Rapid assessments of research credibility could help inform researchers’ decisions about what to investigate, funders’ decisions about what to fund, and policymakers’ decisions about how to apply the research,” said Tim Errington, Director of Research at COS. “We are very excited to be part of this new program.”

To achieve this aspirational objective, the SCORE program will produce four primary artifacts:  

  1. A massive database of approximately 30,000 claims from published papers in the social and behavioral sciences. The database will be enhanced with evidence about those claims, extracted automatically and manually from the papers themselves and merged in from other sources, such as how often the work has been cited and whether the data are openly accessible or the research was preregistered (a minimal sketch of such a claim record appears after this list).
  2. Experts will review about 3,000 of those claims in surveys, panels, or prediction markets, scoring each for its likelihood of being a reproducible finding.
  3. Teams will use the database of information about the claims to develop algorithms (artificial intelligence) that score the same claims the experts scored.
  4. Hundreds of researchers will conduct replications on a sample of the claims to test the experts’ and algorithms’ ability to predict reproducibility.  
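
To make the structure of these steps concrete, here is a minimal, hypothetical sketch (in Python) of what a single claim record in such a database might look like, and how an algorithm’s confidence score could later be compared against a replication outcome. The field names and the 0-to-1 score scale are illustrative assumptions, not SCORE’s actual schema.

```python
# Hypothetical sketch of a SCORE-style claim record; field names and the
# 0-to-1 score scale are illustrative assumptions, not the program's schema.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ClaimRecord:
    claim_id: str                   # unique identifier for the claim
    paper_doi: str                  # DOI of the published paper
    claim_text: str                 # the claim as stated in the paper
    citation_count: int             # how often the work has been cited
    data_open: bool                 # whether the data are openly accessible
    preregistered: bool             # whether the research was preregistered
    expert_score: Optional[float] = None     # expert confidence (step 2), 0-1
    algorithm_score: Optional[float] = None  # algorithmic confidence (step 3), 0-1
    replicated: Optional[bool] = None        # replication outcome (step 4), if run

def prediction_error(record: ClaimRecord) -> Optional[float]:
    """Absolute gap between the algorithm's confidence score and the observed
    replication outcome (1.0 if the claim replicated, 0.0 if it did not)."""
    if record.algorithm_score is None or record.replicated is None:
        return None
    return abs(record.algorithm_score - float(record.replicated))
```

In this framing, the replication outcomes gathered in step 4 serve as ground truth against which both the expert scores (step 2) and the algorithmic scores (step 3) can be evaluated.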

The COS team, in collaboration with partners from the University of Pennsylvania and Syracuse University, is responsible for creating the database. Other teams will recruit experts and create the algorithms to evaluate the claims. And COS will coordinate a large-scale collaboration of researchers from every area of the social and behavioral sciences to conduct replication and reproducibility studies.

“DARPA’s investment signals the onset of the next phase of the reformation that is underway in the social and behavioral sciences,” said Brian Nosek, Executive Director of COS and Principal Investigator of the COS team. “For the last eight years, the research community has been scrutinizing the reproducibility of its findings and the quality of its research practices.  Now, that learning is being translated into opportunities to improve research practices to accelerate the pace of discovery.”

The goal of conducting hundreds of replications is daunting, but COS has gained substantial experience coordinating large-scale replication projects with contributions from hundreds of members of the research community. This program will extend those efforts to conduct replications across a variety of disciplines.

“The success of this program relies on the enthusiastic and capable contributions of many researchers working toward a common goal,” said Chris Chartier, Associate Professor at Ashland University and lead of the team sourcing researchers to conduct the replications. “There is such tremendous community spirit for improving research practices.”

Coordination of such a large-scale collaboration will be aided by the Open Science Framework (OSF; https://osf.io/), an open-source collaborative management tool, maintained by COS, that facilitates rigorous, reproducible research. “OSF is essential for us to be able to manage such a complex program with so many contributors, projects, and associated data,” noted Beatrix Arendt, Program Manager at COS. “We are committed to transparency of process and outcomes so that we are accountable to the research community to do the best job that we can, and so that all of our work can be scrutinized and reproduced for future research that will build on this work.”

As with any high-risk research program, it is unknown whether SCORE will achieve its ambitious aims. However, “the time is right to tackle this problem head-on,” said Nosek. “The only way that this program can fail to provide transformative insight into research credibility is if we fail to execute on building a useful database and conducting high-quality replications. Whatever the outcome, we will learn a ton about the state of science and how we can improve.”

Researchers interested in potentially joining this program to conduct replication or reproduction studies are encouraged to review the Call for Collaborators.


About the Center for Open Science

The Center for Open Science (COS) is a non-profit technology and culture change organization founded in 2013 with a mission to increase the openness, integrity, and reproducibility of scientific research. COS pursues this mission by building communities around open science practices, supporting metascience research, and developing and maintaining free, open source software tools. The OSF is a web application that addresses the challenges facing researchers who want to pursue open science practices, giving them the ability to manage their work, collaborate with others, discover and be discovered, preregister their studies, and make their code, materials, and data openly accessible. Learn more at cos.io and osf.io.


Contacts: