Interview: Randy McCarthy discusses his experiences with publishing his first Registered Report

October 15th, 2018, Randy McCarthy


Registered Reports is a publishing format in which peer review occurs before results are known. This changes the equation of scientific publishing: instead of trying to "fix" work that has already been completed, reviewers focus on answering the questions "Is this work worth doing?" and, if so, "What is the best way to answer it?" Authors are rewarded for the parts of the scientific process that they control (the importance and rigor of their work), not for the parts they do not control (the results).

Below, we asked Randy McCarthy from Northern Illinois University, the author of a recently published Registered Report in Comprehensive Results in Social Psychology, to describe his experience with the process and a bit about the research that led up to his recent article.

Tell us a little about your research background and your current research interests?

In graduate school, I worked in a social cognition lab. I primarily worked on a phenomenon known as spontaneous trait inferences. As I was finishing my dissertation I ended up getting a position at a research center that focused on family violence. This position was funded by contracts from the U.S. military to examine patterns of child maltreatment and partner abuse in military families. At the same time, this research center also had a few individuals who were using social-cognitive approaches to study child physical abuse. Pretty soon, I was collaborating with this group on a project examining the spontaneous trait inferences that parents make about children and how those inferences were moderated by parents’ risk for child physical abuse. It was a neat application of methods that I’d been using in graduate school to a more applied research question.

Now that I was working in this family violence research center, I began reading a bunch of social psychology research that was related to aggression. And I was closely following the conversations that were happening in our field about Open Science and the importance of replication studies. I conducted a few replications of a study that reported a so-called "heat-priming" effect: an effect in which exposure to heat-related words subsequently increased the accessibility of aggression-related cognitions.

I bring up my experience with this “heat-priming” replication attempt for a few reasons. First, it is difficult to interpret what a “failed” replication means for the theory that was being tested. So I followed up my replication attempts of the “heat-priming” effect with my first Registered Report (that I talk about below). Second, this “heat-priming” study had very similar methods to Srull and Wyer (1979). I just had the privilege of being one of the lead authors on a Registered Replication Report of Srull and Wyer (1979).  I’ve since had a few good conversations about the Srull and Wyer (1979) RRR. My current research is taking some of the ideas from these conversations and really testing if there is a set of conditions under which we can reliably detect the hostile priming effect.

This was kinda fun to write out the past 7 years or so of my research life. Looking back, my trajectory looks much more linear than it felt. Who knows what I’ll be doing in 2025.

Registered Reports are a format that we advocate for strongly here at COS. Can you tell us about the project that you submitted with this workflow?

As I mentioned above, I had previously tried to replicate a study demonstrating a "heat-priming" effect. The results from my replication attempts did not show the effect and were inconsistent with the results of the original study. Although I showed that this operationalization probably didn't reliably produce an effect, I still wasn't completely satisfied with what this meant for the theory being tested. First, I was always concerned that I had bungled something about the methods that affected my ability to detect the effect. I tried very hard to follow the methods of the original study as closely as possible, but it's possible there was something about the way that I ran my replications that made it so I didn't observe the effect. Second, the original methods didn't strike me as a particularly good way to test the hypothesis. Collectively, it just seemed like the original hypothesis wasn't interrogated very rigorously.

I was doing a pilot test for a separate study, so at the end of that pilot-testing procedure I decided to throw in a short task that I thought was a better test of the hypothesis that exposing individuals to heat-related words increases the accessibility of aggressive cognitions (without getting bogged down in the details, I will merely say that it was a highly repeated, within-participants task with a brief delay between the "prime" and the measurement of the prime's impact).

When I looked at the results of this new approach to studying the “heat-priming” effect I found ambiguous results. Specifically, heat-related primes seemed to increase the accessibility of aggressive cognitions relative to some of my comparison trials, but so did cold-related primes. The latter effect was not predicted. This was an interesting finding, but I wasn’t super confident in the effect. I mean, it was thrown onto the end of a separate study and it produced some unexpected results.

This was the beginning of my Registered Report. I used this initial study as pilot data in my Stage 1 submission. I basically said “There’s this ‘heat-priming’ effect that I’ve tried to study in the past. At the end of this longer and unrelated data collection procedure I tried a novel approach to studying it and got ambiguous results. Now I want to make the ‘heat-priming’ part a stand-alone study and I propose a close replication of my pilot study.”

What was your experience and what type of feedback did you receive when you sent in your proposed research plan?

The feedback was extremely helpful. There were some tweaks to the methods, proposed analyses, and ways to strengthen the introduction to the manuscript. A few specific changes were increasing the number of trials each participant completed, the wording of the awareness checks, and we added a possible moderator of the effect. One of the peer reviewers also introduced me to an R package to conduct power analyses for multi-level models. I can honestly say that it was the most collegial peer-review process I’ve ever been involved in and the review process made my proposal stronger.

After you received an in-principle acceptance from the journal, was anything different than normal when you were conducting the actual project?

Yes and no. The logistics and mechanics of collecting data were pretty much the same. However, my mindset changed when I was collecting data with the IPA in hand. Historically, when collecting data I would be hoping to get results that would be "publishable." You could get "lucky," if you want to call it that, and get results that make it likely you will get a publication at the first journal you submit to. However, I was always worried that a project would be one of those where you get rejected at one journal, then a second journal, then a third journal, etc. It can be really frustrating because it takes a lot of time to get a manuscript to the point where it can even be submitted. And you have sunk costs in projects that can linger on for years, so you have a hard time letting them go.

With an IPA in-hand, my mind was merely focused on executing the methods that I proposed. It was a much healthier and less stressful mindset.

What happened in your project that didn’t go according to plan? Were there any changes from your expected plan that you had to discuss with the editor? How did you handle that with them?

Things went pretty smoothly, so there were just a few minor tweaks to the proposed project. First, I ended up getting a small grant to support my project, so I switched the way in which we compensated the participants. I merely emailed the editor to ask if this modification was OK. They said “yes and congrats on the small grant.” Second, although I reached my overall sample size goal, I had a few more participants than I expected who were ultimately omitted (for failing awareness checks, not completing all of the items, etc.).

The editors and peer-reviewers are researchers too. So they know that these sorts of minor things happen in any research project and were very understanding.

What advice would you give to someone who wants to try Registered Reports for the first time? What did you wish you knew before that you know now?

Be prepared to move more of the work to an earlier point in your research workflow. At least for me, a Registered Report involved more thinking and more writing at an earlier point in the research process. Also, although it really wasn't an issue with my Registered Report, I would tell others that if they try a Registered Report, they should be comfortable with the idea that peer reviewers may try to change some aspects of the study. In the traditional peer-review model, authors try to convince reviewers that what they did was appropriate. In Registered Reports, authors try to convince reviewers that what they plan to do is appropriate.

Would you recommend this to a colleague? For what type of work?

Yes. Registered Reports are merely a characteristic of the methods. When thought of this way, RRs are like other methodological characteristics such as double-blinding, manipulation checks, etc. RRs help us control for certain biases that may come up in the peer-review process and allow us to have solid evidence for which aspects of a study were planned and which were not. Registered Reports are applicable to all types of empirical work where you want to have something in the methods to control for those biases.  

As a heuristic for whether you would like to try Registered Reports, ask yourself, "As a consumer of research, what would I want to see in other people's research?" In general, I would like to read other people's research that was a product of a Registered Reports process.

Anything else you'd like to share?

I would say to try it if you are skeptical and make up your own mind. I understood the arguments for Registered Reports beforehand, but I was really sold after experiencing it. It’s hard to verbalize why, but I can tell you that the Registered Reports process just felt “right.”

Related reading: Why pre-registration might be better for your career and well-being
