Registered Reports is a publishing format in which peer review occurs before the results are known. This changes the equation of scientific publishing. Instead of trying to "fix" work that has already been completed, reviewers focus on answering two questions: Is this work worth doing? And if so, what is the best way to do it? Authors are rewarded for the parts of the scientific process that they control, the importance and rigor of their work, rather than for the part they do not control: the results.
Below, we asked Randy McCarthy from Northern Illinois University, the author of a recently published Registered Report in Comprehensive Results in Social Psychology, to describe his experience with the process and a bit about the research that led up to his recent article.
Tell us a little about your research background and your current research interests.
In graduate school, I worked in a social cognition lab. I primarily worked on a phenomenon known as spontaneous trait inferences. As I was finishing my dissertation I ended up getting a position at a research center that focused on family violence. This position was funded by contracts from the U.S. military to examine patterns of child maltreatment and partner abuse in military families. At the same time, this research center also had a few individuals who were using social-cognitive approaches to study child physical abuse. Pretty soon, I was collaborating with this group on a project examining the spontaneous trait inferences that parents make about children and how those inferences were moderated by parents’ risk for child physical abuse. It was a neat application of methods that I’d been using in graduate school to a more applied research question.
Now that I was working in this family violence research center, I began reading a bunch of social psychology research related to aggression. I was also closely following the conversations happening in our field about Open Science and the importance of replication studies. I conducted a few replications of a study that reported a so-called “heat-priming” effect: an effect whereby exposure to heat-related words subsequently increased the accessibility of aggression-related cognitions.
I bring up my experience with this “heat-priming” replication attempt for a few reasons. First, it is difficult to interpret what a “failed” replication means for the theory being tested, so I followed up my replication attempts of the “heat-priming” effect with my first Registered Report (which I talk about below). Second, this “heat-priming” study used methods very similar to Srull and Wyer (1979), and I recently had the privilege of being one of the lead authors on a Registered Replication Report of Srull and Wyer (1979). I’ve since had a few good conversations about the Srull and Wyer (1979) RRR. My current research takes some of the ideas from those conversations and tests whether there is a set of conditions under which we can reliably detect the hostile priming effect.
It was kinda fun to write out the past 7 years or so of my research life. Looking back, my trajectory looks much more linear than it felt. Who knows what I’ll be doing in 2025.
Registered Reports are a format that we advocate for strongly here at COS, can you tell us about the project that you submitted with this workflow?
As I mentioned above, I had previously tried to replicate a study demonstrating a “heat-priming” effect. The results from my replication attempts did not show the effect and were inconsistent with the results of the original study. Although I showed this operationalization probably didn’t reliably produce an effect, I still wasn’t completely satisfied with what this meant for the theory being tested. First, I was always concerned that I bungled something about the methods that affected my ability to detect the effect. I tried very hard to follow the methods of the original study as closely as possible, but it’s possible there was something about the way that I ran my replications that made it so I didn’t observe the effect. Second, the original methods didn’t strike me as a particularly good way to test the hypothesis. Collectively, it just seemed like the original hypothesis wasn’t interrogated very rigorously.
Around this time, I was doing a pilot test for a separate study. At the end of that pilot procedure, I decided to throw in a short task that I thought was a better test of the hypothesis that exposing individuals to heat-related words increases the accessibility of aggressive cognitions (without getting bogged down in the details, I will merely say that it was a highly repeated, within-participants task with a brief delay between the “prime” and the measurement of its impact).
When I looked at the results of this new approach to studying the “heat-priming” effect I found ambiguous results. Specifically, heat-related primes seemed to increase the accessibility of aggressive cognitions relative to some of my comparison trials, but so did cold-related primes. The latter effect was not predicted. This was an interesting finding, but I wasn’t super confident in the effect. I mean, it was thrown onto the end of a separate study and it produced some unexpected results.
This was the beginning of my Registered Report. I used this initial study as pilot data in my Stage 1 submission. I basically said “There’s this ‘heat-priming’ effect that I’ve tried to study in the past. At the end of this longer and unrelated data collection procedure I tried a novel approach to studying it and got ambiguous results. Now I want to make the ‘heat-priming’ part a stand-alone study and I propose a close replication of my pilot study.”
What was your experience and what type of feedback did you receive when you sent in your proposed research plan?
The feedback was extremely helpful. There were some tweaks to the methods and proposed analyses, and suggestions for strengthening the introduction to the manuscript. A few specific changes included increasing the number of trials each participant completed, rewording the awareness checks, and adding a possible moderator of the effect. One of the peer reviewers also introduced me to an R package for conducting power analyses for multi-level models. I can honestly say that it was the most collegial peer-review process I’ve ever been involved in, and the review process made my proposal stronger.
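The interview doesn't name the package, but for readers curious what a simulation-based power analysis for a multi-level model can look like, here is a minimal sketch using the simr package, one common option in R. The design, variable names, effect sizes, and sample sizes below are all hypothetical illustrations, not details from the actual study.

```r
# A minimal sketch of a simulation-based power analysis for a
# multi-level (mixed-effects) model using the simr package.
# All design details below (effect size, sample sizes, variable
# names) are hypothetical.
library(lme4)
library(simr)

# Simulate hypothetical pilot data: repeated trials nested within
# participants, with a within-participant prime condition.
set.seed(1)
n_subj   <- 30
n_trials <- 40
d <- expand.grid(subj = factor(1:n_subj), trial = 1:n_trials)
d$prime <- factor(rep(c("heat", "neutral"), length.out = nrow(d)))
subj_intercepts <- rnorm(n_subj, sd = 20)
d$rt <- 500 - 15 * (d$prime == "heat") +
  subj_intercepts[d$subj] + rnorm(nrow(d), sd = 50)

# Fit a random-intercept model to the simulated pilot data.
fit <- lmer(rt ~ prime + (1 | subj), data = d)

# Estimate power to detect the prime effect at this sample size,
# using a likelihood-ratio test over simulated data sets.
powerSim(fit, fixed("prime", "lr"), nsim = 100)

# Extend the design to more participants and trace a power curve.
fit_big <- extend(fit, along = "subj", n = 80)
powerCurve(fit_big, fixed("prime", "lr"), along = "subj", nsim = 100)
```

The appeal of the simulation approach is that power is estimated from the same model you plan to fit, rather than from a closed-form formula that may not match a nested design.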
After you received an in-principle acceptance from the journal, was anything different than normal when you were conducting the actual project?
Yes and no. The logistics and mechanics of collecting data were pretty much the same. However, my mindset changed when I was collecting data with the IPA in hand. Historically, when collecting data I would be hoping to get results that would be “publishable.” You could get “lucky,” if you want to call it that, and get results that make it likely the first journal you submit to will publish them. However, I was always worried that a project would be one of those where you get rejected at one journal, then a second journal, then a third journal, etc. It can be really frustrating because it takes a lot of time to get a manuscript to the point where it can even be submitted. And you have sunk costs in projects that can linger for years, so you have a hard time letting them go.
With an IPA in hand, my mind was focused merely on executing the methods I had proposed. It was a much healthier and less stressful mindset.
What happened in your project that didn’t go according to plan? Were there any changes from your expected plan that you had to discuss with the editor? How did you handle that with them?
Things went pretty smoothly, so there were just a few minor tweaks to the proposed project. First, I ended up getting a small grant to support my project, so I switched the way in which we compensated the participants. I merely emailed the editor to ask if this modification was OK. They said “yes and congrats on the small grant.” Second, although I reached my overall sample size goal, I had a few more participants than I expected who were ultimately omitted (for failing awareness checks, not completing all of the items, etc.).
The editors and peer reviewers are researchers too, so they know these sorts of minor things happen in any research project, and they were very understanding.
What advice would you give to someone who wants to try Registered Reports for the first time? What did you wish you knew before that you know now?
Be prepared to move more of the work to an earlier point in your research workflow. At least for me, a Registered Report involved more thinking and more writing earlier in the research process. Also, although it really wasn’t an issue with my Registered Report, I would tell others who try a Registered Report to be comfortable with the idea that peer reviewers may try to change some aspects of the study. In the traditional peer-review model, authors try to convince reviewers that what they did was appropriate. In Registered Reports, authors try to convince reviewers that what they plan to do is appropriate.
Would you recommend this to a colleague? For what type of work?
Yes. Registered Reports are merely a characteristic of the methods. When thought of this way, RRs are like other methodological characteristics such as double-blinding, manipulation checks, etc. RRs help us control for certain biases that may come up in the peer-review process and allow us to have solid evidence for which aspects of a study were planned and which were not. Registered Reports are applicable to all types of empirical work where you want to have something in the methods to control for those biases.
As a heuristic for whether you would like to try Registered Reports, ask yourself, “As a consumer of research, what would I want to see in other people’s research?” In general, I would like to read other people’s research that was the product of a Registered Reports process.
Anything else you’d like to share?
I would say to try it if you are skeptical and make up your own mind. I understood the arguments for Registered Reports beforehand, but I was really sold after experiencing it. It’s hard to verbalize why, but I can tell you that the Registered Reports process just felt “right.”
Related reading: Why pre-registration might be better for your career and well-being