The Changing Landscape of Psychological Research

Arden University’s Psychology Lecturer, James Bartlett, takes a deep-dive into the changing landscape of psychological research.
 


The last decade has not been kind to psychology. There have been falls from grace by influential researchers such as Diederik Stapel, who fabricated data across dozens of studies, and there has been a ‘replication crisis’, in which the results of many high-profile studies have failed to replicate.

In response to these events, the way psychological research is performed and reported has changed. Therefore, as students and scholars of psychology, it’s important you are aware of the current developments in psychological research.

The biggest challenge psychology has faced is the replication crisis: the realisation that many results could not be repeated by another researcher (Nelson, Simmons, & Simonsohn, 2018). Psychology journals also prefer to publish novel research that shows an effect of some sort, such as a difference between groups or a relationship between variables. It is much harder to publish research that simply replicates a previous article, or that reports a null effect (no difference or relationship). This is known as publication bias, and it has a particularly damaging effect on people’s perceptions of psychological research.

One of the best examples of publication bias stems from an article by Daryl Bem purporting to show evidence of precognition, or the ability to see into the future (Bem, 2011). Although many people were sceptical of the findings, it was published in one of the most prestigious psychology journals and it appeared to follow all the tried and tested methods. However, when a group of researchers attempted - and failed - to replicate the study (Ritchie, Wiseman, & French, 2012), the journal that originally published Bem’s article rejected it on the grounds that it “does not publish replications” (French, 2012).

A few years later, a giant team of researchers tried to replicate 100 articles published in high-profile psychology journals. The headline finding was that only 39 of the 100 studies (39%) replicated the original finding (Open Science Collaboration, 2015). A more recent attempt at replicating a series of articles found similarly subpar results, with only 14 of 28 findings (50%) being replicated (Klein et al., 2018).

Although there is no consistent agreement over the extent of the problem, with some researchers suggesting there is nothing to worry about, this period of reflection has forced psychology to examine its publishing and reporting practices. The result is genuine optimism, as contemporary research across psychology is being produced with greater rigour.

To target these problems, there is a growing culture of ‘open science’. This means that, instead of the research process being closed off, it is made more transparent. Starting in the design phase, authors pre-register the design of their study and how they plan to analyse the data (van ’t Veer & Giner-Sorolla, 2016). This involves writing a document that outlines all of these plans and archiving it with a time stamp, which can then be cited in the final article.

An extension of this process which is gaining momentum is publishing a registered report, where pre-registration is combined with the publishing process. Authors submit their rationale and methods to a journal, known as a stage one submission. If this submission is satisfactory, the journal will offer in-principle acceptance (Chambers, 2017). Registered reports help with publication bias, as the final results do not impact the journal’s decision to publish the research. 

The next step in making research more transparent is opening access to the data and the final article. If you read the fine print when you submit an article, many journals require you to make the data available for other researchers to confirm your findings. However, this is rarely followed in practice (Vanpaemel, Vermorgen, Deriemaecker, & Storms, 2015). To encourage better adoption, some journals offer incentives such as awarding badges to articles whose authors deposit their data. Think of these as scout badges that demonstrate good behaviour.

It is also important to provide more people with access to the final article. Journal subscriptions are incredibly expensive for university libraries, and the general public are not going to pay to access a journal article. Therefore, one initiative is to make more articles open access. This can be done via the journal, but it usually requires you to pay an expensive article processing charge. Alternatively, you can make your own research open access by depositing your author version of the manuscript on websites such as PsyArXiv or ResearchGate. Most journals allow you to share your own version of the manuscript, but not the final version included in the journal. You can check whether a journal allows you to do this using the Sherpa/Romeo database.

Beyond the influence of individual researchers or small groups, one of the most interesting developments in psychology is the rise of projects containing multiple research groups. The majority of psychology research can be criticised for having too few participants, and for focusing predominantly on WEIRD (Western, educated, industrialised, rich and democratic) participants. This means the majority of articles include a modest sample of American or European university students.

This is usually rationalised on the grounds that these are the easiest populations to sample, and that scouring the earth for more diverse samples would require more resources than individual research groups can handle. The most ambitious project to tackle this problem is the Psychological Science Accelerator: a network of hundreds of research groups in over 70 countries. Rather than the burden of collecting larger, more diverse samples falling on one research group, many groups from different countries collect data for the same project and pool it together. This is a fascinating project that could revolutionise how psychological research is performed, and it represents a potential shift in the ecosystem of science, as informative team research may come to be valued over individual success.

Now that you are aware of some of the issues, you might want to incorporate some of these practices into your own research. A colleague and I wrote an article aimed at postgraduate students, outlining how you can make your research more transparent with limited time on your hands (Bartlett & Eaves, 2019).

The article includes further references you can look to for advice on open science practices. The main piece of advice is not to bite off more than you can chew. There are a lot of new developments out there, and it can be tempting to include everything at once. However, since you are on a learning curve, it is better to improve the level of transparency gradually, to stop it becoming overwhelming and counterproductive. Even if you do not include any of these developments in your own research, you should now be more aware of how research is changing in the wider landscape of psychology.
 


For a full list of references for this article, please contact James on jbartlett@arden.ac.uk.

This article was taken from the Arden Psychology Newsletter. If you’re an Arden student or member of staff, and would be interested in writing a topical piece for the newsletter, please contact Holly Stokes on hstokes@arden.ac.uk.