I think Daryl Bem has done psychology an enormous favour. Possibly even two.
As you probably know, Bem is the author of 'Feeling the future: Experimental evidence for anomalous retroactive influences on cognition and affect', a paper that claims to have found evidence for precognition by running standard psychological experiments in reverse, demonstrating small but statistically reliable effects on behaviour from stimuli presented after the response was made. I posted briefly about it here, and otherwise it's been all over the internet for months.
It's back on my radar because several psychologists, including Richard Wiseman, recently submitted a failure to replicate the studies to the Journal of Personality & Social Psychology (JPSP), which is where Bem published his work. As reported here, Eliot Smith, the editor, refused even to send it (along with another, successful, replication) out for review. The reason Smith gives is that JPSP is not in the business of publishing mere replications: it prioritises novel results, and he suggests the authors take their work to other (presumably lesser) journals. This is nothing new; flagship journals like JPSP all have policies like this in place. But it's not a good look, and it got me thinking.
The Low Bar
Bem's first favour is, I think, to have exposed the extraordinarily low bar for publication in psychology. With no coherent central theory, JPSP was able to justify publishing Bem's paper because he followed the rules:
- The experiments were appropriate to the question.
- The experimental tasks he adapted were all standard reference points in the literature.
- The analyses were all standard: Bem ran the correct General Linear Model statistics on his reaction time data.
- The paper was meticulous in its detail: the studies are fully replicable and reported in clear, straightforward language.
In fact, I think JPSP almost had no choice but to publish the paper, because the only reason it deserved to fail peer review was that it asked a ridiculous question. But the reasons that make it ridiculous (the literal, physical impossibility of what Bem was proposing) live outside the general bounds of psychology, and our theorising simply isn't sufficiently well connected to physics, chemistry, or biology for that information to be allowed into the decision-making process. As evidence, let me quote the editorial which accompanied Bem's article:
We openly admit that the reported findings conflict with our own beliefs about causality and that we find them extremely puzzling. Yet, as editors we were guided by the conviction that this paper—as strange as the findings may be—should be evaluated just as any other manuscript on the basis of rigorous peer review. Our obligation as journal editors is not to endorse particular hypotheses but to advance and stimulate science through a rigorous review process. [italics added]
Think about that: the data conflict with what the editors of a psychology journal believe about causality. They couldn't bring themselves to refer to all the empirical data physics has about causality. They couldn't even rule the paper out based on Bem's handwaving towards quantum mechanics in his Discussion section, which appeals to vague parallels between psi claims and quantum weirdness and doesn't come close to proposing a plausible mechanism. JPSP didn't require Bem to provide a mechanism for his outrageous claim, yet they were still able to publish the paper.
This is, of course, a ludicrous place for psychology to find itself. Psychology needs to align its explanations with the natural sciences (at least with biology, if not physics), and in the meantime we need to stop thinking we know what we're talking about when we witter about quantum mechanics.
Replication is key in science. Findings become robust and reliable only once they have survived various attempts to break them. Bem's second favour is to expose the well known secret that major journals simply won't publish replications. This is a real problem: in this age of Research Excellence Frameworks and other assessments, the pressure is on people to publish in high impact journals. Careful replication of controversial results is therefore good science but bad research strategy under these pressures, so these replications are unlikely to ever get run. Even when they do get run, they don't get published, further reducing the incentive to run these studies next time. The field is left with a series of "exciting" results dangling in mid-air, connected only to other studies run in the same lab.
JPSP seems to think it has done its bit to be 'fair and balanced' by publishing the Wagenmakers et al. (2011) commentary on the problems with Bem's GLM analyses alongside his paper. But think about the message JPSP is sending to authors. That message is 'we will publish your crazy story if it's new, but not your sensible story if it's merely a replication'. A journal that couldn't find a reason to bounce Bem on sight has no right bouncing better science on this basis.
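The Type I error worry behind the Wagenmakers et al. commentary is easy to make concrete. Here's a quick back-of-the-envelope simulation (mine, not from either paper): each null "psi" experiment is just guessing at chance, each individual significance test fires about 5% of the time as designed, but run a handful of studies and the odds of at least one spurious "hit" climb fast.

```python
import random
import math

random.seed(1)

def null_experiment(n_trials=100, p_chance=0.5):
    """One null 'psi' experiment: n_trials guesses at pure chance.
    Returns a two-sided p-value via a normal approximation to the binomial."""
    hits = sum(random.random() < p_chance for _ in range(n_trials))
    z = (hits - n_trials * p_chance) / math.sqrt(n_trials * p_chance * (1 - p_chance))
    # two-sided p-value: 2 * (1 - Phi(|z|)), with Phi the standard normal CDF
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

n_experiments = 10_000
false_positives = sum(null_experiment() < 0.05 for _ in range(n_experiments))
frac = false_positives / n_experiments
print(f"Fraction of null experiments 'significant': {frac:.3f}")  # roughly 0.05

# Across nine independent studies (Bem ran nine), the chance of at
# least one false positive at alpha = .05 is 1 - 0.95^9, about 37%.
print(f"Chance of >=1 false positive in 9 studies: {1 - 0.95**9:.2f}")
```

None of this needs anything beyond the standard library; it's just the arithmetic of alpha = .05 applied repeatedly, which is why failed replications that never get published are so corrosive.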
So thanks, Professor Bem. Your ridiculous research and the Type I errors you managed to find have, at least, gotten everyone talking about some very real problems in scientific publishing, and you've opened a discussion for psychologists about how we should be analysing our data. Bet you didn't see that coming.
Bem, D. (2011). Feeling the future: Experimental evidence for anomalous retroactive influences on cognition and affect. Journal of Personality and Social Psychology, 100(3), 407-425. DOI: 10.1037/a0021524
Wagenmakers, E.-J., Wetzels, R., Borsboom, D., & van der Maas, H. L. (2011). Why psychologists must change the way they analyze their data: The case of psi: Comment on Bem (2011). Journal of Personality and Social Psychology, 100(3), 426-432. DOI: 10.1037/a0022790