Friday 17 August 2012

The Small Effect Size Effect - Why Do We Put Up With Small Effects?

Small effects sometimes matter - but psychology can do better
One of the things that bugs me about 'embodied' cognition research is that the effects, while statistically significant, tend to be small. What this means is that the groups did differ in the direction the authors claim, but only slightly, and only because the authors tested enough people for that small difference to come out on average.

The problem with small effect sizes is that they mean all you've done is nudge the system. The embodied nervous system is exquisitely sensitive to variations in the flow of information it is interacting with, and it's not clear to me that merely nudging such a system is much of an achievement. What's really impressive is when you properly break it: if you can alter the information in a task so that the task becomes impossible for an organism, then you have found something the system considers really important. The reverse is also true, of course: if you find the right way to present the information the system needs, then performance should become trivially easy.

Psychology has become enthralled by statistical significance (to the point that we're possibly gaming the system in order to cross this magical marker). If your effect comes with a p value of less than .05, it counts as interesting, regardless of how small the effect is in functional terms. This is a problem, and we don't have to put up with it. If you ask a question about the right thing, you should get an unambiguous answer; if your answer is ambiguous, you may not be asking about the right thing.
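To see the arithmetic behind this complaint, here's a minimal simulation (my own illustration, not anything from the studies I'll discuss; the function `two_sample_p` is a hypothetical helper, and it uses a normal approximation to the t-test rather than a proper t distribution). The point it sketches: hold a tiny standardised effect fixed at d = 0.1 and the p value will typically drift below .05 as you simply add participants, without the effect ever getting functionally bigger.

```python
import math
import random

def two_sample_p(n, effect_size, seed=0):
    """Simulate two groups of size n whose true means differ by
    `effect_size` standard deviations, then return an approximate
    two-tailed p value (normal approximation to the t statistic)."""
    rng = random.Random(seed)
    a = [rng.gauss(0.0, 1.0) for _ in range(n)]
    b = [rng.gauss(effect_size, 1.0) for _ in range(n)]

    def mean(xs):
        return sum(xs) / len(xs)

    def var(xs):  # sample variance
        m = mean(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    se = math.sqrt(var(a) / n + var(b) / n)
    t = (mean(b) - mean(a)) / se
    # Two-tailed p via the normal CDF; fine for the large n at issue here
    return math.erfc(abs(t) / math.sqrt(2))

# Same tiny 'nudge' (d = 0.1) every time; only the sample size changes.
# p typically shrinks with n, crossing .05 once the sample is big enough:
for n in (20, 500, 5000):
    print(n, round(two_sample_p(n, 0.1), 4))
```

Nothing about the system being studied changes across those three runs; only the experimenter's persistence does.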

I want to remind readers of a couple of examples of nuisance small effects I've covered here before, then talk a little about some work that either broke or fixed the right thing, to show that we don't actually have to suffer under the tyranny of the small effect effect.