Wednesday, 22 October 2014

Do people really not know what running looks like?

[Image caption: Faster, higher, stronger - wobbly!]
When we run, our arms and legs swing in an alternating rhythm: the left arm swings back as the left leg swings forward, and likewise on the right. This contralateral rhythm is important for balance; the arms and legs counterbalance each other and help reduce the rotation of the torso created by swinging the limbs.

It turns out, however, that people don't really know this, and they get running wrong surprisingly often when they draw it. Specifically, they often depict people running with a homolateral gait (the arm and leg on the same side swinging in the same direction at the same time; see the Olympics poster). I commented on a piece by Rose Eveleth at The Atlantic about a paper (Meltzoff, 2014) that identifies this surprising confusion in art throughout history and all over the world, and then reports some simple studies showing that people really don't know what running is supposed to look like.


Rose covered the topic well; here I want to critique the paper a little, because it's a nice example of some flawed cognitive-psychology-style thinking. That said, I did like this paper. It's that rare thing: a paper by a single author who just happened to notice something, think about it a little, and then report what he found in case anyone else thought it was cool too. This is a bit old school and I approve entirely.

Tuesday, 14 October 2014

Your hand is not a perceptual ruler

Visual perception has a problem; it doesn't come with a ruler. 

Visual information is angular, and the main consequence of this is that the apparent size of something varies with how far away it is. This means you can't tell how big something actually is without more information. For example, the Sun and the Moon have radically different actual sizes, but because of the huge difference in how far away they are, they have almost exactly the same angular size; this is why solar eclipses work. (Iain Banks suggested in 'Transition' that solar eclipses on Earth would be a great time to look for aliens among us, because it's such a staggering coincidence that they work out at all that they would make for great tourism.)
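If you want to see the coincidence in numbers, here's a quick back-of-the-envelope check. This is just my illustration using rounded published figures, not anything from the papers discussed here:

```python
import math

def visual_angle_deg(diameter, distance):
    """Angular size (degrees) of an object of a given physical diameter
    viewed from a given distance (both in the same units)."""
    return math.degrees(2 * math.atan((diameter / 2) / distance))

# Rough figures in km: the Sun is ~400x wider than the Moon, but it is
# also ~400x further away, so the two angles nearly coincide.
print(visual_angle_deg(1_391_000, 149_600_000))  # Sun:  ~0.53 degrees
print(visual_angle_deg(3_474, 384_400))          # Moon: ~0.52 degrees
```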

This lack of absolute size information is a problem, because we need to know how big things actually are in order to interact with them. When I reach to grasp my coffee cup, I need to open my hand wide enough that I don't hit the cup and knock it over. I can, in fact, do this: as my reach unfolds over time, my hand opens to a maximum aperture wide enough to go round the object I'm reaching for (e.g. Mon-Williams & Bingham, 2011). The system therefore has access to some additional information it can use to convert angular size into metric size. This process is called calibration, and people who study calibration are interested in what that extra information is.
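To see why angular size on its own won't do, here is the basic geometry. This is just my sketch of the standard trigonometry, not anything from Mon-Williams & Bingham:

```python
import math

def metric_size(visual_angle_deg, distance):
    """Physical size of an object subtending a given visual angle at a
    given distance: size = 2 * distance * tan(angle / 2)."""
    return 2 * distance * math.tan(math.radians(visual_angle_deg) / 2)

# The same 5-degree image could be a small cup close by or a much larger
# jug further away; the angle alone can't tell these apart.
for distance_m in (0.3, 0.6, 1.2):
    print(f"{distance_m} m away -> {metric_size(5, distance_m) * 100:.1f} cm wide")
```

The one-to-many mapping from angle to size is exactly the gap that calibration has to fill.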


The ecological approach to calibration (see anything on the topic by Geoff Bingham) doesn't treat this as a process of 'detect angular size, detect distance, combine and scale', of course. Instead, calibration uses some information to tune up the perception of other information, so that the latter is detected in the calibrated unit. The unit will be task-specific, because calibration needs information and tasks only offer information about themselves. A commonly discussed unit (used for scaling the perception of long distances) is eye height: there is information for it in the optic array, and it provides a fairly functional ruler for measuring distances out beyond reach space.
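For the curious, here is one textbook way the eye-height ruler is often illustrated, assuming a target sitting on flat ground. This is my sketch of that standard geometry, not Bingham's specific analysis:

```python
import math

def distance_in_eye_heights(declination_deg):
    """For a target on flat ground, the angle of declination below the
    horizon specifies how far away it is in eye-height units:
    distance / eye_height = 1 / tan(declination)."""
    return 1 / math.tan(math.radians(declination_deg))

for angle in (45, 10, 2):
    print(f"{angle} degrees below the horizon -> "
          f"{distance_in_eye_heights(angle):.1f} eye heights away")
```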


Linkenauger et al (2014) take a slightly different approach. They suggest that what the system needs is something it carries with it and that remains constant (not just constantly specified, as eye height is). They present some evidence that the dominant hand is perceived to be a fairly constant length even when magnified, and they suggest that this length is stored and used by the system to calibrate size perception in reach space. There are, let's say, a few problems with this paper.

Wednesday, 8 October 2014

Limits on action priming by pictures of objects

If I show you a picture of an object with a handle and ask you to make a judgment about that object (say, whether it's right side up or not), you will be faster to respond if you use the hand closest to the handle. This is called action priming (Tucker & Ellis, 1998), and there is now a wide literature using this basic setup to investigate how the perception of affordances prepares the action system to do one thing rather than another.

There is, of course, a problem here. These studies all use pictures of objects, and pictures are not the same as the real thing. These studies therefore don't tell us anything about how the perceived affordances of objects make us ready to act on those objects. This is only a problem because it is what these researchers think they are studying, which means they don't pay attention to the nature of their stimuli; the result is a mixed bag of findings.

For example, a recent article (Yu, Abrams & Zacks, 2014) set out to use this task to ask whether action priming was affected by where the hand had to go to make a response. Most tasks involve a simple button press on a keyboard, so they were interested to see whether asking people to respond using buttons on the monitor might enhance priming; the logic was that the spatial location of the response would be an even stronger match or mismatch to the location of the object's handle. However, they accidentally discovered that a) action priming is not reliably replicable, and b) the factor that seems to determine whether it shows up is a confounding task demand. This again highlights just what a problem this experimental setup is.