There is, of course, a problem here. These studies all use pictures of objects, and pictures are not the same as the real thing. These studies therefore don't tell us anything about how the perceived affordances of objects make us ready to act on those objects. This is only a problem because that is what these researchers think they are studying, which means they don't pay attention to the nature of their stimuli. The result is a mixed bag of findings.
For example, a recent article (Yu, Abrams & Zacks, 2014) set out to use this task to ask whether action priming is affected by where the hand has to go to make a response. Most tasks involve a simple button press on a keyboard, so the authors were interested in whether asking people to respond using buttons on the monitor might enhance priming. The logic was that the spatial location of the response would then be an even stronger match or mismatch to the location of the object's handle. However, they accidentally discovered that a) action priming is not reliably replicable and b) the factor that seems to determine whether it shows up is a confounding task demand. This again highlights just what a problem this experimental setup is.
The first experiment asked participants to view pictures of tools with a handle on one side or the other and also pictures of natural objects (fruits, animals; all symmetrical). The task was to press the left button if the object was manmade, and the right button if it was not. Buttons were either keys on the keyboard, buttons on the left or right of the monitor, or buttons on the lower part of the front of the monitor. In the last condition, participants had to reach with their dominant hand to press the button; in the first two, the participants rested their fingers on the buttons prior to the start of the trial.
Each trial presented an object, either a tool or a natural object, in one of two orientations (normal or mirror reversed along the vertical axis). This meant that for tools, the response 'manmade' using the left button was either compatible or incompatible with the side the handle was on. This should lead to action priming, which should get stronger as the task moved from key presses to button presses on the side of the monitor to reaching to press a button.
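To make the compatibility logic concrete, here is a minimal sketch of how trials in this kind of design are typically coded and how the priming effect is scored. The reaction times and column names are invented for illustration; this is not Yu et al.'s actual data or analysis pipeline.

```python
import pandas as pd

# Invented example trials: each row is one tool picture presentation.
trials = pd.DataFrame({
    "handle_side":   ["left", "right", "left", "right"],
    "response_side": ["left", "left", "right", "right"],
    "rt_ms":         [512, 534, 545, 509],  # made-up reaction times
})

# A trial counts as 'compatible' when the handle is on the same side as
# the required response, even though that response is a button press
# rather than a grasp of the handle.
trials["compatible"] = trials["handle_side"] == trials["response_side"]

# Action priming predicts faster compatible responses, so this difference
# should be positive; a reliably negative value would mean compatible
# responses were slower.
priming_ms = (trials.loc[~trials["compatible"], "rt_ms"].mean()
              - trials.loc[trials["compatible"], "rt_ms"].mean())
print(f"Action priming effect: {priming_ms:.0f} ms")
```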
What they found was no action priming at all (Figure 1). The only significant effects were three negative compatibility effects (NCEs), where the compatible response location was actually slower (see Wilson, Tresilian & Schlaghecken, 2010, for a discussion of the NCE).
Experiments 2 and 3 then proceeded to make the methods closer and closer to the Tucker & Ellis (1998) paper they were trying to replicate.
- Experiment 2 changed the task to a judgement of 'upright' vs 'inverted' and used only the tool pictures. Participants again had to reach to respond, but this time used the left hand to reach to the left button and the right hand to reach to the right button, and the pictures either stayed on the screen during the reach or were turned off when the reach began. Again they found no action priming; in fact, no differences anywhere as a function of handle location/response location compatibility.
- Experiment 3 adapted Experiment 2. The response now used two buttons on which participants rested their fingers, and the stimulus set was reduced in size. Because Tucker & Ellis (1998) had told their participants that upside down vs right side up was defined 'with regard to the object's normal use', Yu et al. instructed their participants in one condition to imagine picking up the object while making the judgment. This was the only condition in the entire paper that found significant action priming, and the effect was on the same scale as that observed in Tucker & Ellis (1998); see the sketch below for what this condition-dependent effect amounts to.
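To be clear about what 'priming in only one condition' would look like in practice, here is a hypothetical sketch of the kind of comparison involved: per-participant priming scores tested against zero under each instruction. The numbers are made up; Yu et al.'s actual analyses and values differ.

```python
import numpy as np
from scipy import stats

# Invented per-participant priming scores (incompatible RT minus
# compatible RT, in ms) under two instruction conditions.
neutral_instruction = np.array([2, -5, 1, -3, 4, 0, -2, 3])
imagine_pickup = np.array([14, 9, 20, 11, 7, 16, 12, 10])

# Priming should be reliably above zero only when participants are
# told to imagine picking the object up.
t_n, p_n = stats.ttest_1samp(neutral_instruction, 0.0)
t_i, p_i = stats.ttest_1samp(imagine_pickup, 0.0)
print(f"neutral instruction: t = {t_n:.2f}, p = {p_n:.3f}")
print(f"imagine picking up:  t = {t_i:.2f}, p = {p_i:.3f}")
```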
The authors conclude that pictures of objects do not automatically prime actions for using those objects, but can do so if people are explicitly planning to use the object.
Thoughts on this research
This paper contains a lot of examples of the kinds of confusions common to this literature:
- the text alternates between talking about the effects of 'tools' and 'pictures of tools', even though the stimuli are always pictures. This muddles the story and makes it seem as if they are studying the perception of actual objects
- the button press is always about judging some non-action-relevant property of the object in the picture. This is pitched as a strength (we're looking for the response primed by the affordance to interfere with this other task) but, critically, the required response never actually matches how a person would act on the affordance of the object in the picture. It's always button pressing, never reaching-to-grasp-and-lift. This paper went a little further by including conditions where people had to reach, but it was always a reach-to-press. Why would these actions interfere with each other?
- similarly, the responses are always described as 'compatible' or 'incompatible' with the primed response. This is generous, to say the least, for the reason given in the previous point.
These confusions come from the standard cognitive task analysis that underlies all this work. The standard story is that a picture of an object is a lot like the retinal image of an object and so it will trigger the same representations of that object. That representation will contain information about the typical ways we use the object and so activating this representation will activate these action plans, which will interfere with the action you actually have to undertake (the button press to answer the judgment question). This paper's result was interpreted as showing that interference only occurs when the person is in the right 'mental set', namely thinking about using the object.
What might a mental set be?
Pictures of objects have a dual nature, informationally speaking. They present information about the fact that they are pictures (i.e. flat surfaces), and they present information about the thing depicted. We can interact with either kind of information depending on what we are up to: if you are trying to pick up the picture to show someone, you had better be perceiving information about the picture as a surface, whereas if you try to reach for a picture of a coffee cup using information about the cup, you will fail. However, you can also attend to the information presented about the cup. You can't use this information in the online control of an action, but you can use it to identify the object. This matches the 'dorsal/ventral stream' distinction and relates to the thoughts I had in an earlier post about Withagen on illusions. The behaviour you get out of the system will be a function of the information the system was using at the time and whether that information can support the behaviour in question.
If the field of action priming really wants to figure out what's going on, it has to front up to the fact that it's using pictures of objects. But doing so solves the 'mental set' question: the effects on behaviour depend on which set of information variables the person is currently attending to. When action priming shows up, it indexes the fact that people were attending to the thing depicted and not the picture per se. Taking explicit control over this attentional issue would make this task a window into the perception of pictures and the nature of the information in those pictures, all very interesting questions. What it will never tell us about is the perception of object affordances.
Coda
Bonus points to JEP:HPP for publishing what is effectively a series of null results and failures to replicate. I don't know if they would have published without the last positive result demonstrating the role of instructions, but good job anyway.
References
Yu, A. B., Abrams, R. A., & Zacks, J. M. (2014). Limits on action priming by pictures of objects. Journal of Experimental Psychology: Human Perception and Performance, 40(5), 1861.
Wilson, A. D., Tresilian, J. R., & Schlaghecken, F. (2010). Continuous priming effects on discrete response choices. Brain and Cognition, 74(2), 152-159.