Friday, 6 September 2013

Perceiving causes; why knowledge doesn't trump perception

You're an organism, wandering round the world. Some stuff happens; then, other stuff happens. How do we know whether the first stuff caused the second stuff to happen, or whether it's all just one damn thing after another? A new study in Psychological Science (Bechlivanidis & Lagnado, 2013) investigates whether what we know can affect what we perceive, and claims to show that perception can get overridden. This caught my eye because it sounds like the kind of result that will be a problem for our embodied cognition (Wilson & Golonka, 2013), but thinking through the experiment using the tools of dynamics and event perception shows that this result is not going to cause us much concern - it just isn't studying what it says it's studying.

How we know about causation is an old problem in philosophy (David Hume in particular worried about this a lot, and thought all we could ever get was associations between sequences of events). Psychology inherited this problem, along with (as usual) the assumption that there is no perceptual information that could let you perceive the difference between one thing causing another and one thing simply following another. This is not the case. Events have structure and play out in very particular ways because they have a dynamic, and this dynamic can often create information which people can learn to use to perceive the dynamical event structure. Fly balls are an example: they unfold according to a projectile motion dynamic, and the particular form the event therefore takes makes it possible for an outfielder to move so as to generate information that lets them catch the ball.
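To make the fly ball example concrete: under the standard drag-free projectile assumption, an observer standing at the ball's landing point sees the tangent of its elevation angle rise at a constant rate (this is Chapman's classic analysis of the outfielder problem). A minimal sketch, with illustrative launch values:

```python
def tan_elevation(t, vx, vy, g=9.8):
    """Tangent of the ball's elevation angle as seen from its landing point,
    assuming drag-free projectile motion launched with velocity (vx, vy)."""
    T = 2 * vy / g                   # total flight time
    y = vy * t - 0.5 * g * t ** 2    # ball height at time t
    d = vx * (T - t)                 # horizontal distance left to the observer
    return y / d

vx, vy = 20.0, 25.0                  # illustrative launch velocities (m/s)
T = 2 * vy / 9.8
samples = [tan_elevation(f * T, vx, vy) for f in (0.2, 0.4, 0.6, 0.8)]
# tan(elevation) reduces algebraically to g*t/(2*vx), so successive
# differences are constant: the optical variable changes at a uniform rate.
diffs = [b - a for a, b in zip(samples, samples[1:])]
```

A constant rate of change means the ball is headed straight for you; an accelerating or decelerating rate tells you to move back or in. That optical structure exists only because the projectile dynamic constrains how the event can unfold.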

The earliest experimental work on the perception of causation was by Michotte (1954/1963), who presented people with collision events. Collisions are useful because the dynamics aren't too complicated and they produce useful information. He experimentally manipulated the collisions to interfere with the dynamics; for example, sometimes too much time passed between the balls touching and the second ball moving off. People are very sensitive to this manipulation: two balls touching and the second bouncing away correctly looks like a collision, whereas even a brief delay makes it look like the second ball is being 'launched' by something other than the first. The two are perceptually very distinct events, and so Hume was wrong; perception can be sufficient to identify causality.
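The logic of the delay manipulation can be sketched as a toy model (everything here is illustrative, including the 100 ms contiguity threshold; Michotte's actual psychophysics was much richer): generate the kinematics of a launching display with a controllable gap between contact and launch, then classify the display by temporal contiguity.

```python
def launching_display(delay, speed=1.0, contact_x=1.0, dt=0.02, tail=0.5):
    """Kinematics of a Michotte-style display: ball A travels to contact_x
    and stops; ball B moves off `delay` seconds after contact.
    Returns a list of (t, xA, xB) frames."""
    t_contact = contact_x / speed
    frames, t = [], 0.0
    while t <= t_contact + delay + tail:
        xA = min(speed * t, contact_x)
        xB = contact_x + max(0.0, speed * (t - t_contact - delay))
        frames.append((t, xA, xB))
        t += dt
    return frames

def looks_like_launching(frames, contact_x=1.0, threshold=0.1):
    """Toy perceiver: the display reads as A launching B only if B departs
    within `threshold` seconds of A arriving (temporal contiguity)."""
    t_arrive = min(t for t, xA, _ in frames if xA >= contact_x)
    t_depart = min(t for t, _, xB in frames if xB > contact_x)
    return (t_depart - t_arrive) <= threshold
```

With no delay the display classifies as a launch; with a 500 ms gap it does not, even though 'A moved, then B moved' is true in both cases.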

As far as perception is concerned, therefore, events in the world are not merely made up of some stuff happening and then some other stuff happening. Events unfold in particular ways because of the dynamical players involved and, more importantly, there can be information that allows us to perceive the difference. But can our beliefs about causation still affect our perception of causation? A recent Psych Science article (Bechlivanidis & Lagnado, 2013) argues yes, and reports two experiments that demonstrate this. 

The Experiment
The setup uses objects moving around a desktop virtual environment powered by a physics engine (you can interact with the experimental conditions here). The objects moved and interacted with each other in particular ways, and experimental participants learned the rules of the environment by trial and error (control participants did no training).

The goal was to arrange the objects so that, when you hit 'play', they moved and interacted in a way that ended with a red rectangle drifting into the purple square. The game rules required, however, that the red rectangle first be transformed into a red triangle by making the green square collide with the black platform, because the purple square only let triangles cross its border. So the green square had to hit the black platform before the red rectangle hit the purple square.

In the test video (see the Figure), the green square actually hit the black platform after the red rectangle. The critical events happened over about 300 ms, so this was fast and hard to see (though no threshold measures were taken). The task was to arrange a list of events in the order in which they happened in the test video.
The key result is that the group trained on the correct causal sequence was highly likely to get the order of events wrong; they reported what should have happened given the training, not what actually happened in the test video (see Figure 2).
Figure 2
The gray bars show that about 70% of trained participants (versus about 40% of untrained controls) misreported when the green square hit the black platform, and about 55% (versus about 10%) misreported when the rectangle transformed. The trained people described the event that should have happened, not the one that did.

Note that even the control group got it wrong a lot, suggesting the temporal separation of the key events was very hard to evaluate. Experiment 2 replicated and extended the findings by training on two events: the one from Experiment 1, and the one that the incorrect control participants generally thought had happened. This produced stronger effects across the board.

The upshot, according to the authors, is that people's perception of the event was being biased by their knowledge of the causal structure of the event; knowledge is beating perception.

The problem
We depend very heavily on being able to perceive the underlying causal, dynamical structure of events in the world. One reason is that we need to control our actions with respect to what's coming up, and there can only be information about what's coming up if things happen according to a dynamic. Using this information to control action is prospective control, and it's a vital embodied solution to the problem of delays in nervous systems. So it's a problem if this information can be overridden (especially incorrectly) by acquired knowledge about the way things are "supposed" to go.
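Why prospective control needs a dynamic can be illustrated with a toy simulation (a simple pendulum tracked through an assumed 100 ms sensory delay; all values are illustrative): a controller that acts on the raw delayed state is always behind, but one that runs the known dynamic forward over the delay recovers the current state.

```python
import math

def pendulum(theta, omega, dt=0.01, g=9.8, L=1.0):
    """One step of the pendulum dynamic (semi-implicit Euler)."""
    omega -= (g / L) * math.sin(theta) * dt
    theta += omega * dt
    return theta, omega

delay_steps = 10                      # 10 steps of 10 ms = 100 ms sensory delay
history = [(0.5, 0.0)]                # released from 0.5 rad, at rest
for _ in range(300):
    history.append(pendulum(*history[-1]))

true_now = history[-1]
stale = history[-1 - delay_steps]     # the state the nervous system receives

# Prospective control: extrapolate the stale state through the known dynamic.
est = stale
for _ in range(delay_steps):
    est = pendulum(*est)

naive_err = abs(stale[0] - true_now[0])        # acting on the stale state
prospective_err = abs(est[0] - true_now[0])    # acting on the extrapolation
```

Because the dynamic is deterministic, extrapolating through it recovers the current state exactly; if events were just one thing after another, there would be nothing to extrapolate through.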

Luckily, I don't think this is what's going on here. The 'event' that people were trained on was not, in fact, a real event. It was, weirdly enough, just a set of things that happened one after the other. These things unfolded according to a rule ('turn the red rectangle into a red triangle by making the green square hit the black platform before the red rectangle hits the purple square') but they didn't unfold according to a dynamic. There is nothing compulsory about this experimental event that means things have to happen in a certain way (in fact, during training, things often did happen in the 'wrong' order as people tried to figure out the rule). A fly ball, in contrast, has no choice but to move according to the dynamics of projectile motion once it's been hit, and this stability is what allows an outfielder to move so as to produce information that allows them to intercept the ball.

In the real world, the dynamics of events and affordances interact with energy like light to produce kinematic structure that is specific to the dynamics. People detect the structure and learn to use it as information that enables them to perceive the dynamics. In the world, this process is lawful and the information is therefore reliably present when the dynamic is. In the experiment, all that was present was the kinematics (some change over time) and these were not lawfully generated by a dynamic. It was 'virtual' reality and it broke the lawful link from world to information. People learned the contingencies (the non-compulsory but expected sequence) and, when asked to make a difficult judgment under uncertainty, relied on this training to help them. The task was clearly very difficult: the control group misperceived the sequence of events in Experiment 1 to quite a large extent!

This experiment was supposedly about the perception of causation, but the displayed event was not really an event at all: it was not tied together by causation at the level of dynamics, which is the required level of analysis for event identification (Bingham, 1995). The researchers therefore never gave perceptual systems the chance to show what they are truly capable of. In effect, they proved Hume right by creating a virtual environment in which he had to be right and running their experiment there. But in the real world there is typically information about the relevant underlying causal, dynamical structure, and we can use it to perceive the fact that our world is not random. This is perhaps the world we psychologists should be studying, and that Psychological Science should be publishing about.

Bechlivanidis, C., & Lagnado, D. A. (2013). Does the "Why" Tell Us the "When"? Psychological Science, 24(8), 1563-1572.

Bingham, G. P. (1995). Dynamics and the problem of visual event recognition. In R. Port & T. van Gelder (Eds.), Mind as motion: Explorations in the dynamics of cognition. Cambridge, MA: MIT Press.

Michotte, A. (1963). The perception of causality. (T.R. Miles & E. Miles, Trans.). London: Methuen. (English translation of Michotte, 1954).

Wilson, A. D., & Golonka, S. (2013). Embodied cognition is not what you think it is. Frontiers in Psychology, 4.


  1. I think your objection is circular - don't we 'see' the flyball dynamic because we have been trained to catch balls?

    If we haven't been trained, and so are not very good at catching balls, how would you measure our 'causal perception'?

    1. If you are bad at catching then your action measures of perception would reveal that you aren't detecting the relevant information that allows you to perceive the underlying dynamical event. If you want to identify what people are currently detecting and using, you have to run careful perturbation studies.

      Maybe I'm missing something about your point? Of course it takes learning to be able to detect the relevant optical structure and come to use that as information about the task dynamic that created the structure. My point here is that in the real world, information is connected to dynamics in a way that is not the case in this virtual reality experiment. The experiment is therefore not a fair test of 'knowledge' vs perception.

  2. I think you just killed Schrödinger's cat.

  3. I really liked this discussion, but as a grad student still learning about dynamics, I had some questions. Isn't everything occurring in the world just one damn thing after another? Isn't the first commenter's point that if we had learned to interact in this kind of digital environment over time, we could have picked out the regularities in it? Is there something inherently privileged about the dynamics of a fly ball that makes that more useful to us than objects moving across a screen? Could we use an evolutionary argument here to say that our visual systems are tuned to pick up certain naturalistic types of motion well, but not non-natural motion patterns?

    1. Things in the real world don't simply happen one after another. Events have structure and unfold in particular ways; they have a dynamic. Take a simple pendulum system: once you set it running, where it is at time t2 is importantly a function of where it was at t1, and the specific function that gets you from state to state is the relevant equation of motion.
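As a concrete sketch (a minimal simulation with illustrative parameters): every state of a simulated pendulum is computed from the previous state via the equation of motion, so restarting from any intermediate state reproduces exactly the same future.

```python
import math

def step(theta, omega, dt=0.01, g=9.8, L=1.0):
    """The state at t + dt is a fixed function of the state at t,
    given by the equation of motion d(omega)/dt = -(g/L)*sin(theta)."""
    omega -= (g / L) * math.sin(theta) * dt
    theta += omega * dt
    return theta, omega

states = [(0.5, 0.0)]            # released from 0.5 rad, at rest
for _ in range(100):
    states.append(step(*states[-1]))

# Restarting from the halfway state recovers the second half exactly:
# the event is carried forward by the dynamic, not by a script.
resumed = states[50]
for _ in range(50):
    resumed = step(*resumed)
```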

      Fly balls are the same; once a ball has been hit by a batter and turned into a fly ball, its motion is again described by the relevant dynamics.

      Perceptually, this means that once you have learned to detect information that specifies the whole event, perceiving that the event is running can be equivalent to perceiving the event as a whole. We critically depend on this to succeed in the world; if things were just one damned thing after another we'd always be operating in the dark about what's coming up in the near future, and this would be a disaster. Luckily it's not true!

      Does that answer the question?