How we come to know about causation is an old problem in philosophy (David Hume in particular worried about this a lot, and concluded that all we could ever get was associations between sequences of events). Psychology inherited this problem, but (as usual) the worry rests on the assumption that there is no perceptual information that could distinguish one thing causing another from one thing simply following another. This is not the case. Events have structure and play out in very particular ways because they have a dynamic, and this dynamic can often create information that people can learn to use to perceive the dynamical event structure. Fly balls, for example, follow a projectile-motion dynamic, and the particular form the event therefore takes makes it possible to move so as to generate information that lets you catch the ball.
The earliest experimental work on the perception of causation was by Michotte (1954/1963), who presented people with collision events. Collisions are useful because their dynamics aren't too complicated and they produce useful information. He manipulated the collisions experimentally to interfere with the dynamics; for example, sometimes too much time passed between the balls touching and the second ball moving off. People are very sensitive to this manipulation: two balls touching with the second bouncing away correctly looked like a collision, whereas even a brief delay made it look like the second ball was being 'launched' by something other than the first. The two were perceptually very distinct events, and so Hume was wrong; perception is sufficient to identify causality.
As far as perception is concerned, therefore, events in the world are not merely made up of some stuff happening and then some other stuff happening. Events unfold in particular ways because of the dynamical players involved and, more importantly, there can be information that allows us to perceive the difference. But can our beliefs about causation still affect our perception of causation? A recent Psych Science article (Bechlivanidis & Lagnado, 2013) argues yes, and reports two experiments that demonstrate this.
The setup used objects moving around a desktop virtual environment powered by a physics engine (you can interact with the experimental conditions here). These objects moved and interacted with each other in particular ways, and experimental participants learned the rules of the environment by trial and error (control participants did no training).
The goal was to arrange the objects so that when you hit 'play', the objects moved and interacted so that a red rectangle drifted into the purple square. The game rules required, however, that the red rectangle first be transformed into a red triangle by making the green square collide with the black platform, because the purple square only let triangles cross its border. So the green square had to hit the black platform before the red rectangle hit the purple square.
In the test video (see the Figure), the green square actually hit the black platform after the red rectangle. The critical events happened over about 300 ms (so this was fast and hard to see; no perceptual thresholds were measured, though). The task was to arrange a list of events in the order in which they happened in the test video.
The key result is that the group trained on the correct causal sequence was highly likely to get the order of events wrong; they reported what should have happened given the training, not what actually happened in the test video (see Figure 2).
Note that even the control group got it wrong a lot, suggesting that the temporal separation of the key events was very hard to evaluate. Experiment 2 replicated and extended these findings by training participants on two events: the one from Experiment 1 and the one that control participants incorrectly tended to report. This produced stronger effects across the board.
The upshot, according to the authors, is that people's perception of the event was being biased by their knowledge of the causal structure of the event; knowledge is beating perception.
We depend very heavily on being able to perceive the underlying causal, dynamical structure of events in the world. One reason is that we need to control our actions with respect to what's coming up, and there can only be information about what's coming up if things happen according to a dynamic. Using this information to control action is called prospective control, and it's a vital embodied solution to the problem of delays in nervous systems (Wilson & Golonka, 2013). So it's a problem if this information can be overridden (especially incorrectly) by acquired knowledge about the way things are "supposed" to go.
Luckily, I don't think this is what's going on here. The 'event' that people were trained on was not, in fact, a real event. It was, weirdly enough, just a set of things that happened one after the other. These things unfolded according to a rule ('turn the red rectangle into a red triangle by making the green square hit the black platform before the red rectangle hits the purple square'), but they didn't unfold according to a dynamic. Nothing about this experimental event compelled things to happen in a certain way (in fact, during training, things often did happen in the 'wrong' order as people tried to figure out the rule). A fly ball, in contrast, has no choice but to move according to the dynamics of projectile motion once it's been hit, and this stability is what allows an outfielder to move so as to produce information that allows them to intercept the ball.
In the real world, the dynamics of events and affordances interact with energy like light to produce kinematic structure that is specific to the dynamics. People detect the structure and learn to use it as information that enables them to perceive the dynamics. In the world, this process is lawful and the information is therefore reliably present when the dynamic is. In the experiment, all that was present was the kinematics (some change over time) and these were not lawfully generated by a dynamic. It was 'virtual' reality and it broke the lawful link from world to information. People learned the contingencies (the non-compulsory but expected sequence) and, when asked to make a difficult judgment under uncertainty, relied on this training to help them. The task was clearly very difficult: the control group misperceived the sequence of events in Experiment 1 to quite a large extent!
This experiment was supposedly about the perception of causation, but the displayed event was not really an event at all: it was not tied together by causation at the level of dynamics, which is the required level of analysis for event identification (Bingham, 1995). The researchers therefore never gave the perceptual systems a chance to show what they are truly capable of. In effect, they proved Hume right by creating a virtual environment in which he had to be right and running their experiment there. But in the real world, there is typically information about the relevant underlying causal, dynamical structure, and we can use it to perceive the fact that our world is not random. This is perhaps the world we psychologists should be studying, and that Psychological Science should be publishing about.
Bechlivanidis, C., & Lagnado, D. A. (2013). Does the "Why" tell us the "When"? Psychological Science, 24(8), 1563-1572. DOI: 10.1177/0956797613476046
Michotte, A. (1963). The perception of causality (T. R. Miles & E. Miles, Trans.). London: Methuen. (Original work published 1954.)
Wilson, A. D., & Golonka, S. (2013). Embodied cognition is not what you think it is. Frontiers in Psychology, 4, 58. DOI: 10.3389/fpsyg.2013.00058