Showing posts with label prediction.

Wednesday, 5 April 2017

The Ecological Approach, Explained to an 8 Year Old

About 3 weeks ago I got an email from a person who had found our blog via Robert Epstein's piece 'The Empty Brain'. The email said:
I've had a good read this afternoon, and it has been informative to some degree, however ...
I have an 8 year old son, and due to questions we both have, we have had some very interesting laypeople's conversations about the nature of experience and "the mind" (is it a thing, a physical thing, a process?) as well as such things as memory, embodiment and perception.
It seems it would be really helpful for us (and by extension, possibly many others?) if you could summarise the broad strokes of your theory in some way in which an intelligent 8 year old (and his father!) could understand.
Would this be possible?
Ed Yong has taught me that good science communication doesn't have to be dumbed down; it just has to be pitched right. And while I am no Ed Yong, I say: challenge accepted! Let me know how it goes!

Tuesday, 19 April 2016

Brains learn to perceive, not predict, the world

The current trendy way to think about brains is as predictive machines. Bayesian methods, free energy, generative models, and all the other things in Andy Clark's new book are where it's at. Brains have to predict because, in Clark's terms, we are 'surfing uncertainty': always one step from disaster, because we control our behaviour using sensory information that took time to process and is therefore a few hundred milliseconds out of date. Our brains build models of the world that constantly generate active predictions about what sensation will tell us next, and our behaviour holds together only to the extent that those models do.

The ecological approach provides a different job description for the brain (Charles, Golonka & Wilson, 2014). We are embedded in environments rich in information that specifies a wide variety of behaviourally relevant properties. Some of that information is prospective; that is, it is information available now that is about something in the future. Two examples are the information variables you can use to catch a fly ball: both are states of the world available now that, when used continuously, let you achieve a behaviour in the future (specifically, being in the right place at the right time to catch that ball). Another example is tau, one of several variables specifying time-to-collision.
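To make tau concrete (with my own toy numbers, not figures from the post): tau is just the optical angle an approaching object subtends, divided by the rate at which that angle is expanding. That ratio approximates time-to-collision using only currently available optical information, with no knowledge of the object's size, distance, or speed required:

```python
import math

def optical_angle(radius, distance):
    # Visual angle (radians) subtended by an object of given radius at given distance
    return 2 * math.atan(radius / distance)

# Illustrative numbers: an object of radius 0.2 m, 10 m away, approaching at 2 m/s
radius, v, dt = 0.2, 2.0, 0.001
z = 10.0
theta_prev = optical_angle(radius, z)
z -= v * dt
theta = optical_angle(radius, z)
theta_dot = (theta - theta_prev) / dt   # rate of optical expansion

tau = theta / theta_dot                 # tau: angle / rate of expansion
print(tau)                              # approximates the true time-to-contact z / v = 5.0 s
```

The point of the sketch is that tau is a state of the current optic array: nothing in the calculation reaches into the future, yet the quantity it yields is about the future.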

This post reviews a paper (van der Meer, Svantesson & van der Weel, 2012) that measured visual evoked potentials in infants longitudinally at two ages, using stimuli that 'loomed' (i.e. looked like they were going to collide with the infants). The data show that the infant brains were not learning to predict the world. Instead, neural activity became more tightly coupled to information about the time-to-collision. We learn to perceive, not predict, the world.

Sunday, 9 October 2011

Prospective Control I: The Outfielder Problem

A couple of posts ago I raised the distinction between prediction and prospective control. I was trying to make the point that, if you are coupled to the right information, you don't need to mentally simulate what's happening so that you can run the simulation ahead and predict what's coming. Prediction of this sort is invoked by representational cognitive scientists to cope with things like delays in the nervous system (e.g. Changizi's 'perceiving-the-present' framework). It's a risky business (if you make a mistake, you are suddenly controlling your behaviour according to an incorrect guess), and the better solution is prospective control: coupling your behaviour to information in the world that doesn't tell you about the future, but that, if you use it for a while, will get you where you want to go.

People requested some more on this topic, so here we go. People also requested something other than the outfielder problem, but I am going to start there because it is still the best example, and I'll work through others as we go to demonstrate that this isn't unique to baseball.
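As a preview of why the outfielder problem is such a good example: one candidate strategy, Optical Acceleration Cancellation, has the fielder move so that the tangent of the ball's optical elevation angle rises at a constant rate. A toy simulation (my own illustrative parameters; simple 2D projectile motion with no air resistance) shows the informational basis of the strategy: that rate is constant only when you are standing where the ball will land, so any optical acceleration is information that you need to move:

```python
g = 9.81
vx, vz = 10.0, 15.0      # ball's horizontal and vertical launch speed (m/s)
T = 2 * vz / g           # total flight time
landing_x = vx * T       # where the ball will come down

def tan_elevation(fielder_x, t):
    # Tangent of the ball's optical elevation angle, seen from the fielder
    height = vz * t - 0.5 * g * t ** 2
    return height / (fielder_x - vx * t)

def optical_accel(values):
    # Second difference over equally spaced samples ~ acceleration of tan(angle)
    return (values[2] - values[1]) - (values[1] - values[0])

times = [0.2 * T, 0.4 * T, 0.6 * T]
at_spot = [tan_elevation(landing_x, t) for t in times]        # standing at the landing point
off_spot = [tan_elevation(landing_x + 5.0, t) for t in times] # standing 5 m too deep

print(optical_accel(at_spot))   # ~0: constant rise, nothing to correct
print(optical_accel(off_spot))  # nonzero: move until this cancels
```

Cancelling that acceleration continuously is prospective control in action: the variable says nothing about where the ball will land, but coupling your movement to it gets you there anyway.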

Friday, 23 September 2011

Embodied solutions to neural delays: Information and Network Motifs

One of the bugbears of direct perception is the fact of neural delays. The transmission of signals through the nervous system takes time, and this means that there is a lag between something happening (at, say, the retina) and that event having consequences in cortex, let alone behaviour. In control theory terms, delays in a system can lead to instability in that system's behaviour as you are forced to make corrections that are then incorrect and must themselves be corrected.
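The control theory point can be made with a toy simulation (my own illustrative numbers, not from the post). A simple proportional controller correcting a displacement from delayed feedback, x'(t) = -k · x(t - delay), is a standard delayed-feedback system: it is stable while k · delay stays below π/2, and beyond that each correction is based on information stale enough that the corrections overshoot and grow:

```python
def simulate(k, delay, dt=0.01, t_end=40.0):
    # Euler simulation of x'(t) = -k * x(t - delay); returns final displacement size
    steps = int(t_end / dt)
    lag = int(delay / dt)
    x = [1.0] * (lag + 1)                # history: system starts displaced at 1.0
    for _ in range(steps):
        x.append(x[-1] - k * dt * x[-1 - lag])   # correct using stale state
    return abs(x[-1])

print(simulate(k=1.0, delay=0.5))  # k*delay < pi/2: corrections converge (small)
print(simulate(k=1.0, delay=3.0))  # k*delay > pi/2: corrections overshoot and grow (large)
```

The longer the lag relative to the gain, the more each 'fix' is itself an error that needs fixing, which is exactly the instability worry raised above.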

It's typically suggested in psychology that these delays are compensated for via computational prediction; the nervous system 'perceives the present' by taking the lagged input and using it as the basis of a guess about what's going on now (e.g. Changizi et al., 2008; plus see this post from a while back). This is a problematic idea: if the perceptual control of action is based on a guess that compensates for a variable time lag, then the stability issue remains, not to mention the consequences of mis-predicting the future. Regardless, it's not really an option for a theory of direct perception, and I want to discuss a couple of alternatives.

Thursday, 18 March 2010

Internal representation or behavioural dynamics?

More on Gibson later, but I wanted to get to this today. Yesterday I saw a talk about eye tracking and how people control smooth pursuit movements (the tracking movements your eyes make when you're following something continuously). Tracking performance is a quandary for cognitive folks because we are often very good at it. For instance, if you ask people to track a moving stimulus and record their eye movements, they will successfully foveate the target with almost no lag and no erratic catch-up movements (foveating means using the fovea, the densely packed, high-resolution region of the retina we rely on for precise visual perception). The lack of catch-up is the interesting bit, and cognitive psychology takes it as evidence of prediction by the system. Prediction requires a predictor, which for cognitive psychology is always a representation.

The main thing I learned from this talk is this: I am a rampant ego-maniac who is convinced I am right and other people are wrong, but at least I am capable of entertaining the idea that there is another way to conceive of the task. This speaker (and at least one other person in the room) was completely unaware that their perspective entailed assumptions about the underlying mechanism and simply couldn’t conceive of another way to describe the task: for them, prediction was clearly required and therefore internal representation was clearly required.