One of my main experimental tasks is coordinated rhythmic movement. This is a simple lab task in which I ask people to produce rhythmic movements (typically with a joystick) and coordinate those at some mean relative phase. Not all coordinations are equally easy; without training, people can typically only reliably produce 0° (in-phase) and 180° (anti-phase) movements. People can learn other coordinations, however; I typically train the maximally difficult 90° (although my PhD student has just completed a study training people at 60°; more on that awesome data shortly). I use coordination to study the perceptual control of action and learning.
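If you have never seen relative phase as a number, here is a minimal sketch (my own illustration for this post, not my actual analysis code) of computing continuous relative phase from the positions and velocities of two rhythmically moving limbs, using the standard atan2(normalised velocity, position) definition of each limb's phase:

```python
import numpy as np

def relative_phase(x1, v1, x2, v2):
    """Continuous relative phase between two oscillators, in degrees.

    Each oscillator's phase angle is atan2(normalised velocity, position);
    relative phase is the unwrapped difference between the two angles.
    """
    phi1 = np.arctan2(v1 / np.max(np.abs(v1)), x1 / np.max(np.abs(x1)))
    phi2 = np.arctan2(v2 / np.max(np.abs(v2)), x2 / np.max(np.abs(x2)))
    return np.degrees(np.unwrap(phi1 - phi2))

# two 1 Hz movements in anti-phase (180 degrees apart)
t = np.linspace(0, 10, 1000)
x1 = np.cos(2 * np.pi * t)
v1 = -2 * np.pi * np.sin(2 * np.pi * t)
x2 = np.cos(2 * np.pi * t + np.pi)
v2 = -2 * np.pi * np.sin(2 * np.pi * t + np.pi)

rp = relative_phase(x1, v1, x2, v2)  # hovers at +/-180 degrees throughout
```

A participant at 0° (in-phase) would produce a relative phase time series hovering around 0°; the variability of that time series is the usual measure of coordination stability.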
My work is all designed to test and extend Bingham's mechanistic model of coordination dynamics. This model explicitly identifies all the actual components of the perception-action system producing the behaviour, and models them. In particular, it models the perceptual information we use to perceive relative phase: the relative direction of motion. This information is an important contributor to coordination stability, and this model is a real step up in terms of how we do business in psychology.
There is another factor that affects coordination stability, however, and the model currently only addresses this implicitly. That factor is muscle homology, and it's been repeatedly shown to be an important factor. For a long time, I have avoided worrying about it, because I have had no mechanistic way to talk about it. I think I have the beginnings of a way now, though, and this post is the first of several as I develop my first draft of that analysis.
Tuesday, 19 December 2017
Sunday, 5 November 2017
A Test of Direct Learning (Michaels et al, 2008)
Direct learning (Jacobs & Michaels, 2007) is an ecological hypothesis about the process of perceptual learning. I describe the theory here, and evaluate it here. One of the current weaknesses is little direct empirical support; the 2007 paper only reanalysed earlier studies from the new perspective. Michaels et al (2008) followed up with a specific test of the theory in the context of dynamic touch. The study was designed to provide data that could be plotted in an information space, which provides some qualitative hypotheses about how learning should proceed.
There are some minor devils in the detail; but overall this paper is a nice concrete tutorial on how to develop information spaces, how to test them empirically and how to evaluate the results that come out. The overall process will benefit from committing more fully to a mechanistic, real-parts criterion but otherwise shows real promise.
Labels:
direct learning,
dynamic touch,
Jacobs,
learning,
Michaels
Friday, 3 November 2017
Evaluating 'Direct Learning'
In my previous post I laid out the direct learning framework developed by Jacobs & Michaels (2007). In this post, I'm going to evaluate the central claims and assumptions with a mechanistic eye. Specifically, my question is mainly going to be 'what are the real parts or processes that are implementing that idea?'.
This is a spectacularly complicated topic and I applaud Jacobs & Michaels for their gumption in tackling it and the clarity with which they went after it. I also respect the ecological rigour they have applied as they try to find a way to measure, analyse and drive learning in terms of information, and not loans on intelligence. It is way past time for ecological psychology to tackle the process of learning head on. I do think there are problems in the specific implementation they propose, and I'll spend some time here identifying those problems. I am not identifying these to kill off the idea, though; read this as me just at the stage of my thinking where I am identifying what I think I need to do to improve this framework and use it in my own science.
Thursday, 2 November 2017
Direct Learning (Jacobs & Michaels, 2007)
The ecological hypothesis is that we perceive properties of the environment and ourselves using information variables that specify those properties. We have to learn to use these variables; we have to learn to detect them, and then we have to learn what dynamical properties they specify.
Learning to detect variables takes time, so our perceptual systems will only be able to become sensitive to variables that persist for long enough. The only variables that are sufficiently stable are those that can remain invariant over a transformation, and the only variables that can do this are higher order relations between simpler properties. We therefore don't learn to use the simpler properties, we learn to use the relations themselves, and these are what we call ecological information variables. (Sabrina discusses this idea in this post, where she explains why these information variables are not hidden in noise and why the noise doesn't have to be actively filtered out.)
Detecting variables is not enough, though. You then have to learn what dynamical property that kinematic variable is specifying. This is best done via action; you try to coordinate and control an action using some variable and then adapt or not as a function of how well that action works out.
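To make the idea of a kinematic variable specifying a dynamical property concrete, here is the classic textbook example (tau, the optical specification of time-to-contact; this is my illustration, not an example from Jacobs & Michaels). The optical angle of an approaching object and its rate of change are each ambiguous about the dynamics on their own, but their ratio is a higher order relation that specifies time-to-contact:

```python
import numpy as np

# An approaching object of fixed size projects an optical angle theta
# that grows over time. Object size, distance and speed are arbitrary
# toy values; none of them is available to the observer.
size = 0.5        # object size (m)
speed = 2.0       # approach speed (m/s)
distance = 10.0   # current distance (m)

# optical angle now, and a moment later (finite-difference rate)
dt = 1e-4
theta_now = 2 * np.arctan(size / (2 * distance))
theta_next = 2 * np.arctan(size / (2 * (distance - speed * dt)))
theta_dot = (theta_next - theta_now) / dt

tau = theta_now / theta_dot   # the higher order optical relation
true_ttc = distance / speed   # the dynamical property: time to contact
# tau matches true time-to-contact (exactly so, in the small-angle limit)
```

Neither theta nor theta_dot alone survives a change in object size or speed; the ratio does, which is what makes it a candidate information variable.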
While a lot of us ecological people study learning, there was not, until recently, a more general ecological framework for talking about learning. Jacobs & Michaels (2007) proposed such a framework, and called it direct learning (go listen to this podcast by Rob Gray too). We have just had a fairly intense lab meeting about this paper and this is an attempt to note all the things we figured out as we went. In this post I will summarise the key elements, and then in a follow-up I will evaluate those elements as I try to apply this framework to some recent work I am doing on the perception of coordinated rhythmic movements.
Saturday, 21 October 2017
What Limits the Accuracy of Human Throwing?
Throwing a projectile in order to hit a target requires you to produce one combination from the set of release parameters that result in a hit: release angle, velocity (speed and direction) and height (relative to the target). My paper last year on the affordances of targets quantified these sets using a task dynamical analysis.
There is one additional constraint; these release parameters have to occur during a very short launch window. This window is the part of the hand's trajectory during which the ball must be released in order to intercept the target. It is very easy to release slightly too late (for example) and drill the projectile into the ground.
How large is this launch window? It is surprisingly, terrifyingly small; Calvin (1983) and Chowdhary & Challis (1999) have suggested it is on the order of 1ms. Those papers used a sensitivity analysis on simulated trajectories to show that accuracy is extremely sensitive to timing errors and this millisecond level precision is required to produce an accurate throw.
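To see where a number like that comes from, here is a toy sensitivity analysis in the spirit of those papers (the circular hand trajectory and all the parameters are made up for illustration; this is not Calvin's or Chowdhary & Challis's actual model):

```python
import numpy as np

G = 9.81  # gravitational acceleration (m/s^2)

def landing_x(release_time, radius=0.5, omega=20.0, height=1.8):
    """Horizontal distance travelled by a ball released from a hand moving
    on a vertical circle at angular speed omega (rad/s). Toy model: point
    release, no air resistance, flat ground."""
    angle = omega * release_time              # hand angle at release
    x0 = radius * np.sin(angle)               # release position
    y0 = height + radius * np.cos(angle)
    vx = radius * omega * np.cos(angle)       # release velocity (tangential)
    vy = -radius * omega * np.sin(angle)
    # projectile flight time until the ball reaches the ground
    t_flight = (vy + np.sqrt(vy ** 2 + 2 * G * y0)) / G
    return x0 + vx * t_flight

# sensitivity: how far does the landing point move per millisecond of
# release-timing error, near an arbitrary nominal release time?
t0 = 0.30
dx = landing_x(t0 + 0.001) - landing_x(t0)
```

With these toy numbers a single millisecond of timing error shifts the landing point by tens of centimetres, which is the flavour of the result: hitting a small target demands release timing at roughly the millisecond scale.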
Smeets, Frens & Brenner (2002) tested this hypothesis with dart throwing. If this intense pressure on timing the launch window determines accuracy, then throwers should organise their behaviour and throw in a way that makes their launch window as tolerant of errors as possible. They replicated the sensitivity analyses on human data to see if people try to give themselves the maximum error tolerance in the launch, or whether they were trying to accommodate errors in other variables.
What they found is that the launch window timing is not the limiting factor. Their throwers (who were not especially expert) did not throw so as to minimise the sensitivity of the launch window timing to errors. Quite the contrary; they lived in a fairly sensitive region of the space, and then didn't make timing errors. They did throw so as to reduce the sensitivity to speed errors, however, and errors in the targeting came from errors in the spatial path of the hand that the system did not adequately compensate for, rather than the timing of the hand's release. (The authors saw some evidence that the position, speed and direction of the hand trajectory were organised into a synergy, which aligns nicely with the motor abundance hypothesis).
I would like to replicate and extend this analysis process using more detailed simulations and data from better throwers. I've become convinced it's a very useful way to think of what is happening during the throw. I also think these results point to some interesting things about throwing. Specifically, while timing and speed must both be produced with great accuracy, the system has developed two distinct solutions to coping with errors. Timing errors are reduced by evolving neural systems that can reliably produce the required precision. Speed errors have been left to an online perception-action control process which adapts the throw to suit local demands. The latter is the more robust solution; so why was timing solved with brain power?
Labels:
affordances,
launch window,
motor abundance hypothesis,
motor control,
Smeets,
throwing,
timing,
UCM
Saturday, 16 September 2017
The Information for Progressive Occlusion
Gibson's ecological psychology is weird, if you are coming from a more traditional information-processing background. The two approaches make radically different assumptions about the nature of the world to be perceived; they have radically different ontologies. This means that there is little if any useful overlap in the way they do things, and communicating across the gap is very hard. I have a recent paper - preprint here - where I go into detail about the two ontologies as I defend ecological psychology from interface theory. It's essentially Turvey et al, 1981, but that's a bear of a read if you aren't already ecologically minded. Do mine first :)
Anyway, concrete examples help. My go-to is the outfielder problem but people are tired of that one. My other favourite is progressive occlusion (Gibson, Kaplan, Reynolds & Wheeler, 1969; Kaplan, 1969). Gibson worked this example up himself in great detail and so it stands as a nice concrete example to illustrate some elements of the ecological ontology. Given the recent total solar eclipse, it seems like the right time to blog it!
This post will review occlusion, talk about how it works and work with some demos. These are all linked from here; there is Matlab/Psychtoolbox code to run a demo, a video of that running and a Powerpoint with some slides. I'll refer to these throughout - occlusion is a dynamic process and so you need to see it moving for it to make sense.
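If you can't run the Matlab demo, here is a stripped-down sketch of the underlying logic (my own toy version for this post, not the Psychtoolbox code): optical texture from the rear surface is progressively deleted at the occluding edge, and the same texture accretes back into view in reverse order when the edge retreats.

```python
import numpy as np

rng = np.random.default_rng(0)
texture_x = rng.uniform(0, 10, 200)  # optical texture on the rear surface

def visible(edge_x):
    """Texture elements still in view when an occluder covers everything
    to the left of edge_x."""
    return texture_x[texture_x >= edge_x]

# occluder edge sweeps right: texture is progressively DELETED at the edge
counts_going = [len(visible(e)) for e in np.linspace(0, 10, 11)]
# edge sweeps back left: the same texture ACCRETES back into view
counts_coming = [len(visible(e)) for e in np.linspace(10, 0, 11)]
# orderly deletion-then-accretion of the same elements at an edge is the
# pattern specific to occlusion, not to going out of existence
```

The point of the demo is that this deletion/accretion pattern is lawfully specific to "going out of view behind something", which is exactly what observers report perceiving.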
Thursday, 31 August 2017
Expectations and the Size-Weight Illusion
The size-weight illusion (SWI) occurs when people are asked to judge the weights of two different sized but identically weighted objects. The smaller object is judged to be heavier. There are a variety of explanations for this illusion (see Buckingham, 2014 for a review). I'm going to be reviewing some papers on it as I develop some experiments connected to my throwing research.
One set of explanations is 'bottom up', i.e. perceptual. Amazeen & Turvey, 1996 suggested that people do not perceive weight but inertia (this is the dynamic touch hypothesis about the inertia tensor) and Zhu & Bingham (2011) have proposed the illusion is not the misperception of weight but the correct perception of throwability (I obviously quite like this one, and have discussed it here). Interestingly Zhu et al (2013) have since shown that the inertia tensor does not explain the throwing related SWI!
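For readers new to dynamic touch, here is a quick illustration of why the inertia tensor dissociates from weight (my own toy numbers, not Amazeen & Turvey's stimuli): two wielded rods of identical mass but different length have very different inertia tensors about the wrist.

```python
import numpy as np

def inertia_tensor(masses, positions):
    """Inertia tensor of point masses about the origin (the wrist):
    I_ij = sum over masses of m * (|r|^2 * delta_ij - r_i * r_j)."""
    I = np.zeros((3, 3))
    for m, r in zip(masses, positions):
        r = np.asarray(r, float)
        I += m * (np.dot(r, r) * np.eye(3) - np.outer(r, r))
    return I

# two rods of equal total mass (0.5 kg), modelled as 5 point masses along
# the z-axis, held at one end; the long rod is twice the length
short_rod = inertia_tensor([0.1] * 5, [(0, 0, 0.05 * k) for k in range(1, 6)])
long_rod = inertia_tensor([0.1] * 5, [(0, 0, 0.10 * k) for k in range(1, 6)])

# same weight, but the long rod's resistance to rotation (largest
# eigenvalue) is 4x that of the short rod: they 'feel' very different
ratio = (np.max(np.linalg.eigvalsh(long_rod)) /
         np.max(np.linalg.eigvalsh(short_rod)))
```

Doubling the length quadruples the relevant eigenvalues while leaving weight untouched, which is why inertia-based perception can diverge from weight-based judgements.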
The second set of explanations is 'top down'. The basic hypothesis is that the sensorimotor system expects larger things to weigh more than smaller things, within a class of 'things'. This expectation has been learned over time via experience of the real world, in which this is basically true. Large mugs weigh more than small mugs, even if large mugs weigh less than small anvils.
There are two interesting papers that have looked at the top-down hypothesis.
Labels:
affordances,
calibration,
expectations,
size-weight illusion,
throwing
Monday, 17 July 2017
Dear Disney; Let Me Help You VR
Disney posted a video recently from some researchers getting people to catch real balls in virtual reality (VR). It was a nice demo of some technology, and I don't actually want to be down on these researchers, but of course the psychology was lacking and there were some weird moments which I thought I would note for posterity. Also, Disney researchers, if you're reading, call me :)
Labels:
Disney,
information,
outfielder problem,
task dynamics,
virtual reality,
VR
Sunday, 21 May 2017
Ecological Information Is a Perceptual Mapping That Tracks Evolutionary Fitness
Interface theory in cartoon form. Thanks to Louise Barrett for reminding me this exists :)
Thanks to chats on Twitter (thanks Greg!) and applying the basic move of 'those aren't working but IT is still weird, what's next?', my new line of attack relates to a result from their simulations.
Labels:
ecological laws,
Gibson,
Hoffman,
information,
interface theory,
science
Friday, 19 May 2017
Does Interface Theory Have Consequences for the Ecological Approach?
I've been working on a commentary about interface theory (Hoffman, Singh & Prakash, 2015) which I have previously blogged about here. I'm still interested because it is, in part, a fairly direct shot at the ecological approach and I'm always keen to break those if I can. My piece stalled out, though, so I thought I'd spend some time here thinking out loud about the argument that stalled and another critique that came up as I re-read the paper.
To unbury the lede I just finished writing: the primary thing Hoffman et al get wrong about Gibson is that they think he wanted his theory to produce veridical perceptions, not simply adaptive ones. Gibson actually wanted adaptive perceptions, but found a way in which they were also veridical. This emphasis matters; Gibson does not stand or fall on issues of veridicality. In addition, every one of Hoffman et al's big swings applies only to inferential, constructivist theories of perception; Gibson is immune on these grounds as well. All Hoffman et al have done is redraw the terrain inferential theories have to traverse, and it will be interesting to see if anyone takes the bait. The major argument simply remains: is perception inferential or ecological, and may the best data set win.
Labels:
Gibson,
Hoffman,
information,
interface theory,
theories of psychology
Tuesday, 2 May 2017
Exploring Some Handwriting Data (Experiment 1)
I have been trying to science handwriting for a year or two now, and I've had some time to dive into some recent data I collected to address some issues coming up in earlier studies. I had first run two training studies and analysed them using the lognormal model (which I blogged about here), but I immediately realised we were facing some wild individual variation; there are many ways to produce the necessary movement kinematics for a given letter and they might all be just fine. There is no single right way to produce a letter, so long as it's legible.
I therefore ran a simple study to quantify the within and between participant variation in letter production, as measured using the lognormal parameters nbLog and SNR/nbLog. A quick reminder; SNR is the signal-to-noise ratio and is a measure of the model fit; nbLog is the number of lognormal curves needed to fit the data; and the ratio of the two takes the model fit and penalises it by how hard the model had to work to get there. The data are here if you care to play.
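For anyone unfamiliar with the model, here is a caricature of the two measures (a 1D speed profile only; the real sigma-lognormal model fits vectorial strokes, and none of this is Plamondon's code):

```python
import numpy as np

def lognormal_stroke(t, D, t0, mu, sigma):
    """Speed profile of a single stroke: a lognormal curve of amplitude D
    whose activation starts at time t0 (a 1D caricature of the model)."""
    v = np.zeros_like(t)
    m = t > t0
    v[m] = (D / (sigma * np.sqrt(2 * np.pi) * (t[m] - t0))) * \
        np.exp(-(np.log(t[m] - t0) - mu) ** 2 / (2 * sigma ** 2))
    return v

def snr_db(observed, reconstructed):
    """Signal-to-noise ratio of the model reconstruction, in decibels."""
    noise = observed - reconstructed
    return 10 * np.log10(np.sum(observed ** 2) / np.sum(noise ** 2))

t = np.linspace(0, 2, 500)
# a pretend recorded speed profile built from two overlapping strokes
observed = (lognormal_stroke(t, 1.0, 0.10, -1.0, 0.30) +
            lognormal_stroke(t, 0.8, 0.60, -0.9, 0.25))
# a two-lognormal reconstruction with slightly mis-estimated parameters
reconstructed = (lognormal_stroke(t, 0.98, 0.10, -1.0, 0.30) +
                 lognormal_stroke(t, 0.80, 0.62, -0.9, 0.25))

nbLog = 2                              # lognormal curves used in the fit
SNR = snr_db(observed, reconstructed)  # fit quality
quality = SNR / nbLog                  # quality penalised by model effort
```

The penalised score captures the trade-off in the text: you can always buy a better SNR by throwing more lognormals at the data, so dividing by nbLog rewards fits that are both good and parsimonious.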
Participants viewed each letter of the alphabet, one at a time on a screen. Their job was to simply write that letter on a Wacom tablet where I could record the 2D kinematics of their movements. People saw each letter 10 times in a fully randomised order for a total of 260 trials.
Note: what is coming is entirely exploratory. I am literally just poking around to map out what I'm up against given the nature of the DVs. I am still figuring out the right analysis to capture what I want to say, so any thoughts welcome.
Labels:
data,
handwriting,
lognormal model,
science
Wednesday, 19 April 2017
Some Thoughts on Handwriting
A few years ago I reviewed a paper about a method, the sigma-lognormal model, to quantitatively assess handwriting (Plamondon et al, 2013). I was interested because I had in the past worked with children with developmental coordination disorder on a project developing ways to take better movement assessment out of the lab and into the clinic, and handwriting is a) something kids and their parents value and want to improve but b) a beast to quantify.
Réjean Plamondon kindly sent me his analysis software to play with, and I have three experiments worth of data I am currently analysing in an effort to assess whether it can help me find what I want. Here I'll briefly review the model, the experiments and some lessons I've learned training myself to write with my nondominant right hand.
Wednesday, 5 April 2017
The Ecological Approach, Explained to an 8 Year Old
About 3 weeks ago I got an email from a person who had found our blog via Robert Epstein's piece 'The Empty Brain'. The email said
I've had a good read this afternoon, and it has been informative to some degree, however ...

I have an 8 year old son, and due to questions we both have, we have had some very interesting laypeople's conversations about the nature of experience and "the mind" (is it a thing, a physical thing, a process?) as well as such things as memory, embodiment and perception.

It seems it would be really helpful for us (and by extension, possibly many others?) if you could summarise the broad strokes of your theory in some way in which an intelligent 8 year old (and his father!) could understand.

Would this be possible?

Ed Yong has taught me that good science communication doesn't have to be dumbed down, it just has to be pitched right, and while I am no Ed Yong, I say, challenge accepted! Let me know how it goes!
Saturday, 25 March 2017
Language, thought and the ecological approach; A Purple Peril
As part of a class on cognitive psychology, I give a seminar in which we talk about the research on the relationship between language and thought. In particular, I show this great talk by Lera Boroditsky as a starting point. She describes the kind of research in this area, including results showing that how we linguistically interact with space and time affects how we physically interact with them. For example, some languages like English use an egocentric frame of reference when talking about space (e.g. describing things as being to the left or right, where the origin of this space is the speaker). Other languages use a geocentric frame of reference (e.g. describing things as being to the south of you). In order to speak and understand such a language, you have to remain oriented in space, and speakers of these kinds of languages have been shown to be capable of impressive feats of dead reckoning previously thought impossible in humans.
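The frame-of-reference difference is easy to make concrete. Here is a toy sketch (my own illustration, nothing from Boroditsky's actual studies) of the same spatial layout described both ways; note that only the geocentric description requires the speaker to stay oriented to the compass:

```python
import math

def egocentric(speaker_xy, heading_deg, target_xy):
    """Describe a target relative to a speaker facing heading_deg
    (0 = north, 90 = east): 'ahead', 'right', 'behind' or 'left'."""
    dx = target_xy[0] - speaker_xy[0]
    dy = target_xy[1] - speaker_xy[1]
    bearing = math.degrees(math.atan2(dx, dy))       # compass bearing
    rel = (bearing - heading_deg + 180) % 360 - 180  # relative to facing
    if -45 <= rel < 45:
        return "ahead"
    if 45 <= rel < 135:
        return "right"
    if rel >= 135 or rel < -135:
        return "behind"
    return "left"

def geocentric(speaker_xy, target_xy):
    """Describe the same target in compass terms, independent of where
    the speaker happens to be facing."""
    dx = target_xy[0] - speaker_xy[0]
    dy = target_xy[1] - speaker_xy[1]
    bearing = math.degrees(math.atan2(dx, dy)) % 360
    return ["north", "east", "south", "west"][int(((bearing + 45) % 360) // 90)]

# the same cup, due east of the speaker: the egocentric description flips
# from 'right' to 'left' when the speaker turns around; the geocentric
# description never changes, but demands knowing which way north is
```

The cognitive demand the text describes falls out of the geocentric function: to use it at all, the speaker must carry a running estimate of their compass orientation, which is the basis of the dead-reckoning findings.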
The reason this is all interesting is in the context of how the field is changing how it thinks about language; is it magical, or merely interesting? If the former, language becomes a unique human cognitive capacity that requires specific neural mechanisms that serve language and nothing else. If the latter, language becomes an integrated part of our cognitive systems and we should expect it to show these connections to other capacities.
The weight of evidence right now I think favours the latter view. In fact, one whole strand of embodied cognition (Shapiro’s ‘conceptualisation’ hypothesis strand) explicitly pursues these connections between language and other capacities, for example Lakoff’s work on metaphors being grounded in action. Language, while still phenomenal in what it can do, is not different in kind to the rest of cognition.
The field is still very much at the ‘functional model’ stage of developing explanations, however. The research mostly just catalogues linguistic differences and cognitive differences and works to map those onto each other in a fairly metaphorical, word-association kind of way (e.g. politics is talked about in terms of left and right wing, so this should connect to physical movements to the left and the right). Our ecological question has become: what kind of mechanism might allow this kind of cross-talk? As I’ve been chatting to students I’ve been connecting a few dots for myself. This post sketches the outline of a mechanistic, ecological research programme for attacking the fascinating problem of the relationship between language and thought.
Labels:
convention,
information,
language,
mechanism,
Purple Peril,
transfer of learning
Sunday, 26 February 2017
Evidence for the Kinematic Specification of Dynamics
Gibson’s most detailed analysis of the KSD problem came from work on the perception of dynamic occlusion (viewing one surface become progressively hidden behind another as they move; Gibson, Kaplan, Reynolds & Wheeler, 1969; Kaplan, 1969). As one surface goes behind another, the sensations coming from the rear surface stop hitting the retina; they disappear. However, what is perceived is the progressive occlusion of a persisting surface; it is not disappearing, it is going out of view. Gibson and his students identified the kinematic pattern of transformation of the optic array that was specific to occlusion and distinguished it from the pattern specific to a surface actually going out of existence. In the former case, optical texture from the rear surface is progressively deleted over time from the optic array at an edge as it goes in behind the closer surface, and that texture progressively accretes as it comes back into view. In the latter case, there are a variety of transformations depending on how the surface is disappearing (melting vs being eaten, etc). Each event creates a specific optical pattern, but these patterns are not identical to the underlying dynamics. Observers, however, readily and easily perceive and report the underlying dynamics, not the optical patterns.

Additional evidence that people are perceiving the dynamics comes from work in multiple object tracking (Scholl & Pylyshyn, 1999). People can track multiple moving targets over time, and can continue to do so even if the objects move in and out of view, but only if they do so in an occlusion event. If the objects go out of view by imploding, tracking goes to chance. In the occlusion case, the visual attention system continues to perceive a persisting object and can often pick it back up when it returns to view. In the imploding case, this system perceives that the object has ceased to exist, and it no longer tracks it.
Saturday, 18 February 2017
The Nature of Ecological Perceptual Information
The central issue in the perception of the world is how the perceptual system comes into contact with that world. In vision, the assumption for centuries has been that the experienced world must be recovered somehow from the 2D image of the world projected onto the retina by the lens of the eye. Scientists therefore hunted for patterns on the retina that preserved critical structure from the world by copying some part of that critical structure.
James J Gibson was, for a long time, no exception to this hunt. His early empirical work (grounded in the theory he laid out in The Perception of the Visual World; Gibson, 1950) created and manipulated retinal images that, for example, contained gradients of optical texture that matched gradients of physical texture created as surfaces receded in depth, or changed their shape or orientation relative to a point of observation. But time and again, Gibson found that perceptual experience was not any straightforward function of retinal stimulation (i.e. sensations). People did not ‘see’ what was on the retina (Reed, 1988). The most powerful demonstration of this fact is Gibson’s analysis of dynamic occlusion (Gibson, Kaplan, Reynolds & Wheeler, 1969; Kaplan, 1969), to which we will return below.
Gibson’s later career was defined by the search for an explanation of how perception could be possible if it wasn’t based on sensations and retinal images. Gibson’s solution was his theory of the ecological information available for visual perception published in The Ecological Approach to Visual Perception (Gibson, 1979).
What follows is a description of ecological information with reference to some of Gibson’s work and the extensive research literature that has taken place since Gibson’s death in 1979. The focus will be on the information in light for vision, because that has been the focus of the research. However, the principles hold for all the various energy media our perceptual systems interact with, and we will review this briefer evidence as well.
The Ecological Approach - The Accidental Textbook
Over the years this blog has collected 232 posts on a wide range of topics, and we have multiple papers out and forthcoming on the ecological approach to psychology and the behavioural sciences more generally. On the plus side, I have material all over on a wide range of topics; on the down side, I have material all over on a wide range of topics and it's hard sometimes to hand someone a useful entry point.
I always figured we would accidentally write a textbook on this blog, and while we aren't there yet, we are working on a fairly comprehensive paper that tries to walk through the entire approach. Over the next few weeks, I'm going to post some excerpts from that work to a) have some focused information in them and b) get some feedback about whether it makes sense.
At this point, I assume you are sufficiently intrigued by the ecological approach that you would like to know how it works. One motivation for getting to that point is the fact that it allows mechanistic models of psychological phenomena; another may be a desire to be more embodied or enactive in your science. I'm not going to work on convincing you at the moment; I'm just going to try out the clearest explanations of all the key ideas I currently have, in order to educate and get feedback.
I will come back and update this post with links to all future posts on this topic, in order (you can also search with this tag). If by any chance you are finding this material useful, drop me a line and let me know.
Textbook Posts