Showing posts with label direct learning.

Tuesday, 18 February 2020

My PhD student Daniel Leach has just had his first paper accepted (preregistration, preprint, data & analysis files available here on the OSF), so it's way past time I blogged this cool work. Danny and I have been developing methods, analyses and a theoretical framework to study learning and transfer of learning, and we have some interesting results (plus MANY more questions :) This post is about the first experiment just published; there's more to come!
We use coordinated rhythmic movement as our task; I've blogged about this task in many posts and used this research programme as an example of theoretically driven, mechanistic modelling science. The basic form of the task is described here, the basic pattern of behavioural data is described here, and the model that implements our perception-action approach is described here. The main thing to know is that there are only a couple of rhythmic coordinations that are easy without training (0° and 180°), but other coordinations can be learned with feedback-driven training. This gives us a simple model task that can serve as a window on perception-action mechanisms of skilled action and learning.
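For readers who want the measure made concrete: relative phase is just the difference between the two oscillators' phases over time. Here is a minimal Python sketch of one common way to compute it from two position time series; the normalisation and phase conventions vary across labs, so read this as illustrative rather than our lab's exact pipeline.

```python
import numpy as np

def oscillator_phase(x, dt):
    """Phase of one oscillator from its position series, via atan2 of
    normalised position and velocity (one common convention)."""
    v = np.gradient(x, dt)
    return np.arctan2(x / np.max(np.abs(x)), v / np.max(np.abs(v)))

def relative_phase_deg(x1, x2, dt):
    """Continuous relative phase between two oscillators, wrapped to (-180, 180]."""
    phi = oscillator_phase(x1, dt) - oscillator_phase(x2, dt)
    return np.degrees(np.arctan2(np.sin(phi), np.cos(phi)))

# Example: two 1 Hz oscillators moving at 90 degrees relative phase
dt = 0.01
t = np.arange(0, 10, dt)
x1 = np.sin(2 * np.pi * t)
x2 = np.sin(2 * np.pi * t - np.pi / 2)
print(f"{np.mean(relative_phase_deg(x1, x2, dt)):.1f}")  # ~90.0
```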
Thursday, 1 November 2018
Where is the Haptic Information? (A Purple Peril)
Haptics (or proprioception) is the sensory modality built into our bodies; it provides constant information about the state of the body and things it is in mechanical contact with, such as tools. Many ecological psychologists (myself included) have investigated haptic perception and its role in the control of action, but unlike the optic array, we have basically zero work identifying what the relevant information variables look like.
I first investigated haptic perception in the context of coordinated rhythmic movements (Wilson, Bingham & Craig, 2003). Geoff had run studies showing that visual judgements of different relative phases varied in stability in the same way that the production of those relative phases does. This suggested that the movement phenomena were being caused by the way relative phase is perceived. This was vision, however, and the movement phenomena obviously involve motion of the body and the haptic system. This involvement was typically explained in terms of muscle homology and neural crosstalk effects. Our study had people track manipulanda that moved up and down at one of three mean relative phases with various levels of phase variability added, and had them make judgements of that variability (replicating the visual studies). We found haptic perception of relative phase, as measured by those judgements, behaved just like visual perception of relative phase - we inferred that the information, the relative direction of motion, can be detected by both systems and has the same effects.
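To make that stimulus design concrete, here is a toy Python sketch of displays like these: two oscillators at a mean relative phase, with smoothed Gaussian phase noise added on top. All parameter values here are made up for illustration; the actual levels and methods in Wilson, Bingham & Craig (2003) differ.

```python
import numpy as np

rng = np.random.default_rng(1)

def noisy_partner(t, freq_hz, mean_phi_deg, phase_sd_deg):
    """Second oscillator at a mean relative phase to sin(2*pi*f*t), with
    smoothed Gaussian phase noise added (smoothing shrinks the nominal SD;
    this is illustrative, not a calibrated stimulus)."""
    noise = rng.normal(0.0, np.radians(phase_sd_deg), t.size)
    kernel = np.ones(50) / 50.0  # crude low-pass so the jitter looks like movement
    noise = np.convolve(noise, kernel, mode="same")
    return np.sin(2 * np.pi * freq_hz * t - np.radians(mean_phi_deg) + noise)

dt = 0.01
t = np.arange(0, 20, dt)
reference = np.sin(2 * np.pi * 1.0 * t)  # the oscillator judged against
partner = noisy_partner(t, 1.0, 90, 15)  # 90 deg mean phase, 15 deg nominal phase SD
```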
I am moving back into the haptic information world for two related reasons.
First, I want to replace the muscle homology/neural crosstalk stories with a haptic perception story. The effects these theories account for are very large and reliable, and Geoff's perception-action model currently only applies to visual information. Specifically, muscle homology applies to relative phase defined in an egocentric (body centred) frame of reference, while Geoff's model applies to relative phase defined in an allocentric (external) frame of reference. Relative phase is clearly detected in both frames of reference; when they are pitted against one another experimentally, both matter and the egocentric effects dominate (e.g. Pickavance, Azmoodah & Wilson, 2018).
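The difference between the frames is easy to state concretely. For two hands oscillating left and right, the egocentric frame mirrors one hand about the body midline, which shifts its phase by 180°; the very same movement can therefore be 180° in one frame and 0° in the other. A toy sketch:

```python
def wrap_deg(phi):
    """Wrap a phase difference into [0, 360)."""
    return phi % 360.0

# Phases of the left and right hands in EXTERNAL (allocentric) coordinates:
# here they are moving in opposite spatial directions.
theta_left, theta_right = 0.0, 180.0

allo_rp = wrap_deg(theta_left - theta_right)           # 180: spatially opposite

# Egocentric frame: mirror the left hand about the midline, i.e. flip its
# position and velocity, which shifts its phase by 180 degrees.
ego_rp = wrap_deg((theta_left + 180.0) - theta_right)  # 0: homologous muscles co-active

print(allo_rp, ego_rp)  # 180.0 0.0 -- mirror-symmetric movement, one coordination, two descriptions
```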
Second, I have become interested in individual variation in the variables used to perceive relative phase. Based on his data, Geoff's model predicts relative phase is perceived via the information variable relative direction of motion, the detection of which is modified by the relative speed of the oscillators. In Wilson & Bingham (2008; blog post), we showed this was true in 7 out of 10 untrained participants judging 0° and 180°. The other three became unable to judge these phases when we perturbed another candidate variable, relative position. This experiment also showed that people trained to perceive 90° had improved because they had switched to this variable, but we were not expecting people at the other relative phases to be using this variable. I'm finally getting back into experiments probing the prevalence of this individual difference in visual information use and the consequences for perception-action stability (briefly: there's a lot of variation and it matters!). As part of the above project, I want to do the same kinds of studies on haptic perception too.
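For concreteness, here is a sketch of the two candidate variables computed from raw kinematics; the operationalisations are my gloss for illustration, not lifted from the papers. Relative direction of motion is whether the oscillators are currently moving the same way; relative position is the difference between their normalised positions. It also shows why 0° and 180° are special: the direction signal is constant there and fluctuates everywhere else.

```python
import numpy as np

def candidate_variables(x1, x2, dt):
    """Two candidate information variables for relative phase (my operationalisation)."""
    v1, v2 = np.gradient(x1, dt), np.gradient(x2, dt)
    rel_direction = np.sign(v1 * v2)  # +1 when moving the same way, -1 when opposite
    rel_position = x1 / np.max(np.abs(x1)) - x2 / np.max(np.abs(x2))
    return rel_direction, rel_position

dt = 0.01
t = np.arange(0, 10, dt)
for phi_deg in (0, 90, 180):
    x1 = np.sin(2 * np.pi * t)
    x2 = np.sin(2 * np.pi * t - np.radians(phi_deg))
    d, p = candidate_variables(x1, x2, dt)
    print(phi_deg, round(np.mean(d), 2), round(np.std(p), 2))
# 0:   direction constant (+1), positions coincide
# 90:  direction flips sign within every cycle, so its mean is ~0
# 180: direction constant (-1), positions maximally separated
```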
My problem is that there is essentially nothing in the literature on the nature of haptic information variables. This Peril lays out my current hypothesis about where to look; please, dear God, come help me!
Sunday, 5 November 2017
A Test of Direct Learning (Michaels et al, 2008)
Direct learning (Jacobs & Michaels, 2007) is an ecological hypothesis about the process of perceptual learning. I describe the theory here, and evaluate it here. One of the framework's current weaknesses is that it has little direct empirical support; the 2007 paper only reanalysed earlier studies from the new perspective. Michaels et al (2008) followed up with a specific test of the theory in the context of dynamic touch. The study was designed to provide data that could be plotted in an information space, which provides some qualitative hypotheses about how learning should proceed.
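To give a feel for what an information space is, here is a minimal Python sketch with everything made up: two candidate variables, a ground-truth property, and a one-parameter family of combinations. This is the logic of the approach, not the actual variables or space from Michaels et al (2008). Each value of the parameter e picks out one variable in the space, and the 'usefulness' of each variable is how well it predicts the to-be-perceived property.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two hypothetical candidate variables per object (stand-ins, not the
# mechanical moments Michaels et al actually used).
m1, m2 = rng.uniform(0, 1, 200), rng.uniform(0, 1, 200)
to_be_perceived = 0.3 * m1 + 0.7 * m2  # made-up ground truth

def variable(e):
    """One point in a 1D information space: a combination of the candidates."""
    return (1 - e) * m1 + e * m2

es = np.linspace(0, 1, 101)
usefulness = [np.corrcoef(variable(e), to_be_perceived)[0, 1] for e in es]
print(f"most useful variable: e = {es[np.argmax(usefulness)]:.2f}")  # ~0.70
# Direct learning predicts feedback moves learners through this space
# toward that maximum; the information-space plots test that prediction.
```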
There are some minor devils in the details, but overall this paper is a nice, concrete tutorial on how to develop information spaces, how to test them empirically, and how to evaluate the results that come out. The overall process would benefit from committing more fully to a mechanistic, real-parts criterion, but otherwise it shows real promise.
Labels: direct learning, dynamic touch, Jacobs, learning, Michaels
Friday, 3 November 2017
Evaluating 'Direct Learning'
In my previous post I laid out the direct learning framework developed by Jacobs & Michaels (2007). In this post, I'm going to evaluate the central claims and assumptions with a mechanistic eye. Specifically, my question is mainly going to be 'what are the real parts or processes that are implementing that idea?'.
This is a spectacularly complicated topic and I applaud Jacobs & Michaels for their gumption in tackling it and the clarity with which they went after it. I also respect the ecological rigour they have applied as they try to find a way to measure, analyse and drive learning in terms of information, and not loans on intelligence. It is way past time for ecological psychology to tackle the process of learning head on. I do think there are problems in the specific implementation they propose, and I'll spend some time here identifying those problems. I am not identifying these to kill off the idea, though; read this as me just at the stage of my thinking where I am identifying what I think I need to do to improve this framework and use it in my own science.
Thursday, 2 November 2017
Direct Learning (Jacobs & Michaels, 2007)
The ecological hypothesis is that we perceive properties of the environment and ourselves using information variables that specify those properties. We have to learn to use these variables; we have to learn to detect them, and then we have to learn what dynamical properties they specify.
Learning to detect variables takes time, so our perceptual systems will only be able to become sensitive to variables that persist for long enough. The only variables that are sufficiently stable are those that can remain invariant over a transformation, and the only variables that can do this are higher order relations between simpler properties. We therefore don't learn to use the simpler properties, we learn to use the relations themselves, and these are what we call ecological information variables. (Sabrina discusses this idea in this post, where she explains why these information variables are not hidden in noise and why the noise doesn't have to be actively filtered out.)
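A stock example (mine, not necessarily one Jacobs & Michaels use) of a higher-order relation surviving a transformation that changes its components: for an approaching object, image size depends heavily on how big the object happens to be, but the ratio of image size to its rate of expansion (tau) is the same for any object size at a given time-to-contact.

```python
# Two approaching objects of very different physical sizes, same distance and speed.
distance_m, speed_ms = 3.0, 2.0
for size_m in (0.2, 1.0):
    theta = size_m / distance_m                      # small-angle image size (rad)
    theta_dot = size_m * speed_ms / distance_m ** 2  # image expansion rate (rad/s)
    tau = theta / theta_dot                          # higher-order relation
    print(f"size={size_m}m: theta={theta:.3f} rad, tau={tau:.1f}s")
# theta differs fivefold across the two objects; tau is 1.5 s for both,
# because it depends only on distance / speed (the time-to-contact).
```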
Detecting variables is not enough, though. You then have to learn what dynamical property that kinematic variable is specifying. This is best done via action; you try to coordinate and control an action using some variable and then adapt or not as a function of how well that action works out.
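As a toy version of that loop (all numbers invented, reusing the made-up two-variable space from the earlier sketch): act on whatever variable you are currently using, score how well the action worked out, and nudge your position in the space toward variables that work better.

```python
import numpy as np

rng = np.random.default_rng(2)
m1, m2 = rng.uniform(0, 1, 500), rng.uniform(0, 1, 500)
outcome_requires = 0.2 * m1 + 0.8 * m2  # made-up dynamics the action must respect

e, lr = 0.0, 0.5                        # start on mostly-the-wrong variable
for trial in range(100):
    i = rng.integers(500)
    used = (1 - e) * m1[i] + e * m2[i]      # act on the currently-used variable
    error = outcome_requires[i] - used      # how well did the action work out?
    e += lr * error * (m2[i] - m1[i])       # descend the squared-error gradient
    e = float(np.clip(e, 0.0, 1.0))
print(f"e after practice: {e:.2f}")         # converges to ~0.80
```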
While a lot of us ecological types study learning, there was not, until recently, a more general ecological framework for talking about learning. Jacobs & Michaels (2007) proposed such a framework, and called it direct learning (go listen to this podcast by Rob Gray too). We have just had a fairly intense lab meeting about this paper, and this is an attempt to note all the things we figured out as we went. In this post I will summarise the key elements, and then in a follow-up I will evaluate those elements as I try to apply this framework to some recent work I am doing on the perception of coordinated rhythmic movements.