
Sunday, 5 November 2017

A Test of Direct Learning (Michaels et al, 2008)

Direct learning (Jacobs & Michaels, 2007) is an ecological hypothesis about the process of perceptual learning. I describe the theory here, and evaluate it here. One of its current weaknesses is the lack of direct empirical support; the 2007 paper only reanalysed earlier studies from the new perspective. Michaels et al (2008) followed up with a specific test of the theory in the context of dynamic touch. The study was designed to provide data that could be plotted in an information space, which yields some qualitative hypotheses about how learning should proceed.

There are some minor devils in the detail, but overall this paper is a nice, concrete tutorial on how to develop information spaces, how to test them empirically, and how to evaluate the results that come out. The overall process would benefit from committing more fully to a mechanistic, real-parts criterion but otherwise shows real promise.


Figure 1. The not-at-all-dodgy-looking setup for dynamic touch

Dynamic Touch

The experimental task is to wield or swing rods that vary in length. You hold the objects out of view by a common handle and try to judge their length just by moving them. Participants reported their judgements by moving a marker along a ruler in front of them to where they thought the far end of the rod was (see Figure 1).

Information Space for Dynamic Touch

Previous research has shown that two variables can account for over 90% of the variance in judgements: the 1st and 3rd moments of inertia. Michaels et al combined these into a single higher-order relation with the equation
E(b,c) = b ln(I1) + c ln(I3)
Using logarithms allowed them to use multiple regression to estimate b and c, but otherwise there's no specific justification for proposing this particular higher-order information space.
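
If judged length is assumed to scale with E(b,c), a participant's location in the space can be recovered from a block of trials by ordinary multiple regression on the logged moments. Here is a minimal sketch of that step in Python; the rod moments and judgements are hypothetical, not the paper's data:

```python
import numpy as np

# Hypothetical 1st and 3rd moments of inertia for six rods
I1 = np.array([0.02, 0.05, 0.11, 0.21, 0.35, 0.55])
I3 = np.array([0.001, 0.002, 0.004, 0.007, 0.011, 0.017])

# Hypothetical length judgements (m) from one participant in one block
judged = np.array([0.45, 0.58, 0.71, 0.83, 0.95, 1.08])

# Regress judgements on ln(I1) and ln(I3); the two slopes estimate b and c,
# i.e. where this participant currently sits in the information space
X = np.column_stack([np.ones_like(I1), np.log(I1), np.log(I3)])
coefs, *_ = np.linalg.lstsq(X, judged, rcond=None)
intercept, b, c = coefs
print(f"b = {b:.2f}, c = {c:.2f}")
```

Repeating this per block then traces a trajectory through the space as learning proceeds.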

This function has a single peak at a specifying combination, and you can generate a gradient towards that peak to drive learning. Note again that, at this point, there is no independent evidence that either this information space or the information for learning (now referred to as convergence information) is actually embodied in the dynamic touch energy array.
Figure 2. The information space (a) and information-for-learning vector field for the hypothesised information variable E(b,c) = b ln(I1) + c ln(I3)
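
To make the logic of the figure concrete, here is one way such a surface and vector field could be computed. This is my construction rather than the paper's code: I assume, purely for illustration, that a perceiver at (b,c) reports E(b,c) directly as the length, and score each location by how close those reports come to the truth; the gradient of the resulting surface is then the hypothesised information for learning.

```python
import numpy as np

I1 = np.array([0.02, 0.05, 0.11, 0.21, 0.35, 0.55])    # hypothetical moments
I3 = np.array([0.001, 0.002, 0.004, 0.007, 0.011, 0.017])
length = np.array([0.4, 0.55, 0.7, 0.85, 1.0, 1.15])   # hypothetical lengths (m)

bs = np.linspace(-1.0, 1.0, 81)
cs = np.linspace(-1.0, 1.0, 81)
score = np.empty((len(bs), len(cs)))
for i, b in enumerate(bs):
    for j, c in enumerate(cs):
        E = b * np.log(I1) + c * np.log(I3)
        score[i, j] = -np.sqrt(np.mean((E - length) ** 2))  # negative RMSE

# The surface has a single peak (the specifying combination); its gradient
# field is the candidate information for learning sketched in Figure 2
gb, gc = np.gradient(score, bs, cs)
i, j = np.unravel_index(np.argmax(score), score.shape)
print(f"peak near b = {bs[i]:.2f}, c = {cs[j]:.2f}")
```
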
Pilot testing revealed people were already too good at the task, leaving little room for learning. Michaels et al then used a nice trick: they first gave false feedback based solely on the a=0 variable, to move people away from the peak, and then gave correct feedback. There were 7 blocks: a baseline (no feedback), 2 training blocks with a=0 feedback, a post-training/new-baseline block, 2 blocks of correct feedback, and a final post-training assessment.

Results

Figure 3. Individual data from 4 of the 9 participants. The solid lines are data from the a=0 blocks; the dotted lines are from the correct-feedback blocks. Note that performance in Block 4 is evaluated twice: once with respect to the a=0 variable as post-training, and once with respect to the correct answer as a new baseline.

On average, the information space analysis did a good job of accounting for how people's performance changed over blocks. People first began tuning into the a=0 variable as information, then switched towards the better variable under correct feedback. This double-step training is a nice trick for this kind of work.

The authors note a few conclusions:
  1. The chosen information space was appropriate, because performance never exceeded the maximum level of performance that the variable at a given location in the space could support. That's an important diagnostic check on your space (see the sketch after this list). 
  2. Different participants at different points in time seemed to be living in quite a wide range of places in the space, supporting treating this space as continuous. Recall, though, from my last post, that I suspect this might be an artefact of the analysis and that behaviour that looks continuous might reflect various levels of ability to discriminate and use discrete variables. 
  3. There was a high correlation between the trajectory of learning and the form of the convergence information (information-for-learning) vector field. The authors argue this supports the hypothesis that information for learning exists and is used as they propose.
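
The diagnostic in point 1 is easy to sketch (hypothetical numbers throughout): for each block, compute the best judgement-length correlation that the variable at the block's estimated (b,c) could possibly achieve, and check that observed performance never beats it.

```python
import numpy as np

I1 = np.array([0.02, 0.05, 0.11, 0.21, 0.35, 0.55])    # hypothetical moments
I3 = np.array([0.001, 0.002, 0.004, 0.007, 0.011, 0.017])
length = np.array([0.4, 0.55, 0.7, 0.85, 1.0, 1.15])   # hypothetical lengths

def ceiling(b, c):
    # Best possible judgement-length correlation for a perceiver at (b, c)
    E = b * np.log(I1) + c * np.log(I3)
    return abs(np.corrcoef(E, length)[0, 1])

# Hypothetical (b, c) estimates and observed accuracies for three blocks;
# the last is rigged to fail, which is what an inappropriate space looks like
blocks = [((0.10, 0.05), 0.62), ((0.45, -0.30), 0.88), ((1.00, -1.16), 0.40)]
for (b, c), observed in blocks:
    verdict = "ok" if observed <= ceiling(b, c) else "space is wrong"
    print(f"b={b:+.2f}, c={c:+.2f}: observed r={observed:.2f}, "
          f"ceiling r={ceiling(b, c):.2f} -> {verdict}")
```
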
With regards to point 3, one feature of the individual plots is that learning is not always in the direction of the vector field. Participant 1 starts off heading the wrong way (from Block 4 to 5) and then reverses; P2 hits the invariant in Block 6 but then regresses away; and P4 spends the two correct-feedback blocks (5 and 6) heading in the wrong direction before ending up in the right place in the no-feedback assessment Block 7.

Assessment

The Information Space

First, the authors combine the two most useful discrete variables identified in past work (the 1st and 3rd moments of inertia) into a single-dimensional, higher-order combination expressed by the equation above. This works out OK (the data behave themselves, as described above), but to my mind (and to Michaels et al) it still leaves open the question of whether this higher-order relation actually exists in the dynamic touch array. That remains an important question requiring independent evidence.

Convergence Information

Second, the individual learning trajectories did not always follow the convergence information vector field, although overall and on average the correlations between learning and the field were high. The individual level is what matters here, though; people don't perceive and act on average.

In the General Discussion, the authors do work through an analysis of a candidate convergence information variable, although they do not commit themselves to this being the actual information. Convergence information must be detectable, and it must specify how to change variable use. The latter means the information is defined over time (here, over multiple trials); you need at least two points to get a direction in the information space. The authors propose the relation between the error feedback and the mass of the rods being wielded as their candidate, and show it produces the right kind of vector field. But the correlation of this variable with performance is merely 0.396: significant, but hardly specification. They conclude their candidate is not the right variable.
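
The shape of that analysis can be sketched as follows; all numbers are hypothetical, and the point is only the logic: directions of learning come from differencing successive locations (hence needing at least two points), and a candidate is evaluated by how well its prescribed directions align with the observed ones.

```python
import numpy as np

# Hypothetical per-block (b, c) locations for one participant
path = np.array([[0.10, 0.05],
                 [0.25, 0.02],
                 [0.35, 0.10],
                 [0.42, 0.18],
                 [0.45, 0.22]])

# Hypothetical directions the candidate variable prescribes at each block
candidate = np.array([[0.9, 0.1],
                      [0.7, 0.4],
                      [0.5, 0.6],
                      [0.3, 0.5]])

# Observed learning steps: you need at least two points to get a direction
steps = np.diff(path, axis=0)

def unit(v):
    return v / np.linalg.norm(v, axis=1, keepdims=True)

# Mean cosine between observed and prescribed directions; specification
# would demand values near 1, not the paper's 0.396-flavoured correlations
alignment = np.mean(np.sum(unit(steps) * unit(candidate), axis=1))
print(f"mean alignment = {alignment:.2f}")
```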

While the tutorial on proposing and evaluating a candidate convergence variable is good, the fact that their sensible first swing doesn't work is interesting. It shows that even though the error-mass relation produces an appropriate vector field, there's no reason to think this relation is specified, i.e. it's not obviously a real part. Because this analysis is not moored to a mechanistic mindset, it can easily produce false starts. That said, they show how to check for these, and that is excellent.

Other Learning Processes

They then discuss alternative learning processes: trial and error, and gradient descent. I still suspect that something like this is going on, and that they haven't ruled it out yet. This would be the idea that, instead of learning being driven by a continuously specified convergence information vector field, errors drive people to bounce around the region of the information space at random, trying to figure out which direction reduces the error best. This didn't best explain performance in the current study, although that analysis compared the gradient error to their candidate convergence information variable, which didn't seem to be working anyway.
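
A toy version of the trial-and-error alternative makes the contrast clear: this learner detects no vector field at all, only a scalar error, and keeps whatever random perturbation reduces it. Again my construction, with the same hypothetical rods as the earlier sketches.

```python
import numpy as np

rng = np.random.default_rng(0)
I1 = np.array([0.02, 0.05, 0.11, 0.21, 0.35, 0.55])    # hypothetical moments
I3 = np.array([0.001, 0.002, 0.004, 0.007, 0.011, 0.017])
length = np.array([0.4, 0.55, 0.7, 0.85, 1.0, 1.15])   # hypothetical lengths

def error(b, c):
    # Scalar feedback: how wrong are the reports of a perceiver at (b, c)?
    E = b * np.log(I1) + c * np.log(I3)
    return np.sqrt(np.mean((E - length) ** 2))

b, c = 0.0, 0.0                      # start away from the specifying combination
for trial in range(200):
    nb, nc = b + rng.normal(0, 0.05), c + rng.normal(0, 0.05)
    if error(nb, nc) < error(b, c):  # keep only error-reducing perturbations
        b, c = nb, nc
print(f"ended near b = {b:.2f}, c = {c:.2f}, error = {error(b, c):.3f}")
```

Both processes end up near the peak; distinguishing them needs the shape of the trajectory, which is why the direction-of-learning analyses above matter.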

They also note that their account might easily be recast as a cue-combination story, which reinforces my concern that their information spaces are just this. Cue combination lets you combine the variables any way you like, because the combination happens in the head; for ecological types, the higher-order relation between variables must be specified in an array in order to be used.

Some Final Thoughts

I still think that the basic approach is a good first swing. I am not yet convinced that they are building their information spaces in the most mechanistic, real-parts way, but there's nothing in their process to stop someone from doing this. I think it would be a good addition. 

There were some slippery moments in this paper. At one point they seem to admit that a cue-combination reframing of their analysis is possible, yet they explicitly say theirs is not a cue-combination theory. They end up ruling out the error-mass relation as convergence information but then call identifying it one of the strengths of the paper, and they don't seem to notice that they have no obvious way to implement their better variable (the distance from the current to the ideal information variable).

But overall the authors are careful and rigorous, and treat this as the first swing that it is. They showcase multiple ways to analyse and evaluate data within this framework, which, used together, can tell you a lot about whether your candidate variables are explaining anything. They made a solid attempt at handling most of my concerns, and so I think this framework remains promising enough to continue exploring.

References
Michaels, C. F., Arzamarski, R., Isenhower, R. W., & Jacobs, D. M. (2008). Direct learning in dynamic touch. Journal of Experimental Psychology: Human Perception and Performance, 34(4), 944.

5 comments:

  1. The Direct Learning hypothesis is almost certainly wrong.

    The problem that ought to be most obvious to you is that it is based on the same metaphysical assumptions as the Representationalism that you reject.

    I won't say more at the moment, because you usually disagree with my occasional posts here -- perhaps because I have rejected those metaphysical assumptions.

    Thanks for reviewing this work and providing your assessment. Jeff Wagman and I had some fun applying these ideas to the study of auditory perception of object length. We thought the Michaels et al. explanation of the 1-d information space was sufficient for constructing information spaces and running the analyses ourselves with our own stimulus sets and data. I am in agreement with you that these attempts are good “first swings” and more work and attention is required. I also agree that perhaps the apparent continuous motion of learners through information spaces might be an artifact of the analysis (as communicated in your last post), but of course this ‘movement’ is typically discretized by block in the experimental design and subsequent analytic protocol. In my opinion, this is a limitation of the current analytic protocol, but I think progress could be made if this was innovated. What is your assessment/idea for building information spaces in a more ‘mechanistic, real-parts way’? From your last post, it looked like you were skeptical of implementing (and innovating) an information space analysis but here it looks like you might have thoughts on how to improve it.

    -Drew Abney

    Replies
    1. Hi Drew

      The mechanistic approach is primarily about not modelling anything until you have independent empirical evidence that the components in your model are, in fact, players in the mechanism under study. Right now the direct learning work is effectively fitting data to a hypothetical description of a continuous space for which there is no independent evidence. Like I say, you have to start somewhere, but that functional analysis won't get you to mechanism.

      What I do when I want to identify information being used is a perturbation study (post on the general idea, my current specific empirical focus). I run those studies to identify variables and map their characteristics, and only then would I try to build an information space out of them.

      Now, my current work digging into information for coordination is revealing some complicated findings, and I have a lot to do yet to align that work with the direct learning analyses. But long story short, Michaels & Jacobs dive in with some information space and fit data, which is a functional analysis. They should do a ton of work first to identify components, then build the information space out of those and test it directly. It is bloody hard though!

  3. I had very mixed feelings about the Direct Learning thing for a while. Then, at a meeting, I saw Michaels give a talk about it. (2011 in Brazil maybe?) Something clicked, and I have loved the idea since.

    That is not to say that I am in love with any particular experimental paradigm.

    Let's say you believe there is information in the environment that specifies opportunities for behavior, and that organisms can attune to such information over developmental and evolutionary time scales. It follows that in many situations there should exist (in the environment) information that specifies how to correct mistakes in attunement. That is a level more complex, and so it is harder to think about, and harder to set up good experiments about, but it is a solid proposal.

    I suspect that if Ecological Psychology had a much wider net of successes on the first order (perception of affordances), it would be easier to find ideal opportunities for exploring that second order (perception of how to correct affordance-perception errors). As it is, the rod-wielding work is one of the few super-well-explored paradigms, and I think that is the main justification for using it in the initial test. The results, while not perfect, seem good enough to justify much future work.

    Replies
    1. I basically agree, and I'm on board that it's a hypothesis worth exploring. But it does make some strong claims about information spaces that need a lot more work to justify. That's OK though; we had to start somewhere, and they were, at the very least, crystal clear at all times about what they were doing. Full points there!
