
Friday, 13 April 2012

Patient DF uses haptics, not intact visual perception-for-action, to reach for objects

Before functional neuroimaging techniques like PET and fMRI became common, what we knew about which parts of the brain did what came from neuropsychology. This is the study of patients with specific injuries to the brain, and the basic logic of the field is that if you have a patient with a lesion in area A who can't do task 1, then area A is involved in performing task 1. It gets a little more complicated than this, as you search for double dissociations, etc, but this is essentially it.

A surprising amount of what we think we know about the brain comes from neuropsychology; famous case studies such as HM have informed theories of memory so that they include short and long term storage, which are separable, and so on. These case studies can have a profound effect on research; my favourite story, though, was about a memory researcher who had a skiing accident and temporarily developed retrograde amnesia - he couldn't remember anything except that there was this guy in Connecticut (HM) who couldn't remember things either!

I always enjoyed classes in neuropsychology; the case studies are always fascinating. But they are deeply limited in what they can actually tell us about the brain. First, they are typically single-patient case studies, which restricts how general the conclusions are. Second, they are data from damaged brains; the fairly linear assumption that some localised function has simply been subtracted out is not true, and the damage will have had complex effects on distributed functional networks. Third, the damage is never straightforward, because these cases almost all come from accidents or strokes (HM's surgery being a rare example where more detail is known). This has not stopped the field from being very excited by these cases, though, and from basing a lot of theory on these patterns of deficits.

In movement research, the most famous neuropsychology case study is Patient DF. She suffered bilateral damage along the ventral stream of visual processing (James et al, 2003). The effect was visual form agnosia: she can control her actions with respect to objects, but cannot describe or recognise those objects verbally. Crucially, her accident did not damage her parietal lobe; specifically, the dorsal stream of visual processing was left intact. These two streams are well-defined anatomical pathways leading out of primary visual cortex, and were first described by Ungerleider & Mishkin (1982). DF's pattern of deficits led Mel Goodale and David Milner (Goodale & Milner, 1992) to suggest functional roles for these streams. The ventral stream, they suggested, was for perception - things like object and scene recognition. The dorsal stream, in contrast, was for perception-for-action, and used visual information for the online control of action. This perception-action hypothesis has been hugely dominant in the field, and the theory rests heavily on DF's shoulders.

Recently, Thomas Schenk (2012a) published data which he claims shows that DF's visually guided reaching is not normal if she doesn't have access to haptic feedback about the object. His data suggests that the only reason she succeeds at reaching while failing judgment tasks is that haptic information is normally available only in the former case. If correct, this is actually quite a shot across the bow of the perception vs perception-for-action work; naturally, Goodale and Milner don't buy it, and have published a reply, to which Schenk has then replied.

An invitation
I like seeing these arguments happen in the literature; but to be honest, the time scale is too slow. Schenk publishes, then Milner et al get to reply, and Schenk gets the right of reply to that. They may or may not iterate again and it's always left as 'we agree to disagree'. But these critiques have answers, and I think a blog comment feed might be the right place to work through the various cycles of suggestions and rebuttals until the obviously wrong things have been weeded out. It would also provide a place for other interested parties to weigh in. So if Schenk, Milner and Goodale (and anyone else!) feel like using the comments for this post, or a post made for the purpose, to bang around ideas until an obvious experiment or analysis pops out, please feel free!

Schenk (2012a): Patient DF and haptic feedback
The classic result from DF that inspired the Goodale and Milner account has two parts (a dissociation). When DF is asked to reach for an object, she does so easily, showing appropriate pre-shaping of her hand that scales correctly with the size of the object. This suggests she has intact visual perception-for-action. However, if you ask her to judge the size of the object without reaching for it, she cannot do it; she has selectively impaired visual perception. Schenk ran the following experiments on DF to demonstrate that her unimpaired reaching comes not from intact visual perception-for-action, but rather from haptic feedback from real objects.

'Perception': these are the tasks DF typically fails due to her visual form agnosia, and she fails them here too.
  1. Size discrimination: Choose the larger of two objects
  2. Manual estimation: Judge the size of an object by shaping your hand correctly
'Perception-for-action': Schenk used a mirror placed at 45° to produce virtual objects (mirrors preserve all the optics of the thing they reflect, and people see what those optics specify). You look at the reflection and reach to the visually specified location; there can either be an object at that location or not. There were three object sizes, presented in random order. DF is typically able to succeed in grasping tasks.
  3. Standard grasping: Reach to grasp an object seen in the mirror, and there is actually an object there (vision and haptic information match)
  4. Grasping without haptic feedback: Reach for the reflection, but there is no actual object there.
  5. Grasping with intermittent haptic feedback: An object was present on half the trials; these were scattered randomly throughout the session, and a light cued participants when the object would be present.
  6. Grasping with dissociated positions: Participants saw an object in the middle location, were asked to reach for an object at the far position, and there was a real object there.
The data revealed that DF's performance was only normal when she had haptic feedback in the session (tasks 3 & 5). Whenever she had no haptic information about the object, her reaching to grasp was abnormal (Figure 1).
Figure 1. Grip performance for controls (open circles) and DF (red diamonds).
The fact that intermittent haptic feedback helped DF produce correct grip scaling in Task 5, even on trials with no object, reflects the fact that haptic calibration of visually guided reaching has its own dynamic: it lasts for some time and only requires intermittent topping up (Bingham, Coats & Mon-Williams, 2007). These data suggest that DF does not have preserved vision-for-action; rather, she has preserved haptic perception and is relying on this to scale her hand to the object size.
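
To make the idea of a calibration dynamic concrete, here is a toy simulation. This is my own sketch, not the Bingham et al model: the object sizes, decay rate, default grip and the blending rule are all invented for illustration. A grip-scaling calibration fades a little on every trial and is fully topped up whenever the hand actually contacts an object; with haptic feedback on only half the trials, grip scaling stays close to accurate, while with no feedback at all the grip drifts towards an unscaled default opening.

import random

def mean_grip_error(n_trials, p_haptic, decay=0.15, seed=1):
    # Toy model: a visual-to-grip calibration that fades a little every trial
    # and is fully restored whenever haptic feedback (object contact) occurs.
    random.seed(seed)
    sizes = [3.0, 4.5, 6.0]   # three object sizes (invented units), randomised across trials
    default_grip = 4.5        # what the hand does when uncalibrated: a fixed, unscaled opening
    calibration = 1.0         # 1.0 = fully calibrated, 0.0 = no calibration left
    errors = []
    for _ in range(n_trials):
        size = random.choice(sizes)
        # the grip is a blend of a properly scaled opening and the uncalibrated default
        grip = calibration * size + (1 - calibration) * default_grip
        errors.append(abs(grip - size))
        if random.random() < p_haptic:
            calibration = 1.0             # contact with a real object recalibrates
        else:
            calibration *= (1 - decay)    # otherwise the calibration fades a little
    return sum(errors) / len(errors)

for p in (1.0, 0.5, 0.0):   # haptic feedback on every trial, on half the trials, or never
    print(f"feedback on {p:.0%} of trials: mean grip error = {mean_grip_error(60, p):.2f}")

The only point of the sketch is that the benefit of haptic feedback does not require feedback on every trial; intermittent contact keeps the mapping topped up, which is why intermittent feedback could support normal-looking grip scaling even on the no-object trials.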

Milner, Ganel & Goodale (2012)
Unsurprisingly, Milner et al (2012) do not agree that these data cast doubt on the perception-action hypothesis about the function of the dorsal and ventral streams. They make the following criticisms:
  1. They suggest that "so-called 'haptic feedback'" (to quote the paper) from trial n could only inform a reach on trial n+1 if the objects were the same size in both trials; object size was randomised across trials, however.
  2. They then claim that Schenk's interpretation means he thinks DF's reaches are prepared on the basis of previous haptic, rather than current visual, information. Therefore, they suggest, Schenk must make 'the inescapable prediction' that a reach on trial n+1 should be appropriate for what happened on trial n, regardless of what is presented on trial n+1. They allow that there may be some 'minor intrusion' of haptic information from previous trials.
  3. They suggest that the cases where DF failed are, in effect, pantomimed reaches. It's as if she has to mime a reach when there's no object, and she is poor at pantomimed actions due to her visual form agnosia (Goodale et al, 1994).

Schenk (2012b)
Again unsurprisingly, Schenk (2012b) does not agree with Milner et al's evaluation.
  1. He claims that the Milner et al critique assumes that prehension requires the visual computation of an object's size. He then cites recent work by Smeets & Brenner (1999) who claimed to show that prehension involves the independent targeting of the thumb and forefinger, and thus you don't need object size.
  2. He then suggests that DF is generally able to reach successfully because she has access to the necessary egocentric information (in hand-centred coordinates) about the location of object edges. This information requires regular calibration (Bingham et al, 2007) to remain accurate.
  3. He therefore predicts that if DF has egocentric information about the object, and this information has been calibrated recently, she can reach successfully, otherwise she fails. His 2012a data then support this pattern. 
  4. Regarding the pantomime problem: Schenk tested this using the no-object trials in Task 5. DF knew there would be no object on these trials (the light cue) but still produced normal reaches, because of the calibration on the other trials.

Analysis
There is a lot that is weird about the replies. Milner et al make some odd claims, and Schenk goes to strange places in his defence. Let's address those first.

1. "So-called 'haptic feedback'"
Milner et al want to keep claiming that DF reaches on the basis of current visual information, and not on the basis of previous haptic information. But there's a problem for them - this is the claim Schenk's data actually refutes! So they make an odd move, and simply claim that earlier haptic information does not affect reaches, and that even if it could, it won't here because the size changes from trial to trial.

However, Coats, Bingham & Mon-Williams (2008) have demonstrated (using a mirror rig similar to Schenk's) that if you systematically change the size of the grasped object while leaving the visual object the same size, people happily recalibrate their reach actions and change their grip apertures. Bingham et al (2007) have also shown that even occasional calibration allows stable reach behaviour to persist; calibration lasts some time. So even when the visual size remains unchanged, people's grip behaviour reflects the haptic calibration of the visual perception of size, and if this calibration changes, so does the grip.

Milner et al's second point - that haptic feedback can't help because the object size changes randomly - is actually addressed by Schenk's control data, which shows that neurologically intact people can happily scale their grips appropriately under these conditions (albeit slightly more noisily).
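
To see the distinction being argued over, here is a toy comparison of the two readings of 'haptic feedback'; again this is my own sketch with invented numbers, not Schenk's data or analysis. If the previous trial's haptic size were simply carried over as the current grip plan (the reading Milner et al push back against), grip scaling should be poor when sizes are randomised; if haptic contact instead just keeps a visually scaled grip calibrated (which intact controls, with their unimpaired size perception, can do), grip scaling stays good.

import random

random.seed(2)
sizes = [3.0, 4.5, 6.0]                              # three object sizes (invented units)
trials = [random.choice(sizes) for _ in range(200)]  # sizes randomised across trials

# Reading 1: plan the grip by carrying over the previous trial's haptic size
grips_carryover = [trials[i - 1] for i in range(1, len(trials))]

# Reading 2: scale the grip to the current visual size, using a gain that
# haptic contact keeps calibrated (modelled here as calibrated, with a little noise)
grips_calibrated = [size * random.gauss(1.0, 0.05) for size in trials[1:]]

def mean_abs_error(grips):
    return sum(abs(g - size) for g, size in zip(grips, trials[1:])) / len(grips)

print(f"carry over previous haptic size:      mean error = {mean_abs_error(grips_carryover):.2f}")
print(f"haptically calibrated visual scaling: mean error = {mean_abs_error(grips_calibrated):.2f}")

Only the carry-over reading makes the 'inescapable prediction' that a reach on trial n+1 should be appropriate for the object on trial n; calibration does not, which is what the control data shows.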

2. Reaching and the need for visual size 
Schenk centres his reply on the idea that Milner et al assume you need to compute (or perceive) the size of objects in order to scale your grasp. He then cites Smeets & Brenner (1999) who claim that instead, you simply control your thumb and forefinger independently and bring them into alignment with the edges of the object.

The problem here is that Smeets & Brenner's work is highly controversial, and in fact more recent work from Mon-Williams & Bingham (2011) tested the predictions of this account in great detail and found no support for the claim. Instead, they showed that the unit of control is an opposition axis (Iberall, Bingham & Arbib, 1986). This is the space between the thumb and forefinger, and Mon-Williams & Bingham (2011) demonstrated that prehension is about aligning this space with the object. You do still therefore need to perceive object size, specifically the maximum object extent. I'll blog this paper in more detail sometime; it is a master class in affordance research.

Conclusions
I think Schenk had it basically right in the first paper; the explanation for his data is that in tasks 3 and 5, DF has sufficient access to haptic information about the object's size to allow her to bypass her visual perceptual deficit. She can therefore successfully reach to grasp. In all other tasks, she can't get round the problem and she fails. This suggests that her visual deficit is not simply restricted to 'perception'; the visual system involves both anatomical streams working in concert, and these are not functionally independent of each other. What Schenk needs to do is treat haptic information as perceptual information for size in its own right, not simply feedback or an 'egocentric cue'. DF has unimpaired access to this information, and when it's available, she can reach-to-grasp.

References
Bingham, G. P., Coats, R., & Mon-Williams, M. (2007). Natural prehension in trials without haptic feedback but only when calibration is allowed. Neuropsychologia, 45, 288–294.

Coats, R., Bingham, G. P., & Mon-Williams, M. (2008). Calibrating grasp size and reach distance: Interactions reveal integral organization in reaching-to-grasp movements. Experimental Brain Research, 189, 211–220.

Goodale, M. A., Jakobson, L. S., & Keillor, J. M. (1994). Differences in the visual control of pantomimed and natural grasping movements. Neuropsychologia, 32(1), 1159–1178.

Goodale, M. A., & Milner, A. D. (1992). Separate visual pathways for perception and action. Trends in Neurosciences, 15(1), 20–25.

Iberall, T., Bingham, G. P., & Arbib, M. A. (1986). Opposition space as a structuring concept for the analysis of skilled hand movements. In: Heuer H, From C (eds), Experimental Brain Research Series 15 (pp. 158–173). Berlin: Springer.

James, T. W., Culham, J., Humphrey, G. K., Milner, A. D., & Goodale, M. A. (2003). Ventral occipital lesions impair object recognition but not object-directed grasping: an fMRI study. Brain, 126, 2463–2475.

Milner, A. D., Ganel, T., & Goodale, M. A. (2012). Does grasping in patient D.F. depend on vision? Trends in Cognitive Sciences. DOI: 10.1016/j.tics.2012.03.004

Mon-Williams, M., & Bingham, G. P. (2011). Discovering affordances that determine the spatial structure of reach-to-grasp movements. Experimental Brain Research, 211(1), 145–160.

Schenk, T. (2012a). No dissociation between perception and action in patient DF when haptic feedback is withdrawn. Journal of Neuroscience, 32(6), 2013–2017. DOI: 10.1523/JNEUROSCI.3413-11.2012

Schenk, T. (2012b). Response to Milner et al.: Grasping uses vision and haptic feedback. Trends in Cognitive Sciences. DOI: 10.1016/j.tics.2012.03.006

Smeets, J. B. J., & Brenner, E. (1999). A new view on grasping. Motor Control, 3, 237–271.

Ungerleider, L. G., & Mishkin, M. (1982). Two cortical visual systems. In Analysis of Visual Behavior (Ingle DJ, Goodale MA, Mansfield RJ, eds). Cambridge, MA: MIT Press.

17 comments:

  1. "DF has sufficient access to haptic information about the object's size to allow her to bypass her visual perceptual deficit"

    A colleague and I will hopefully have a J Neurosci journal club article about this soon - I will keep the comments section updated on the progress of this, since we have some new and more detailed figures from Schenk. But first I'll try to flesh out the logic from the Milner et al. reply..

    What this boils down to is: how does DF know what she should do on a given trial? Sure, she has built up useful calibrations over the previous trials when she has haptics. But since peak grip aperture occurs before haptic feedback on trial n, a direct mechanism for using haptic feedback to distinguish between the objects would only work on trial n+1. This doesn't work with randomly ordered trials - the haptics of the previous trial do not predict the current trial.

    In short, in all of Schenk's tasks where she succeeds in scaling her grip aperture, DF must still have some way of visually distinguishing between the 3 objects presented to her. Of course, haptics are helping her just as much as vision in this task, but the 2 visual streams story isn't quite as straw man as you make it out to be. The dorsal and ventral streams may be functionally segregated (and there is a pretty whopping pile of evidence that this is the case), but no one would deny that the streams are working in concert for the vast majority of tasks. And, in a similar vein, no one would deny that an individual will use whatever information is available to them - especially DF who clearly has compromised vision (i.e., no ventral stream back-up) and is probably especially reliant on non-visual cues for controlling.

    Replies
    1. This got caught in the spam filter, sorry.

      In short, in all of Schenk's tasks where she succeeds in scaling her grip aperture, DF must still have some way of visually distinguishing between the 3 objects presented to her.
      Good point. But then why does she fail when there's no haptic feedback at all, if she can visually discriminate the object sizes?

    2. I emailed Schenk about this, and he replied

      They argued that we have good evidence for DF's perceptual inability to discriminate visual sizes. They point out that haptic calibration might allow us to adjust the grasp action to a given object. But, such an improved action-object match will only improve grasping performance if the actor knows which grasp has to be selected for which object and this requires that the actor is able to discriminate between the different objects before the grasping action is executed. In my experiment three different objects have been used in an intermingled sequence. Milner and colleagues therefore ask how DF can benefit from a better grasp-object match if she is unable to discriminate visually between objects? This is the challenge.

      My answer is that she cannot discriminate visual size and therefore cannot discriminate the objects. I agree with Milner and colleagues that calibrating the grasp (size) function will not provide great benefit unless you know the critical variable "size". In fact I would argue you cannot calibrate that function in the first place unless you can perceive/process visual size. Instead, I argue that DF is not using size to guide or calibrate her grasp she is using the egocentric position of the final contact points and employs haptic feedback to improve the relationship between the perceived egocentric position and the executed finger and thumb movements. We know that DF can perceive egocentric positions (Schenk, 2006) and therefore this seems a reasonable assumption.


      So in essence, he says DF is not reaching like a normal person; she is using different information (the position matching strategy Smeets & Brenner propose but that doesn't seem to hold for normal people) to cope with the fact that she can't visually discriminate the objects at all. Not a crazy answer, although not directly tested I don't think.

  2. This comment has been removed by the author.

  3. Hmm...a 'brave attempt' indeed...it must have taken a lot of courage to delete Dr. Buckingham's comment.

    Replies
    1. I didn't delete it, Gav did ('deleted by the author' means 'author of the comment'). I don't delete comments unless they're spam.

    2. Yes. Gavin assures me it didn't take much courage at all on his part to delete his own comment. He just sort of, did it. My bad, Andrew!

  4. It will take me a bit to digest all this, but first I wanted to mention:

    It is horrible that anyone allowed the labels: Perception and Perception-for-action.

    At the least, they need to be parallel. So far as I can tell, this would make the first 'stream' Perception-for-talking.

    I know I have seen them labeled the 'what' and the 'where' pathway, but I don't think that clarifies things much more.

    Replies
    1. For years I was confused at VSS; I'd go to the perception-action sessions and not understand why it was all two-visual systems stuff :)

      The dorsal and ventral streams are anatomical facts; their functions are entirely up for grabs and the what/where distinction is another flavour. I would love to collect a database of how these functions have been defined, it's very loose and I bet the inter-study variability is actually huge.

    2. As far as I can tell, the labels 'Perception' and 'Perception-for-action' are entirely Andrew's creation...

      Andrew is right though about there being an awful lot of these dorsal/ventral stories about.

      Milner and Goodale's 2-vis stream (the dorsal/ventral dichotomy with easily the most direct experimental evidence) is 'vision for (conscious) perception' and 'vision for action'. That is all - it is a theory limited to visual feedback, and not some semantic triviality about 'perception'. Any other modality is up for grabs (although Dijkerman has an interesting target article about the possibility of a similar functional separation in the haptic system also).

    3. The 'vision for (conscious) perception' label only makes sense to me if we are suddenly Skinnerian Behaviorists. Skinner argued that 'consciousness' (what I might call 'self-consciousness') was a certain subset of 'verbal behavior'.

      So far as I can tell, in all this 2-stream research, the entire issue of 'conscious' also hinges on the ability to produce verbal self reports. Thus, if the researchers would simply admit that, for them, 'verbal behavior = consciousness', I am not sure I would agree, but at least I would understand their logic. However, these researchers also seem to cling to all the trappings of cognitive psychology, and this has always struck me as a clearly untenable position.

      P.S. One effect of my reading George Orwell is a suspicion of the mysteriousness Latin words seem to add to discussions. Would we be happy with the labels 'seeing for speaking' vs. 'seeing for doing'? This would at least make obvious the implications that 'speaking' is not a form of 'doing', and that there is an underlying something-or-other that is rightly called 'seeing' either way.

    4. If by 'entirely my creation' you mean 'Andrew didn't always add the word vision in' then I agree :)

  5. I would happily acknowledge that the 'goal' of conscious perception is probably related to the output of verbal behaviour (or, at the very least, communication in general) - this to me seems more likely than conscious perception for, say, selecting the correct stone to throw.

    But I cannot see the terms 'conscious perception' and 'verbal communication' as being interchangeable (nor can I see how that is the logical conclusion that must be drawn from the 2 visual streams ideas). Notwithstanding the fact that conscious perception can be probed non-verbally (albeit still communicatively), can't we introspect huge amounts of conscious detail? The problem, such as it is, is that the state of the art only allows us to determine consciousness in a yes/no way.

  6. Gavin,
    I am merely trying to read the logic inherent in the experiment. The logic seems to go like this:

    1. If person X can reach out, grab an object, hold it correctly, and use it to do something, then we do not necessarily have evidence that person X is conscious of the object, nor evidence that they have perceived it. They might just be doing 'vision for action'.

    2. If person Y can do none of those things, and is merely able to give the correct verbal response when the object is present ('label it', 'tact it', 'describe it', etc., by vibrating their vocal cords, using hand signs, or other means), then person Y is conscious of it. They have used 'vision for perception'.

    So far as I can tell, that is the logic. Not my logic, the logic of the people doing these experiments and applying these labels. Right or wrong, it is inherent in the logic of these studies that some types of behaviors-directed-at-objects are relevant to discussions of consciousness, while other behaviors-directed-at-objects are not.

    On what basis do the researchers distinguish the two types of behaviors? No basis is given or justified. The distinction is simply asserted. Could such a distinction be justified within a cognitive framework? Maybe, but I can't see how. If you think that consciousness is nothing other than the types of brain activity that characteristically lead to verbal behavior, then you are some variety of behaviorist.

  7. Eric, if only it were that sophisticated! The logic is mostly that we seem to have visual experiences that aren't about the online control of action; and, if you break the ventral stream you break this but not the online control. Therefore they are different.

    This is why there are so many functional explanations about the dorsal and ventral stream; no one's doing theory, they're just fitting data.

  8. Interesting conversation. Here’s an anonymous Internet poster’s perspective. The terms “vision for action” and “vision for perception” are nonsensical terms and amount to a reductionist account of perception. The “two modes of vision” model is steeped in representationalism (i.e. associations are mediated by a representation of the object, not direct dealings with the actual object) and an introspective first-personal account. To put it another way, it doesn’t allow for pre-reflective self-consciousness, direct perception without representation, affordances for action possibility, or sensorimotor responses that occur below introspection and always within a context (i.e. not an artificial setting).

  9. I'm probably way too late for this discussion, but since there is a suggestion of an affordance argument plus small-distance visuo-motor tasks, I wondered if DF has difficulties in walking or running down a street, for example?
