
Tuesday, 9 May 2023

The Spatial and Temporal Structure of Reach-to-Grasp Movements

This post is part of my current series in which I am developing a research programme to formally connect affordances and motor abundance analyses (see the intro post here). The first thing I need is a task that has a formal affordance analysis; my chosen task is reaching-to-grasp (sometimes called prehension). I'm pivoting to this task from throwing for a few reasons. First, I need a simpler task: the logistics of data collection and analysis for throwing are large and I just don't have the resources right now, but I do have the kit for reach-to-grasp studies. Second, I want to expand my own skill set to a new domain, and reach-to-grasp is another venerable topic. Finally, it means I get to spend time really grappling with other work by Geoff Bingham, who was my PhD supervisor and who remains the person whose work I admire the most.

This post is about the basic form of the reach-to-grasp movements studied in the lab, what some of the key measures are, and the fact that these measures are sensitive to task demands (suggesting they emerge from an active perception-action loop). In the next post I will work through the initial task analysis that leads us to candidate affordance properties.

Reach-to-Grasp Movements

We have hands with opposable thumbs mounted on extendable arms, so one of the things we do a lot is reach out and grasp objects. There are of course many ways to do this, but the basic form of the movement is captured by the sort of task in the picture. 

There is an object, with a certain size and location relative to us. We begin with our hand near us, with fingers and thumb together. This forms a pincer between the fingers and thumb (this is what 'opposable' means). We have to size this gap (called the aperture) to suit the object and then steer that shaped hand to enclose the object. We then need to close the gap to contact the object. We then usually pick it up; a stable lift needs the force vector between the fingers and thumb to be placed across the object, so where we land our fingers matters.

You could just open your hands as wide as they'll go and move to enclose the object, but of course we don't work like this. Well-functioning reach-to-grasp movements have a consistent spatial and temporal structure that reflects the online control of the action in the context. 

The hand begins with the fingers and thumb together at some distance from a target with a specific set of properties. The hand then begins to move, and the fingers and thumb separate to form the grip aperture. That aperture reaches a maximum, as does the speed; the hand then decelerates to come in for a landing. The hand stops, with the grip aperture smaller than its peak but still bigger than the object. Then the fingers are landed, and the object has been grasped. So, in order, the things that get measured:
  • The speed profile (the way speed changes over the reach). This is typically a smooth bell-shaped profile, in which speed increases from zero to a maximum and back to zero. We measure the magnitude of Peak Speed and its timing: when in the reach does it happen?
  • The grip aperture size. This aperture goes from zero to a Maximum Grip Aperture (MGA) then down to a Terminal Grip Aperture (TGA; the size when the hand stops moving) and then a Final Grip Aperture (FGA; when the object is grasped). The timing of the MGA can also be noted.
  • The grip aperture orientation (Pitch, Roll, and Yaw angles).
  • Total Movement Time (MT).
  • Path Length (distance travelled, compared to the straight line distance; usually only interesting if there are obstacles and therefore path options). 
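To make these measures concrete, here is a minimal sketch of how they might be computed from motion-capture data. This is not code from any particular lab; it is an illustration assuming three markers (wrist, thumb tip, finger tip) sampled at a fixed rate with positions in metres, and all the function and variable names are my own.

```python
import numpy as np

def speed_profile(positions, dt):
    """Speed at each sample from an (N x 3) array of positions, via finite differences."""
    velocities = np.diff(positions, axis=0) / dt
    return np.linalg.norm(velocities, axis=1)

def grip_aperture(thumb, finger):
    """Aperture at each sample: distance between thumb and finger tip markers (N x 3 each)."""
    return np.linalg.norm(finger - thumb, axis=1)

def reach_measures(wrist, thumb, finger, dt):
    """The standard reach-to-grasp measures, from already-segmented marker trajectories."""
    speed = speed_profile(wrist, dt)
    aperture = grip_aperture(thumb, finger)
    mga_idx = int(np.argmax(aperture))
    return {
        "peak_speed": float(speed.max()),
        "time_to_peak_speed": float(np.argmax(speed) * dt),
        "mga": float(aperture[mga_idx]),            # Maximum Grip Aperture
        "time_to_mga": float(mga_idx * dt),
        "fga": float(aperture[-1]),                 # Final Grip Aperture
        "movement_time": float(len(speed) * dt),    # assumes the trial is pre-segmented
        "path_length": float(np.linalg.norm(np.diff(wrist, axis=0), axis=1).sum()),
    }
```

In practice movement onset and offset would be defined by a speed threshold rather than by the trial boundaries, and the Terminal Grip Aperture would be read off at the moment the wrist stops; this sketch just shows the shape of the computation.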
Of course, the exact details vary from trial to trial. This is due in part to redundancy but also, importantly, to task details. Like all movements, reach-to-grasp movements are exquisitely scaled to the current demands, which means they are being created online as a function of information about those demands. These movements therefore have a distinct but task-sensitive spatiotemporal structure, and we can use variation in that structure as a function of task changes to probe which changes matter and why. The next post will detail the task-dynamical analysis that shapes those experiments.
