Tuesday, 13 December 2022

Trip Report from the Uncontrolled Manifold

I've spent the past few months getting a new paper to the 'complete first draft' stage (you can find a copy here in the meantime; it's still got some work to do though). It's about affordances, using targeted long-distance throwing as the task, and it's my first dip into the world of the uncontrolled manifold. I collected this data over five years ago, and it's been deeply satisfying to actually use it after all this time.

Part of what's taken so long is that I've had to learn the details of the uncontrolled manifold analysis. I blogged some about it here and here but this was the year I finally had the time and data to actually get into the maths. I still really like it as an approach to analysing human movement, but learning the details and trying to figure out how to get affordances into it has raised a lot of interesting questions about how it gets used right now and what this all implies for how we think movement is controlled. I'm raising a bunch of these issues in the paper but I wanted to sketch some out here for comment.

Broadly (and this shouldn't be a surprise to anyone really) I've realised that UCM is only a method, not a theory, and it's therefore not able to serve as a 'guide to discovery' about movement control. However, it's being used as if it can, and to be honest I was quite shocked at how carelessly it's being used in the literature. 

A Quick UCM Primer

Here's the basic idea. Movement systems are redundant: there are always more biomechanical degrees of freedom available than are required to solve any given movement task. This redundancy is a feature, not a bug: it means we always have a set of movement solutions available, which provides critical flexibility in a busy world. This fact comes with an implication: some movement variability will keep you within the set of actual solutions (a subspace called the uncontrolled manifold), while some will take you out of that set (the subspace orthogonal to the uncontrolled manifold). UCM analysis is a mathematical decomposition of movement variability into these two components, V(UCM) and V(ORT).

The analysis therefore needs two things fed into it: variable movement data from the movement system you are currently studying (called the elemental variables) and a set of variables representing what that movement system is trying to produce (called the performance variables). You decompose variability in the elemental variables with respect to the performance variables, and if most of the variability lives in the uncontrolled manifold subspace, this is evidence that the elemental variables are being controlled as a synergy whose goal is the production of that performance variable.
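To make the mechanics concrete, here's a minimal numpy sketch of the decomposition as I understand it. The array shapes, variable names, and the assumption that you already have a Jacobian of the performance variable with respect to the elemental variables in hand are all my own illustrative choices, not the analysis code from the paper: you linearise the elemental-to-performance mapping, split each trial's deviation from the mean configuration into the Jacobian's null space (the UCM) and its orthogonal complement, and compare the variance per degree of freedom in each subspace.

```python
import numpy as np

def ucm_decompose(elemental, jacobian):
    """Split trial-to-trial deviations of the elemental variables into variance
    within the UCM (the null space of the Jacobian) and orthogonal to it.

    elemental : (n_trials, n_elemental) array, e.g. joint angles per trial
    jacobian  : (n_performance, n_elemental) array, d(performance)/d(elemental)
                evaluated at the mean configuration
    Returns per-DOF variances and the dimensions of the two subspaces.
    """
    deviations = elemental - elemental.mean(axis=0)   # deviations from the mean configuration

    # Orthonormal bases from the SVD of the Jacobian: rows of vt beyond the
    # rank span the null space (the UCM); the first `rank` rows span the
    # orthogonal subspace.
    _, s, vt = np.linalg.svd(jacobian)
    rank = int(np.sum(s > 1e-10))
    ucm_basis = vt[rank:].T      # (n_elemental, n_ucm)
    ort_basis = vt[:rank].T      # (n_elemental, n_ort)

    proj_ucm = deviations @ ucm_basis   # components that leave performance unchanged
    proj_ort = deviations @ ort_basis   # components that change performance

    n_trials = elemental.shape[0]
    n_ucm, n_ort = ucm_basis.shape[1], ort_basis.shape[1]
    v_ucm = np.sum(proj_ucm ** 2) / (n_ucm * n_trials)   # variance per DOF within the UCM
    v_ort = np.sum(proj_ort ** 2) / (n_ort * n_trials)   # variance per DOF orthogonal to it
    return v_ucm, v_ort, n_ucm, n_ort
```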

This analysis therefore lets you test hypotheses about how a given action is controlled. You identify a candidate synergy of elemental variables, decompose the variability in the operation of that system with respect to a candidate performance variable, and if the index of synergy - (V(UCM) - V(ORT)) / V(TOT) - is >0 you have evidence that you have found the right synergy and the right performance variables.
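Carrying on from the sketch above, the index of synergy is then just a comparison of those two per-DOF variances. The exact normalisation conventions vary a bit across papers, so treat this as one plausible reading rather than the definitive formula.

```python
def synergy_index(v_ucm, v_ort, n_ucm, n_ort):
    """Index of synergy, (V(UCM) - V(ORT)) / V(TOT), with every variance
    expressed per degree of freedom (as returned by ucm_decompose above).
    V(TOT) per DOF is the DOF-weighted combination of the two components."""
    v_tot = (n_ucm * v_ucm + n_ort * v_ort) / (n_ucm + n_ort)
    return (v_ucm - v_ort) / v_tot

# index > 0: proportionally more variability lives inside the UCM than
# orthogonal to it, which is read as evidence for a synergy stabilising
# that performance variable.
```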

I like this analysis because it was specifically designed to embrace the complexity of the operation of a redundant synergy; in other words, it is trying very hard to be a valid way of analysing the movement system and I fully approve. Here's my concern, however: the entire process is completely underconstrained unless this method is used to test hypotheses from theory, and to a disturbing extent it is not used this way. All of the various decision points are researcher degrees of freedom, and there are few if any constraints on how these decisions are made. Worse, the reporting in many papers never explicitly justifies any of the decisions. This is a recipe for a replication crisis, at the very least.

The Problem of Selecting Elemental Variables

UCM analysis embodies the hypothesis that redundant movement systems become controllable by being softly assembled into task-specific synergies (Latash, 2008). A synergy is a particular kind of arrangement of dynamical properties (elemental variables) in which they are coupled together so they can be driven as a single device. The goal of skill acquisition is to learn how to softly assemble a synergy whose behaviour lives within the uncontrolled manifold for a given task. You have evidence that such a synergy is operating if most of the variability in the operation of that synergy is V(UCM) (see the index of synergy).

For my throwing study, we measured the joint angles of the shoulder-elbow-wrist system, with the angles expressed in an internal frame of reference. We did this basically because that's what everyone else does, and because the biomechanics of the throwing action provides some useful constraints here, but this decision is not fully locked down and comes with a lot of implications.

First, the frame of reference. Sternad, Park, Müller & Hogan (2010) used UCM to analyse joint angle data from the same set of elemental variables expressed in either an internal (body centred) or external frame of reference. The resulting decompositions were completely different! They use this result to advocate for their Tolerance-Noise-Covariation (TNC) analysis, which constrains the analysis using simulations of task dynamics (I like this analysis too and I am working to implement it for my throwing task; more on that later). 

Second, which joint angles to use? The goal is to identify the synergy you think the movement system has softly assembled itself into in order to solve the task demands (Bingham, 1988, called these task-specific devices, and I have used that term in the past too). Biomechanists studying throwing tend to focus on the shoulder-elbow-wrist system because it is patently required for a throw; but sometimes they include the fingers, and sometimes they include lower limb joint angles too, because arms don't just hang in space. No one ever justifies their selection. Of course, you could use UCM analysis to empirically identify the synergy by defining several different candidates and seeing which one works best, although this will depend on having the right performance variables, and that selection process is even less constrained than this one.
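If you did want to run that empirical comparison, it could look something like the sketch below; the candidate joint sets, their column indices, and the jacobian_for helper are all hypothetical stand-ins I've made up for illustration, and it leans on the ucm_decompose and synergy_index sketches from the primer.

```python
# Hypothetical candidate synergies, named by the joints they include and
# expressed as column indices into the joint angle data.
candidate_synergies = {
    "shoulder-elbow-wrist": [0, 1, 2],
    "shoulder-elbow-wrist + fingers": [0, 1, 2, 3, 4],
}

def compare_candidate_synergies(joint_angles, jacobian_for, candidates):
    """joint_angles : (n_trials, n_joints) array of elemental variables.
    jacobian_for(cols) : hypothetical helper returning the Jacobian of your
    chosen performance variable with respect to just those joints."""
    results = {}
    for name, cols in candidates.items():
        v_ucm, v_ort, n_ucm, n_ort = ucm_decompose(joint_angles[:, cols],
                                                   jacobian_for(cols))
        results[name] = synergy_index(v_ucm, v_ort, n_ucm, n_ort)
    return results   # higher index = more of the variability lives in that UCM
```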

To summarise: biomechanics helps here, but the selection of elemental variables (which at this point I will note are the same as effectivities) is not sufficiently constrained by the UCM method and this shows up in practice. 

The Problem of Selecting Performance Variables

How appropriate a synergy of elemental variables is depends on whether its operation produces performance variables that solve the task. Selecting good performance variables means you need to know what the task is, and what specifically the action system needs to be doing in order to solve it. Defining your task is, surprisingly, a very difficult thing to do with no clear solution (all my work is about developing the ecological task-dynamical analysis as a solution; I have a detailed review of the problem and solution in Leach, Kolokotroni & Leach, 2020).

Given this basic problem with task definitions, it's not actually surprising to see huge variability in which performance variables people choose. Throwing work has used shoulder position, hand position, hand orientation, finger position, finger orientation, and hand velocity; different papers use different ones, and measure them in both internal and external frames of reference (note: this leads to a wide variety of results, none of which builds on or develops the others). What worries me, however, is that the selection is rarely justified at all; at best, there will be a sentence along the lines of 'well, it seems like throwing probably needs this variable to go well', which is precisely the sort of intuition-based task analysis that led to the problem analysed in Leach et al.

Ecologically, a task is defined by the perception of affordances. The point of my paper is to see if my task-dynamical affordance analysis from Wilson, Weightman, Bingham & Zhu (2016) could help constrain this process. That analysis defined the affordance of the target to be hit as a set of release parameters (release angle, speed, and height, all defined in an external frame of reference). As my first attempt to use task-dynamical affordances as a constraint on UCM analysis, I used the mean release parameters for a given target distance as three of the performance variables we tested. Spoiler: it sort of works (although it is, of course, complicated)! We compared these to hand position, orientation, and velocity defined in either an internal or external frame of reference.
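For concreteness, here's a toy version of what those release parameters look like if you pull them from hand kinematics at the release sample. The coordinate conventions (z vertical, the throw travelling along x) are my assumptions for illustration; the actual affordance analysis in Wilson et al. (2016) works through the full task dynamics of hitting the target.

```python
import numpy as np

def release_parameters(hand_pos, hand_vel):
    """Release angle, speed and height from hand position and velocity at the
    release sample, in an external (lab) frame. A toy stand-in for the
    performance variables, not the paper's implementation."""
    speed = np.linalg.norm(hand_vel)                           # release speed
    angle = np.degrees(np.arctan2(hand_vel[2], hand_vel[0]))   # elevation above horizontal
    height = hand_pos[2]                                       # release height
    return angle, speed, height
```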

Here's a bunch of questions I came up with as I worked through these analyses:
  1. We had nine performance variables that were intuitively plausible options. The UCM analysis results helped us rule some in and out, but it's wild that there are so many options and that we could have simply written a paper where we tested and reported a couple we chose for no clear reason. This researcher degree of freedom issue in practice and reporting is a recipe for a replication crisis. 
  2. It was almost impossible to identify what counted as a meaningful index of synergy. Across the trial, almost all the variables produced indexes that were >0, but some were clearly higher than others and some oscillated close to 0. In order to say I have evidence that my synergy of elemental variables is organised so as to produce that performance variable, is >0 enough, or do I need some notion of 'significantly >0'? I definitely think the latter, and I imposed a threshold of 0.5 in my analysis, but I could find no way to justify that decision beyond it seeming about right based on the graphs (Singh et al., 2022, set a threshold of 0.15 on the z-transformed index, which made sense but again had no justification). No one else even mentioned it. One way to put some statistics behind this decision is sketched after this list.
  3. Relatedly, most papers just look to see if the index of synergy varies either between conditions or between performance variables. The former assumes you have the right elemental and performance variables and looks at how the task manipulation affects the synergy; the latter assumes you have the right elemental variables and tests which performance variable the synergy seems to be working to produce. These are sensible approaches, but they revealed another thing I learned: the same synergy can produce indexes of synergy >0 for multiple performance variables! Some will be higher than others, but again, without a notion of 'significantly >0' interpreting this gets complicated; plus, what does it mean for a given synergy to be working to control multiple performance variables?
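As flagged in point 2, one way to put some statistics behind 'significantly >0' would be to bootstrap the index over trials and ask whether the confidence interval clears zero (or whatever threshold you prefer). This is just a sketch of that idea, reusing the functions from the primer above; it is not what Singh et al. (2022) or anyone else in the literature actually does.

```python
import numpy as np

def bootstrap_synergy_index(elemental, jacobian, n_boot=2000, seed=0):
    """Percentile bootstrap confidence interval for the index of synergy at a
    single time step, resampling trials with replacement."""
    rng = np.random.default_rng(seed)
    n_trials = elemental.shape[0]
    samples = []
    for _ in range(n_boot):
        resampled = elemental[rng.integers(0, n_trials, n_trials)]
        v_ucm, v_ort, n_ucm, n_ort = ucm_decompose(resampled, jacobian)
        samples.append(synergy_index(v_ucm, v_ort, n_ucm, n_ort))
    lo, hi = np.percentile(samples, [2.5, 97.5])
    return lo, hi   # evidence for a synergy if lo > 0 (or > your chosen threshold)
```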
I also identified another issue while working with these variables, one that is less about analysis degrees of freedom and more about unspoken assumptions about the movement control process.

We performed the UCM analysis at each timestep of the normalised time series of joint angle data to produce a time series of V(UCM) and V(ORT), and therefore of the index of synergy. For the hand position, orientation, and velocity performance variables, we fed the UCM analysis the values of both the elemental and performance variables at time t. For the release parameter variables, we had to feed it the values of the elemental variables at time t but the values of the performance variables at the end of the movement (the moment of release).

This is fine at this stage, but it made me realise that the way most people use UCM (the first way) embodies the hypothesis that variability in the elemental variables at time t is organised with respect to producing the values of the performance variables at time t. This may be true, but to be honest I don't think it is, and I don't think it's very compatible with the equilibrium point hypothesis that also features prominently in this part of the literature. That hypothesis explicitly states that we aren't exerting control at all times in the movement; instead, we set a new equilibrium point for the muscles around a joint and let the local limb dynamics take care of moving there. This, to be honest, is why my affordance-based analysis using the release parameters makes good sense to me: the affordance defines what the synergy must produce, and a good synergy is one that produces those values via equilibrium point control. I need to think this one through more, but I think it's important.
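To make that time-indexing difference concrete, here is how the two variants would differ in code. I'm assuming the Jacobian at each time step is estimated by regressing performance deviations on elemental deviations across trials (a common move when there's no analytic model of the mapping); the array shapes and the at_time_t switch are illustrative, not the paper's actual pipeline.

```python
import numpy as np

def estimate_jacobian(elemental_t, performance):
    """Regression estimate of d(performance)/d(elemental) across trials.
    elemental_t : (n_trials, n_elemental) at one time step
    performance : (n_trials, n_performance)"""
    X = elemental_t - elemental_t.mean(axis=0)
    Y = performance - performance.mean(axis=0)
    B, *_ = np.linalg.lstsq(X, Y, rcond=None)   # least-squares fit Y ≈ X @ B
    return B.T                                  # (n_performance, n_elemental)

def index_time_series(elemental, performance, at_time_t=True):
    """elemental   : (n_trials, n_time, n_elemental) normalised joint angle data
    performance : (n_trials, n_time, n_performance)
    at_time_t=True pairs elemental and performance values at the same time
    step (the usual approach); False pairs elemental values at time t with
    the performance values at the final sample (e.g. the release parameters)."""
    n_time = elemental.shape[1]
    dv = np.empty(n_time)
    for t in range(n_time):
        perf = performance[:, t] if at_time_t else performance[:, -1]
        J = estimate_jacobian(elemental[:, t], perf)
        v_ucm, v_ort, n_ucm, n_ort = ucm_decompose(elemental[:, t], J)
        dv[t] = synergy_index(v_ucm, v_ort, n_ucm, n_ort)
    return dv
```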

Summary

I like UCM analysis, and I think that all these researcher degrees of freedom are actually features, not bugs - it means we can use UCM to test all kinds of hypotheses about which elemental and performance variables matter. But what worries me is that it doesn't seem to get used this way. Sports science in particular seems to spend most of its time just collecting and analysing data without thinking too hard about why it chose that data to collect and analyse. This (long!) post contains many of the questions and concerns I have about how the method is used.

Of course, I have a solution, and it's the ecological approach as a source of hypotheses to test. This paper is my first swing at connecting affordances and UCM analysis, but there is much more I need to do. That said, I think applying UCM explicitly will force us to ask (and hopefully answer) important questions: it will make us engage with how best to formally identify affordances and get serious about effectivities, because the former define performance variables and the latter define elemental variables. Making this link will unleash the potential of UCM analysis on our field. Once I get TNC and all the others working too, then we'll really see some progress!
