Different theories have different hypotheses about what the system explicitly controls or works to achieve, and what is left to emerge (i.e. happen reliably without being explicitly specified in the control architecture). Standard accounts are typically about controlling trajectory features such as jerk: are you working to make movements smooth, or does smoothness pop out as a side effect of controlling something else? These trajectory-planning approaches solve the degrees of freedom control problem by simply requiring the system to implement a specific trajectory that satisfies some constraint on the feature being controlled (e.g. by minimising jerk; Flash & Hogan, 1985). They internally replace the many solutions afforded by the environment with one desired trajectory.
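For concreteness, here is a small sketch of the minimum-jerk idea (my own toy illustration, not Flash & Hogan's implementation; the fifth-order polynomial is the standard closed-form solution for a point-to-point movement starting and ending at rest):

```python
import numpy as np

def minimum_jerk(x0, xf, T, n=100):
    """Closed-form minimum-jerk position profile for a 1D point-to-point move.

    The whole trajectory is fixed in advance by the start, end and duration:
    x(t) = x0 + (xf - x0) * (10*s**3 - 15*s**4 + 6*s**5), with s = t / T.
    """
    t = np.linspace(0.0, T, n)
    s = t / T
    x = x0 + (xf - x0) * (10 * s**3 - 15 * s**4 + 6 * s**5)
    return t, x

# One desired trajectory replaces all the possible ways of getting there;
# execution is then just a matter of following it.
t, x = minimum_jerk(x0=0.0, xf=0.3, T=0.5)
```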
Todorov and Jordan (2002a, 2002b; thanks to Andrew Pruszynski for the tip!) propose that what the system optimises is not the movement itself but the control strategy that produces it. This is a cool way to frame the problem, and it leads them to an analysis that is very similar in spirit to uncontrolled manifold analysis (UCM) and to the framework of motor abundance. In these papers they apply the mathematics of stochastic optimal feedback control and propose that working to produce optimal control strategies is a general principle of motor control from which many common phenomena naturally emerge. They contrast this account (both theoretically and in simulations) with the more typical 'trajectory planning' models.
The reason this ends up in UCM territory is that, whenever feedback control is possible, the optimal strategy for solving motor coordination problems turns out to be a feedback control system in which control is deployed only as required. Specifically, you only work to control task-relevant variability, i.e. the noise that is dragging you away from performing the task successfully. The net result is the UCM pattern: task-relevant variability (V-ORT) is clamped down by a feedback control process and task-irrelevant variability (V-UCM) is left alone. The solution to the degrees of freedom control problem is simply to deploy control strategically with respect to the task; no degrees of freedom need to be 'frozen out', and the variability can be recruited at any point in the process if it suddenly becomes useful - you can be flexible.
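As a rough sketch of how the UCM decomposition works in practice (my own toy example, not code from either the UCM literature or the Todorov & Jordan papers; the function name and tolerance are mine), trial-by-trial deviations in joint space are projected onto the null space of the task Jacobian (V-UCM) and its orthogonal complement (V-ORT), and the per-dimension variances are compared:

```python
import numpy as np

def ucm_variance(joint_configs, jacobian):
    """Partition joint-space variance into task-irrelevant (V_UCM) and
    task-relevant (V_ORT) components, per dimension.

    joint_configs: (n_trials, n_joints) joint angles at a comparable point
                   in the movement.
    jacobian:      (n_task, n_joints) task Jacobian at the mean configuration
                   (maps joint changes to task-variable changes). Assumes more
                   joints than task dimensions, i.e. a redundant task.
    """
    deviations = joint_configs - joint_configs.mean(axis=0)

    # Orthonormal basis for the null space of J: directions that leave the
    # task variable unchanged (the uncontrolled manifold).
    _, s, Vt = np.linalg.svd(jacobian)
    rank = np.sum(s > 1e-10)
    null_basis = Vt[rank:].T            # (n_joints, n_joints - rank)
    range_basis = Vt[:rank].T           # (n_joints, rank)

    on_manifold = deviations @ null_basis
    off_manifold = deviations @ range_basis

    n_trials = joint_configs.shape[0]
    v_ucm = np.sum(on_manifold**2) / (null_basis.shape[1] * n_trials)
    v_ort = np.sum(off_manifold**2) / (range_basis.shape[1] * n_trials)
    return v_ucm, v_ort

# V_UCM > V_ORT is the signature pattern: noise along the manifold is left
# alone, noise that affects the task is suppressed.
```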
What follows is me working through this paper and trying to figure out how exactly this relates to the conceptually similar UCM. If anyone knows the maths of these methods and can help with this, I would appreciate it!
The optimal controllers Todorov & Jordan build are based on the following assumptions:
- feedback (rather than feedforward) control will be optimal and will be used whenever possible, specifically when the task has redundant solutions, when the movement is happening over longer time scales than any delays in the system, and when the behaviour of the motor system is unpredictable and therefore needs to be monitored
- motor system noise is signal-dependent, e.g. faster movements are less accurate in proportion to the increase in speed (this is why only controlling when you have to stabilises a movement a great deal - every command comes with noise; see the toy simulation after this list)
- perceptual systems are delayed and noisy, requiring an internal forward model incorporating body dynamics and efference copies of motor commands. The model is used to compute optimal control signals on the fly, not to specify a single trajectory, i.e. it's not a motor plan. (This is common in motor control models, and obviously I have some issues with all of this! I'd be interested to know whether this kind of forward model can be implemented in a Bayesian, Friston kind of way, though. That said, getting more ecological about the role of perception in this model would be something I'd want to do.)
- tasks are specified to the model in terms of instructions about the outcome. Instructions, because the model has no perceptual basis for specifying the task (another reason to get perception in here!). Outcomes, as opposed to implementations, to distinguish them from motor commands specifying tasks in terms of trajectory features such as minimum jerk.
- outcomes are subject to an effort penalty
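To make a couple of these assumptions concrete, here is a deliberately minimal simulation of a stochastic feedback loop (my own toy sketch, not the LQG machinery Todorov & Jordan actually use; the gain and noise scale are arbitrary). The command is driven by the current task error, and every command carries noise proportional to its own size, so acting when you don't need to only injects variability:

```python
import numpy as np

rng = np.random.default_rng(0)

def run_trial(target=1.0, steps=100, dt=0.01, gain=5.0, noise_scale=0.2):
    """One reach with proportional feedback and signal-dependent noise."""
    x = 0.0
    for _ in range(steps):
        error = target - x                      # task-level error, from feedback
        u = gain * error                        # command scales with the error
        # Signal-dependent noise: bigger commands are proportionally noisier,
        # so issuing corrections you don't need is itself a source of error.
        u_noisy = u * (1.0 + noise_scale * rng.standard_normal())
        x += u_noisy * dt
    return x

endpoints = [run_trial() for _ in range(500)]
print(np.mean(endpoints), np.std(endpoints))
```

With signal-dependent noise, the cheapest way to be accurate is to issue corrections only when the task error demands them, which is exactly the logic behind the minimal intervention principle below.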
No particular trajectory is ever specified; trajectories emerge as the model runs and optimises control rather than movements. The form of the analysis is to identify the optimal control strategy that produces the values of the performance variable that count as successful performance of the task.
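Schematically, the quantity being minimised has the generic stochastic optimal control form below (my notation, not Todorov & Jordan's exact equations): r(T) is the performance variable at the end of the movement, r* its required value, u(t) the motor command, and Q and R weights trading off task error against effort. Crucially, the minimisation is over feedback control laws, not over desired trajectories.

```latex
J = \mathbb{E}\!\left[ \big(r(T) - r^{*}\big)^{\top} Q \,\big(r(T) - r^{*}\big)
      + \int_{0}^{T} u(t)^{\top} R \, u(t)\, \mathrm{d}t \right]
```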
Properties of the resulting model
The Minimal Intervention Principle
Optimal control systems follow this principle: deviations from the average trajectory are corrected only when they interfere with task performance (Todorov & Jordan, 2002b). Exerting control to execute a correction entails multiplicative noise, and whatever it is you are correcting (e.g. the amount of jerk) incurs an effort-dependent penalty as well. Doing something rather than nothing comes at a cost.
The net result is the UCM pattern: movement variability is left free to accumulate along the manifold, while off-manifold noise is actively reduced via control.
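One way to see the minimal intervention idea in code (again a toy sketch of my own rather than the optimal feedback gains the theory actually derives): decompose the current deviation from the average configuration using the task Jacobian, and feed back only the component that actually changes the performance variable.

```python
import numpy as np

def minimal_intervention_command(deviation, jacobian, gain=1.0):
    """Correct only the task-relevant part of a joint-space deviation.

    deviation: (n_joints,) current departure from the average configuration.
    jacobian:  (n_task, n_joints) task Jacobian at that configuration.
    """
    # Projector onto the row space of J: the directions that move the task variable.
    J = jacobian
    P_task = J.T @ np.linalg.pinv(J @ J.T) @ J
    off_manifold = P_task @ deviation       # this part threatens the goal
    # The on-manifold remainder (deviation - off_manifold) is left alone.
    return -gain * off_manifold
```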
Movement flexibility
Optimal control systems are feedback control systems, and this feedback acts whenever required (as per the Minimal Intervention Principle) and whenever possible (see above). This means that movements are controllable in real time, at nearly all times, with respect to the task goal. If anything about the task changes, or if a different trajectory suddenly becomes a better solution, the system can shift to that new trajectory - it can be flexible. This is not possible if the control system is working to a specified, desired trajectory, and the difference shows up in behavioural data.
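To illustrate, here is a variation on the earlier toy feedback loop in which the target jumps mid-movement (again my own hypothetical example; noise is left out for clarity). Because the command is computed from the current task error at every step, nothing special is needed to handle the jump, whereas a pre-planned desired trajectory would have to be discarded and re-planned:

```python
def run_trial_with_jump(first_target=1.0, new_target=0.5, jump_at=50,
                        steps=100, dt=0.01, gain=10.0):
    """Proportional feedback reach in which the target jumps mid-movement."""
    x, target = 0.0, first_target
    for step in range(steps):
        if step == jump_at:
            target = new_target      # the task changes on the fly
        u = gain * (target - x)      # command always tracks the current error
        x += u * dt
    return x

print(run_trial_with_jump())         # ends near the new target, not the old one
```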
No more planning
Under this analysis, movements are not planned first and then executed; planning, execution and control are all handled by the same mechanism. This does away with the need for internal representations implementing minimisation tricks to get stable movements. It also gets at a lot of fascinating issues with the idea of motor planning in general, which I might get into in future posts.
Task specificity
This approach does not dictate a 'best movement' but a 'best control strategy', and what counts as best depends on exactly what you are trying to do. In other words, the solution will be task-specific, which means we need to study a wide range of tasks to uncover the underlying optimality principles, instead of working a single task to death to demonstrate the optimal features of particular solutions.
Summary
The uncontrolled manifold analysis is an implementation of the theoretical claims of the motor abundance hypothesis. The Todorov & Jordan papers are the application of stochastic optimal control theory to the problem of motor coordination. Both essentially arrive at the same place: the optimal solution to the degrees of freedom control problem is the strategic deployment of control processes to correct only those errors that take you away from the task goal. This reduces the control requirements from impossible down to possible, without requiring a single stereotyped solution (which the data clearly show no one produces anyway). Variation along the manifold then becomes 'the bliss of motor abundance'; it doesn't stop you from achieving the task goal, and it creates the opportunity to switch movement trajectories to cope with local task demands.
For the record, I am loving this framework and finding it deeply useful. It is, of course, missing one thing - perception. What specifies the manifold, from the first person perspective of the organism? I have a potential solution, and will develop that in future posts.
References
Flash, T., & Hogan, N. (1985). The coordination of arm movements: an experimentally confirmed mathematical model. The Journal of Neuroscience, 5(7), 1688-1703.
Todorov, E., & Jordan, M. I. (2002a). Optimal feedback control as a theory of motor coordination. Nature Neuroscience, 5(11), 1226-1235.
Todorov, E., & Jordan, M. I. (2002b). A minimal intervention principle for coordinated movement. In Advances in Neural Information Processing Systems, pp. 27-34.
Perceptual control theory does the same as this, but it is based around perception and is more biologically plausible because it does not require inverse kinematic models in the first instance. Optimal feedback control theory used to be optimal control theory before they realised how fundamental the feedback is - à la PCT! www.pctweb.org
Yes, this all entails perceptual control and no one is including this in their models. Ecological information about affordances is going to work, though
I follow your blog fairly closely even though I'm a clinical psychologist. The reason is I'm interested in understanding loss and restoration of control. I believe this is why people come in to see us for help.
The ideas of minimal intervention and flexibility seem to generalise to other domains of loss of control outside of motor tasks. Flexibility is key to restoration of control in recovery from mental health difficulties for example.
I was struck by the comment "Are you working to make movements smooth". This would seem to require a perception of "smoothness", compared against some standard of how smooth the organism wants it to be, and if there's a discrepancy in smoothness ... then what? What does the correcting? There seems to be a little man in the head problem here, isn't there? An ultimate controller keeping the 'control', as you say, "real time, at nearly all times, with respect to the task goal". What aspect of the goal? It needs detailed specification, doesn't it?
I'm thinking of a model similar to that implemented by Bill Powers which controls arm movements. It can be downloaded here: http://www.iapct.org/personal/index.html
In terms of the degrees of freedom problem, Rick Marken has a paper here:
http://pss.sagepub.com/content/2/2/92.abstract that solves the problem with independent control systems. I don't know if the specific x and y co-ordinate variables are what a living system controls. It shows the parallel independent control systems architecture is plausible at least, and simple enough to make it preferable over the sort of advance prediction you seem to be advocating.
I know you've had your clashes in the past with people who are PCT proponents, so perhaps this post is just an irritation. However, your recent posts seem highly relevant to the detailed modelling that has already been done in PCT. There are gaps of course, but it seems both this work and theirs could benefit.
No, it's not an irritation. I don't have much beef with PCT other than I think ecological psychology does everything it does, and then some, and that 'then some' is pretty important :)
Without defending minimum jerk etc. too hard, the idea doesn't really entail an infinite regress. The idea is that you replace all the possible movements with commands for the one that optimises jerk or some other feature. It doesn't really work, but not for regress reasons.