I'm slowly working my way through Olaf Sporns' excellent book, Networks of the Brain. The purpose of the book is to introduce neuroscientists to network theory, and vice versa; I'm eavesdropping and tooling up on both. It's slow going only because it's pretty much all new territory for me, but I'm seeing a lot of potential in the overall approach to the brain, and the book confirms for me that Sporns understands this material deeply.
Anyway, a while back, Bruce Hood tweeted the following:
Fact: in the digestive tract of the lobster, its nervous system can have 100k-200k different neural states that produce the same behavior
This struck me as an astonishing fact; to my mind, it throws the hunt for the neural correlates of behaviour into serious doubt. At the very least, it should radically change what you expect to find. I finally found the reference for this fact in Sporns' book (Prinz, Bucher & Marder, 2004; download), and Sporns provides some interesting context for the result (which Hood's tweet actually undersells - see below) and for why it is interesting rather than soul-crushingly depressing. There are some potential consequences relevant to my current ponderings on the brain, although these are definitely still at the 'these sound like the same sort of principle' level of analysis.
Inter-individual variation is a fact of cortical organisation; while a given type of brain (e.g. human) exhibits broad structural similarity, the specific details are enormously variable. These structurally variable brains nevertheless support functionally similar behaviour - this is referred to as degeneracy:
Structurally variable but functionally equivalent networks are an example of degeneracy, defined as the capacity of systems to perform similar functions despite differences in the way they are configured and connected (Tononi et al, 1999; Edelman & Gally, 2001).
Sporns, 2010, pg. 69
Degeneracy is a strength of a system, not a weakness. It provides a critical kind of robustness (related to, but distinct from, simple redundancy): the system can continue to exhibit a given behaviour in the face of noise or damage (an ability Sporns also refers to as functional homeostasis). It is also a ubiquitous feature of biological systems across multiple scales, presumably because they continually face the task of performing the same basic function under highly variable conditions.
Degeneracy in the humble lobster gut
The Prinz et al (2004) paper is an astonishing demonstration of the degree of degeneracy available to even a simple neural system. They begin by noting that neuroscience typically treats variation as noise and removes it by averaging; they wondered how justifiable this is. To understand the functional consequences of structural variation, they modelled a simple motor circuit in the lobster (technically, the pyloric rhythm of the crustacean stomatogastric ganglion). Combining all legitimate neuron types and synapse strengths gave them 20,500,000 model circuits, and they then simulated the motor output of every one of these.

4,047,375 (about 20%) of the circuits produced a pyloric-like output; this was reduced to 452,516 when the simulation was constrained by recordings from real lobsters, and even then all 150 possible combinations of model neurons were capable of producing pyloric-like activity. Finally, pyloric-like activity occurred for synapse strengths across the entire functional range (with the exception of one particular synapse, which had to be very weak; this, it turns out, matches the biology of the actual system very nicely). In effect, the behaviour of the network (and not just the individual neurons) exhibited extraordinary functional homeostasis (roughly 450,000 ways to produce the same rhythm), and the parameters could vary over several orders of magnitude (so long as the appropriate compensation had occurred elsewhere in the network).
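To make the shape of their analysis concrete, here is a minimal Python sketch of the same logic (my own toy: toy_rhythm_period is an invented function standing in for their full conductance-based circuit simulations): enumerate every combination of a few parameters, 'simulate' each circuit, and count how many land inside a functional target band.

```python
# Toy sketch of the Prinz et al. (2004) style of analysis: sweep the whole
# parameter space, simulate each circuit, and ask how many configurations
# produce output inside a functional band. toy_rhythm_period() is a
# made-up stand-in for their conductance-based model.
from itertools import product

def toy_rhythm_period(w_fast, w_slow, gain):
    """Pretend circuit simulation: the 'period' of the rhythm depends on a
    ratio of parameters, so many different settings yield similar periods."""
    return 10.0 * (w_slow + 1.0) / (w_fast + gain + 1.0)

def is_functional(period, target=5.0, tolerance=1.0):
    """Functional criterion: the rhythm's period is close enough to target."""
    return abs(period - target) <= tolerance

candidate_values = [0.5, 1.0, 2.0, 4.0, 8.0]
circuits = list(product(candidate_values, repeat=3))   # every combination of 3 parameters
functional = [c for c in circuits if is_functional(toy_rhythm_period(*c))]
print(f"{len(functional)} of {len(circuits)} parameter combinations are 'pyloric-like'")
```

The point of the sketch is only the shape of the search: many structurally different parameter settings pass the same functional test.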
The lesson: the nervous system is not trying to maintain a specific set of wiring connections and weights; it is trying to maintain the capacity to perform a specific function. Prinz et al's analysis supports the idea that this function can be defined at the network level, not just at the level of individual neurons (and I'm going to suggest the principle extends to perception-action systems as well). Sporns then notes that this is clearly a feature, not a bug:
Such homeostatic mechanisms are essential for the long-term stability of the brain given the continual remodeling and structural turnover of its cellular and molecular components.
Sporns, 2010, pg. 68
This reminds me of something I saw in Sporns' cognitive neuroscience class at IU: video of neurons extruding and retracting axons, forming and re-forming connections, something they apparently never stop doing. I've never forgotten just how busy the neuron was, on a remarkably short time scale; structural stability is clearly not what the brain is interested in. There is, therefore, tremendous and ongoing flux that is part of the normal operation of the brain (let alone what happens following injury), and degeneracy provides this dynamic system with functional stability in the face of these perturbations.
Some implications for the brain
This implies a couple of things. First, the brain is not static in any way, not even in the ways we might have thought were necessary to preserve stable function. The brain is in the state it's in through use, and that use can produce all kinds of variability that don't affect function.
Second, this variability will be both within and between people; Sporns again:
Price and Friston have noted that human brain networks display degeneracy since different sets of brain regions can support a given cognitive function (Price and Friston, 2002). Cortical activation maps obtained from functional neuroimaging studies of individuals often show only partial overlap for a given cognitive task, suggesting that different individuals utilize different (degenerate) networks.
Sporns, 2010, pg. 69
While this doesn't make neuroimaging useless, it is an important caveat that is rarely remembered. This kind of variability is also the reason there are only limited lessons to be learned from clinical cases; these are interesting, and do provide hints as to which regions of the brain are involved in a given task, but
The loss of a subset of all regions that are reliably activated in a given task may not disrupt task performance, indicating that individual regions may not be necessary or that recovery processes following brain injury can configure structurally different but functionally equivalent networks.
Sporns, 2010, pg. 69
It's not clear to me that much of cognitive neuroscience is worrying about this, although clearly people like Sporns are.
Degeneracy in perception-action systems
This organising principle (preserve the functional outcome, implement it with whatever you've got) actually extends to the level of the perception-action system. Take a task like reach-to-grasp (prehension): you can implement the functional outcome of transporting your hand from where it is to the object via an indefinitely large set of movements. We typically produce very efficient, very stereotyped prehension movements to a given target, but that's only true while the task and the system's resources remain the same. If you change any part of the task (the distance to the object, the size of the object, the goal of the prehension movement), the movement rescales itself to match and thus preserves the outcome. My colleague Mark Mon-Williams talks a lot about end-point control: prehension is organised with respect to the control of the hand, not the intermediate joint configurations, which simply do whatever is required.
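As a toy illustration of that kind of motor equivalence (my own sketch, nothing to do with Mon-Williams' actual experiments or data), consider a planar arm with three joints reaching a single 2-D target: because the arm has more degrees of freedom than the task demands, there is a whole family of joint configurations that put the hand in exactly the same place, and you can enumerate a slice of that family directly.

```python
# Toy demonstration of end-point degeneracy for a planar three-joint arm:
# sweep the 'spare' shoulder angle and solve the remaining two-link inverse
# kinematics analytically. Segment lengths and the target are illustrative.
import math

UPPER, FORE, HAND = 0.30, 0.25, 0.18      # segment lengths (m)
TX, TY = 0.45, 0.25                        # target end-point for the fingertip (m)

def configurations_for(shoulder):
    """Given a shoulder angle, return the 0 or 2 exact joint configurations
    (shoulder, elbow, wrist; each relative to the previous segment) that put
    the fingertip on the target."""
    ex, ey = UPPER * math.cos(shoulder), UPPER * math.sin(shoulder)   # elbow position
    dx, dy = TX - ex, TY - ey
    d = math.hypot(dx, dy)
    if not (abs(FORE - HAND) <= d <= FORE + HAND):
        return []                                                     # unreachable from this posture
    cos_wrist = (d ** 2 - FORE ** 2 - HAND ** 2) / (2 * FORE * HAND)
    cos_wrist = max(-1.0, min(1.0, cos_wrist))                        # guard against rounding error
    solutions = []
    for sign in (+1, -1):                                             # 'elbow up' / 'elbow down' variants
        wrist = sign * math.acos(cos_wrist)
        fore_world = math.atan2(dy, dx) - math.atan2(HAND * math.sin(wrist),
                                                     FORE + HAND * math.cos(wrist))
        solutions.append((shoulder, fore_world - shoulder, wrist))
    return solutions

# Sweep the free shoulder angle in 1-degree steps; every reachable value gives
# structurally different, functionally identical ways of placing the hand.
family = [cfg for deg in range(360) for cfg in configurations_for(math.radians(deg))]
print(f"{len(family)} sampled joint configurations all reach ({TX}, {TY})")
```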
A recent article demonstrates this type of principle. Castro et al (2011) investigated what the system learns when it corrects errors. There are two options: adapt your next movement as a function of the error with respect to what you planned to do, or as a function of the error with respect to what you actually did. Research in this field typically assumes the former (e.g. Wolpert et al, 1998); Castro et al demonstrated that the system actually does the latter, and, interestingly, the robotics literature settled very quickly on these kinds of algorithms:
The choice of these learning rules is grounded in the idea that adaptive changes should be provably stable in the sense that, under a set of reasonable assumptions, updated internal models should never result in worse performance.
Castro et al, 2011
In essence, this work shows that we learn about the functional outcomes of actions, and not about the implementation of the actions. Learning organises the perception-action system with respect to what happened.
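A minimal caricature of the distinction (my own sketch, not Castro et al's actual model): suppose the correction learned from one trial generalises as a Gaussian around whichever movement direction gets the credit. If a movement was aimed at 0° but motor variability took it to 15°, the two rules predict compensation peaking in different places; Castro et al's data favour the peak sitting over the movement actually produced.

```python
# Toy contrast between 'credit the plan' and 'credit the action' after a
# single adaptation trial. The Gaussian generalisation width and the trial
# values are invented for illustration.
import math

def gaussian(centre, direction, width=15.0):
    """How strongly a correction centred on `centre` applies at `direction` (deg)."""
    return math.exp(-((direction - centre) ** 2) / (2 * width ** 2))

planned_direction = 0.0      # where the movement was aimed (deg)
actual_direction = 15.0      # where it actually went, thanks to motor variability (deg)
learned_correction = 5.0     # adaptation acquired from this trial's error (deg)

for centre, label in [(planned_direction, "credit the plan"),
                      (actual_direction, "credit the action")]:
    probe = {d: round(learned_correction * gaussian(centre, d), 2)
             for d in range(-30, 45, 15)}
    print(f"{label:17s} -> compensation by probe direction: {probe}")
```

On this caricature, probing movements in different directions reveals where the learning is centred, which is essentially the logic of Castro et al's experiment.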
There are other examples of this. For instance, in my own work (Wilson et al, 2005a, b) I've had people produce coordinated rhythmic movements while I transformed the display to show a different rhythm. Specifically, we had people move at 90° and 180° so as to produce 0° on the screen; this almost entirely stabilises those typically unstable movements. In other words, people were more concerned with producing the outcome (0°) than with how they were doing it - functional outcome trumps implementation. Prism adaptation studies demonstrate the same principle.
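For what it's worth, here is a bare-bones sketch of the sort of feedback transformation involved (illustrative only, not the actual experimental code from Wilson et al, 2005a): one dot's on-screen motion is phase-shifted by the required relative phase, so that producing, say, the required 90° pattern looks like 0° on the screen.

```python
# Illustrative feedback transformation: the second dot's displayed motion is
# shifted back by the required relative phase, so hitting the requirement
# renders as in-phase (0 deg) movement on screen. Frequencies, phases and
# sampling times are arbitrary example values.
import math

def displayed_dots(t, freq_hz, required_phase_deg, produced_phase_deg):
    """Return on-screen x-positions of the two dots at time t (seconds)."""
    omega = 2 * math.pi * freq_hz
    left = math.sin(omega * t)
    right_shown = math.sin(omega * t + math.radians(produced_phase_deg - required_phase_deg))
    return round(left, 3), round(right_shown, 3)

# Producing exactly the required 90 deg pattern looks like 0 deg on screen:
for t in (0.0, 0.1, 0.2, 0.3):
    print(displayed_dots(t, freq_hz=1.0, required_phase_deg=90, produced_phase_deg=90))
```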
This helps explain why, when studying perception-action, we typically aren't that interested in the movements themselves. Instead, we use the movements as action measures of perception (skilled action is typically fairly stable and only changes when the system perceives a difference in the task requirements). Simply reporting the kinematics typically tells you very little about what the system is up to - after all, it either succeeded or it failed, and reporting the specific implementation is only interesting if you have some reason to expect a specific solution.
Summary
Degeneracy seems to be an organising principle that extends across multiple systems and levels in biology. The specific structural implementation is rarely that informative: it may simply be the solution evolution happened to find first, by chance. The real question of interest is, what is the function performed by that system, and how does it relate to the task? This makes a task-specific approach all the more important, and re-emphasises my point that in order to understand the brain, we have to first understand what it is it needs to be up to - the job description is critical, and changing that (as the ecological, embodied approach does) has implications for neuroscience that have yet to be fully acknowledged.
References
Gonzalez Castro, L.N., Monsen, C.B., & Smith, M.A. (2011). The binding of learning to action in motor adaptation. PLoS Computational Biology, 7(6). PMID: 21731476 Download
Prinz, A., Bucher, D., & Marder, E. (2004). Similar network activity from disparate circuit parameters. Nature Neuroscience, 7(12), 1345-1352. DOI: 10.1038/nn1352 Download
Wilson, A.D., Collins, D., & Bingham, G.P. (2005a). Perceptual coupling in rhythmic movement coordination: stable perception leads to stable action. Experimental Brain Research, 164(4), 517-528. DOI Download
Wilson, A.D., Collins, D., & Bingham, G.P. (2005b). Human movement coordination implicates relative direction as the information for relative phase. Experimental Brain Research, 165, 351-361. Download
Wolpert, D., Miall, R., & Kawato, M. (1998). Internal models in the cerebellum. Trends in Cognitive Sciences, 2(9), 338-347. DOI: 10.1016/S1364-6613(98)01221-2 Download
This is off-topic. I'm commenting on your most recent post, since that is more likely to be seen.
My intention is to introduce myself. I'm a mathematician and computer scientist with an interest in cognition. Like you, I think traditional cognitive science is way off track. And I do think Gibson was on the right track.
I disagree with the conventional view (what the brain is mainly doing is computation). Instead, I suspect that what the brain is mainly doing is measurement. I suspect that Hebbian learning is mostly calibration so as to ensure consistency across different modes of measurement.
Some specific comments on your post:
I agree with you on the search for neural correlates. It has seemed obvious to me that the brain is far too complex to reverse engineer. That's why I have been looking at understanding the theoretical principles.
On degeneracy: I see that as similar to the fact that I can measure things with a tape measure or a wooden ruler. And I can measure in centimeters or in inches. There's no reason that all brains should do it the same way.
Okay, that's enough of an introduction for now.
Nice to meet you - glad you're enjoying the posts! (I have the comments RSS feed in my feed reader, by the way, so I will see comments on old posts if you feel like commenting on specific things.)
Measurement and calibration are topics that come up in my posts here a lot; it's something the perception-action approach spends a lot of time on. And explaining that with cm vs in rulers is my favourite example, so I think we're on the same page :)
'Degeneracy' is an unfortunate term; why on earth would they choose it? All we need is some US congressman standing up and declaring, "The National Science Foundation is wasting taxpayer money promoting research by people who actually think degeneracy is a good thing!"
Really liked the post; the lobster data is incredible, and the chaos of brain structure at the lower levels deserves more attention. (Not necessarily that psychologists should be trying to solve the mystery, but they should not be making assumptions that are clearly wrong in the face of such evidence.) This relates to my now-abandoned interest in modeling rat-pup behavior; I'll try to post on that later this week and see if I can tie it together.
Degeneracy is a technical term from biology. I can't find why it was used there, though.
I ran into a problem similar to the one that Eric touches upon. Senior colleagues were uncomfortable with the choice of vocabulary to describe structural variation underlying functional plasticity (though they didn't fault the science). I went over to Leiden, undertook a few months of archive research and put together this article:
http://www.academia.edu/477056/Degeneracy_at_Multiple_Levels_of_Complexity