Monday, 17 May 2021

Mechanistic Models of the Brain

I'm getting increasingly interested in neuroscience, and how to make it ecological. I also think that the ecological approach is capable of supporting mechanistic explanatory models of behaviour and is the correct scale* at which to ground these models. This means that my current plan is to find a way to add neuroscience as a lower scale part of a model grounded at the scale of the organism-environment system. 

There's a lot going on in that sentence, though, so I want to unpack it a bit to lay out the rules, and the things I currently don't know.

(*NB: I am using scale rather than level throughout because the concept of a level is complicated, and I am currently convinced that scale is the better term. The argument continues, however.)

Mechanistic Models

There are several kinds of model, and these explain to varying degrees; I've mapped all this out before. A mechanistic model is one that contains terms that explicitly represent real parts and processes of the system being modelled. Typical cognitive models don't have these; they trade in functional descriptions of system capacities, like 'working memory' or 'the central executive'. Sabrina and I have argued that the ecological approach can do mechanistic models, because we can model real parts and processes such as affordances, limb dynamics, and information variables - things that really exist. Because mechanistic models are 'how-actually' explanations rather than merely 'how-possibly' descriptions, we argued that ecological psychology should do them, because it can.

This puts us at odds with the rest of ecological psychology, but only because of a couple of errors we haven't managed to correct yet. First, we have a tradition of explicitly rejecting mechanistic explanations (see Turvey, 2019, Lecture 5), but this is actually a rejection of the Cartesian mechanistic ontology; the neo-mechanist literature is about epistemology, not ontology, and has no commitment to Descartes. Second, there is the Chemero and Silberstein argument that because we study complex nonlinear dynamical systems, we cannot successfully decompose the system into parts we can readily localise; they suggest we instead treat the dynamical models themselves as explanatory. There is now quite a literature on how and why dynamical systems models are not explanatory, and in our paper Sabrina and I provided the example of Bingham's coordination research and model as a proof-of-concept that you can, in fact, decompose and localise parts of nonlinear dynamical systems. So the net result for me right now is that we can be mechanistic in our modelling, and, because of all the benefits, we should be mechanistic in our modelling.
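
To make that proof-of-concept a bit more concrete, here is a toy simulation of coordinated rhythmic movement. To be clear, this is not Bingham's published model; the oscillators, the coupling law, and the parameter values are all illustrative assumptions of mine. The point is just that every term in a nonlinear dynamical model can be read as a hypothesis about a real part (here, a limb modelled as a self-sustaining oscillator) or a real process (here, coupling via the perceived motion of the other limb), which is exactly what decomposition and localisation need.

```python
import numpy as np

# Toy coordination model; NOT Bingham's published model. The parts and the
# coupling law are illustrative assumptions. Each limb is a self-sustaining
# (van der Pol style) oscillator - a hypothesised real part - and the coupling
# term stands in for the perceptual process linking each limb to the perceived
# velocity of the other.

MU, K, C = 1.0, 1.0, 0.4   # illustrative parameters, not fitted to any data


def limb_acceleration(x, v, v_other):
    """One limb's acceleration: intrinsic limit-cycle dynamics plus perceptual coupling."""
    intrinsic = MU * (1.0 - x ** 2) * v - K * x   # the limb as an oscillator (real part)
    coupling = C * (v_other - v)                  # driving from the perceived velocity of the other limb (real process)
    return intrinsic + coupling


def simulate(duration=30.0, dt=0.001):
    steps = int(duration / dt)
    state = np.array([1.0, 0.0, -0.2, 0.5])       # x1, v1, x2, v2: start out of phase
    trajectory = np.zeros((steps, 2))
    for i in range(steps):
        x1, v1, x2, v2 = state
        a1 = limb_acceleration(x1, v1, v2)
        a2 = limb_acceleration(x2, v2, v1)
        state = state + dt * np.array([v1, a1, v2, a2])   # simple Euler integration
        trajectory[i] = (state[0], state[2])
    return trajectory


if __name__ == "__main__":
    traj = simulate()
    # The two limbs settle into a stable coordination mode; asking which of
    # these terms pick out real, localisable parts of the real system is the
    # decomposition question.
    print(np.corrcoef(traj[-5000:, 0], traj[-5000:, 1])[0, 1])
```

Swapping out the coupling term, or the form of the oscillators, is then how you test different hypotheses about which parts and processes are real.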

Mechanistic Research

Mechanistic models need real parts and processes to model. This means you need to do some research to figure out what the real parts and processes of the system to be modelled are: take the system, decompose it into hypothesised parts according to some scheme, and test whether these are, in fact, parts of the system.

System decomposition is a heuristic process, and the only real rule is 'keep trying to decompose the system into parts until you find a scheme that yields real parts'. I can decompose an alarm clock with a hammer or a screwdriver. The former will yield parts, but not parts that account for the functioning of the intact clock. The latter is more likely to yield parts that function in ways that do feature in the intact clock. This is the empirical measure of which decomposition scheme is best, given the phenomena to be accounted for. I call the scale picked out by this scheme the grounding scale for the model.

The Grounding Scale

As I noted, there are many ways to decompose a system into parts, but only one way that produces parts relevant to the phenomena to be explained. Bechtel has a famous example about explaining fermentation. Initially, people tried to explain the phenomenon of fermentation in terms of the atoms and molecules involved; later, they tried at a higher scale, invoking fermentation-like processes. Both of these attempts failed to find real parts and processes relevant to fermentation. So someone invented biochemistry, which trades in large molecules with properties that emerge at that scale, and used those to successfully explain fermentation.

This story has two valuable lessons. First, while the biochemical parts are indeed made of the smaller atoms and molecules, the biochemical scale is where the fermentation-relevant parts and processes live. Second, the lower scale of atoms was structuring, but also being structured by, the demands of the biochemical scale: the lower-scale parts were doing their work with reference to the demands of the higher scale, and that higher scale explains why the lower-scale parts were doing what they were doing. This is what makes the biochemical scale the grounding scale, and it's also why finding the grounding scale is so important. Mechanistic models are phenomena-specific, and they are not intrinsically reductionist; in fact, they are typically multi-scale, but with one scale (rarely if ever the lowest) privileged as the grounding scale.

The Grounding Scale for Mechanistic Models of Behaviour

The ecological approach comes with a very specific hypothesis about this, which is as follows.

The world is a set of material things with dynamical properties. One subset of material things with dynamical properties is a system we call an organism. In order to continue as an identifiable subset, that organism has to couple itself to other subsets of material things with dynamical properties in its environment. That coupling is implemented via information and the perception-action loop it enables. Behaviour is what happens when a given perception-action loop is up and running.

The hypothesis is therefore that the unit of behaviour is the organism-environment system, making this the grounding scale; everything the organism does with its dynamical properties, operating at various scales, is organised with respect to the demands of this scale. Information, the coupling term, is what picks this out as the grounding scale.
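
Here is a minimal sketch of what grounding a model at this scale can look like. The task, the numbers, and the control law are illustrative assumptions on my part (a toy braking problem, in the spirit of tau-based braking accounts), not anyone's published model. The point is that the behaviour - stopping at an obstacle - only exists at the scale of the coupled system, because the action is driven by an information variable that is defined over the organism and the environment together.

```python
# Toy perception-action loop; the control law and parameter values are
# illustrative assumptions, not a published model.

def simulate_braking(gap=50.0, speed=15.0, dt=0.01, max_brake=8.0):
    """Close a perception-action loop: brake on the basis of time-to-contact."""
    while speed > 0.05 and gap > 0.0:
        tau = gap / speed                             # information variable: time-to-contact
        brake = min(speed / (2.0 * tau), max_brake)   # roughly, hold tau-dot near -0.5 to stop at the obstacle
        speed = max(speed - brake * dt, 0.0)          # action changes the organism's state...
        gap = max(gap - speed * dt, 0.0)              # ...which changes the organism-environment relation,
                                                      # which changes the information, closing the loop
    return gap


if __name__ == "__main__":
    print(f"final gap to the obstacle: {simulate_braking():.3f} m")
```

Notice that neither the 'organism' variables nor the 'environment' variables produce the behaviour on their own; stopping is a property of the loop, which is the sense in which the organism-environment system is the grounding scale.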

Brains are a subset of the material things that make up the organism and have characteristic dynamical properties. But how they organise is constrained by the organism-environment system grounding scale, so our title,

Mechanistic Models of the Brain

is an error; what I am actually pursuing is 

Multi-Scale Mechanistic Models of Behaviour, Grounded at the Organism-Environment Scale But Now Also Including the Neural Scale

That was too long to fit at the top, however, and we had to do some work to get here :)

This post has some big implications:
  1. The ecological approach provides a disciplined guide to identifying the grounding scale, which is how I can say that
  2. there is no such thing as a mechanistic model of just the brain. There can only be mechanistic models of the neural scale of the mechanism implementing an organism-environment system. We are not in the reductionist business.
  3. This is another way of articulating and defending the claim that neuroscience needs behaviour; it's not just a good idea, it's absolutely required.
  4. Mechanistic models are task-specific, which is why a grounding scale is required - we need a way to articulate what a task is!
But this post also raises some important questions, in what I hope is a useful way:
  1. Given that they must come together in service of a particular perception-action loop, what sorts of real parts and processes can a nervous system form? Is a network a real part? Are the elements within a given Markov blanket (in the free-energy principle model) real parts? TALoNs? These are all candidates, but we can now test them with reference to a model of a behaviour at the grounding scale.
  2. The nervous system is the material part of the organism with dynamical properties that can operate at very fast (millisecond) timescales, as well as at other, slower timescales. Specifically, the nervous system can organise, and then also reorganise, itself into different parts and processes very quickly (cf. the way your hand can organise into a gripping system but then reorganise into a punching system, for example). This means we need to accept transient, softly assembled parts and processes as real (but it also, again, demonstrates the need for the grounding scale to understand why the nervous system is doing what it's doing).
I'll start thinking about real neural parts in upcoming posts. 

1 comment:

  1. I have been pondering for a while whether a model that operates with "lego blocks" that are larger than neurons but more concrete than you might find in a typical functional model would be considered mechanistic. Let's say we are modelling some action/perception loop. If I propose dynamical interactions of mechanisms as a driving force, but I can't point to individual neural circuits for each mechanism - would that disqualify my model from the mechanistic club? Is it enough that the behavior of the parts could be produced by neurons? Or is it that I need a more general proof that each concept I use has a neural equivalent that has been observed?
