Bechtel & Abrahamsen (2010) challenged cognitive science to make the move to mechanisms. They laid out the form of the necessary research programme (empirically decompose and localise the actual mechanism, then model those real components), and they described the benefits mechanistic models would bring. The main benefit is a move from mere description to proper explanation of the mechanisms we study, and this certainly seems like something we would want.
But can we do the work needed to get mechanistic models? Can we decompose cognitive systems into sensible components and model the result? Chemero & Silberstein (2008) and Silberstein & Chemero (2013) argue we can't, because cognitive systems are nonlinear and therefore non-decomposable; it makes no sense to break them down into parts because the behaviour of the system is more than the sum of those parts. They do argue, though, that dynamical models are appropriate and count as proper explanations. This post will review but ultimately reject this argument.
Decomposition and Localisation
[Figure: 'This is not how you science']
Bechtel & Richardson's (2010) heuristics are decomposition (breaking a system's activity down into component operations) and localisation (mapping those operations onto parts of the system). These heuristics fail if there is nothing to decompose (as in the case of connectionist networks) or if the components 'systematically and continuously affect one another in a nonlinear fashion' (Silberstein & Chemero, 2013, pg 961-2, referring to Bechtel & Richardson, 2010). If the behaviour of the system is properly considered 'emergent', then trying to explain it via decomposition and localisation makes no sense.
Cognitive Systems Are Nonlinear and Emergent
Chemero & Silberstein then argue that cognitive systems are non-decomposable because they are typically the kind of nonlinear system that produces genuinely emergent behaviour. In nervous systems, the particular problem is degeneracy - the fact that wildly different structures can produce the same function. Decomposing these systems into parts is pointless, they argue, because those parts will vary across individuals while the functionality is preserved.
They then discuss work from systems neuroscience that models systems at the system level, for example as networks with particular topological features (as opposed to structural features; Sporns, 2011). Two networks can share topological characteristics (e.g. the way information flows through them) while having little if any structural overlap. Descriptions of networks at this level successfully capture invariant features of the system while making no reference to any specific parts or operations, and Silberstein & Chemero argue that these models explain because they allow us to 'derive, predict and discover a number of important things' (pg 964).
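As a toy illustration of this point (the two networks and the degree-sequence metric here are invented for the example; real network neuroscience uses much richer topological measures), two networks built from entirely different nodes can nonetheless share a topological signature:

```python
from collections import Counter

def degree_sequence(edges):
    """Sorted list of node degrees - a simple topological signature
    that ignores the identity of the nodes themselves."""
    deg = Counter()
    for a, b in edges:
        deg[a] += 1
        deg[b] += 1
    return sorted(deg.values(), reverse=True)

# Two toy 'nervous systems' with no nodes in common (different structure)...
net_a = [("n1", "n2"), ("n2", "n3"), ("n3", "n1"), ("n3", "n4")]
net_b = [("m7", "m8"), ("m8", "m9"), ("m9", "m7"), ("m9", "m5")]

# ...but an identical topological signature (same wiring pattern).
print(degree_sequence(net_a))  # [3, 2, 2, 1]
print(degree_sequence(net_b))  # [3, 2, 2, 1]
```

The point of the sketch is just that a description at this level is true of both networks at once, without mentioning any specific part of either.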
I think the fundamental mistake Chemero & Silberstein make is that they are trying to find mechanisms at the wrong level, and then abandoning ship because they don't see any mechanisms there. Going in at the level of neurons and finding no sensible parts should have been a hint that they were carving in the wrong place - in effect, they aren't following Bechtel & Richardson's (2010) decomposition and localisation heuristic properly.
All explanations will end up being multilevel (Craver, 2007), but you still need to pick a single correct level to ground your mechanistic model or else you will never get going. In the second post, I discussed Bechtel's analogy about understanding fermentation, and how neither chemistry nor physiology was the right place to begin - biochemistry was, and it had to be invented before anyone could use it.
This error has consequences. Chemero often points to the Haken-Kelso-Bunz (HKB) model of coordinated rhythmic movement as an exemplar of the kind of dynamical model he has in mind. The HKB model simply captures some dynamical regularities in the behaviour. This model is fatally flawed - because it only describes, and makes no reference to a mechanism, it led to some importantly incorrect predictions about learning. This failure is a feature of such models, not a bug: a model that contains no real components can only ever be a description of a capacity, not an explanation of it.
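To see what 'only describes' means here, it helps to look at the standard HKB equation of motion (stated for illustration). The model governs only the relative phase $\phi$ between the two oscillating limbs,

$$\dot{\phi} = -a\sin\phi - 2b\sin 2\phi$$

where $a$ and $b$ are fitted constants, and the stable coordination states fall out at $\phi = 0°$ and $\phi = 180°$. Nothing in the equation corresponds to a component of the actual perception-action system; it is a compact description of the behaviour's dynamics, which is exactly the point at issue.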
In the next post, I will discuss our proposed solution to the mechanist's challenge to cognitive science. We begin by accepting that a) we want explanatory models, not just descriptive ones and that b) mechanistic models are the only way to get explanations. This means we need to identify the right level at which to ground a mechanistic research programme, and we need to sketch that programme out to serve as an exemplar for the field. That level is ecological information, and we will illustrate how this works using Bingham's ecological model of coordinated rhythmic movement.
Bechtel, W., & Abrahamsen, A. (2010). Dynamic mechanistic explanation: Computational modeling of circadian rhythms as an exemplar for cognitive science. Studies in History and Philosophy of Science Part A, 41(3), 321-333.
Bechtel, W., & Richardson, R. C. (2010). Discovering complexity: Decomposition and localization as strategies in scientific research. MIT Press.
Chemero, A., & Silberstein, M. (2008). After the philosophy of mind: Replacing scholasticism with science. Philosophy of Science, 75(1), 1-27.
Craver, C. F. (2007). Explaining the brain. Oxford University Press.
Silberstein, M., & Chemero, A. (2013). Constraints on localization and decomposition as explanatory strategies in the biological sciences. Philosophy of Science, 80(5), 958-970.
Sporns, O. (2011). Networks of the brain. MIT Press.