Wednesday, 15 April 2020

Lecture 1: What Kind of Systems Do We Study? (Turvey, 2019, Lectures on Perception)

The first thing to do is to characterise what it is we are studying when we are studying perception. Turvey states we are studying epistemic, intentional systems and spends this chapter explaining each term. He does the most work on 'system'; 'intentional' and 'epistemic' are primarily just defined and noted as features of the system we are going to have to engage with.

As usual, I will try to efficiently review the key points and then add some reflections on what the chapter made me think about. 

System

A system is a set of components arranged in some relation to one another, where the relations have implications for the activity of the components. A heap of something is not a system; there are pieces, and they are arranged relative to one another, but the properties of those pieces don't depend on the relations. A system entails that at least some of the components are affected by at least some of the other components.

This is a key figure to keep in mind throughout this book; it's the Composition, Environment, Structure (CES) model that describes the minimum set of elements required to specify a system.

The system being studied is the part in the middle. It is made of components (the various checkered shapes), and at least some of these internal degrees of freedom are connected to (stand in some relation to) some other internal degrees of freedom (the thin arrows). The grey surround is the environment of the system; these are external degrees of freedom that can affect or be affected by the system. Again, at least some of the internal components stand in some relation to at least some of the external degrees of freedom (the thick arrows).

Every system defines (picks out) an environment; no system, no environment, and environments are not systems (at least when considered from the point of view of the system). This set-up also immediately implies some kind of boundary - something that creates an 'inside' and an 'outside' of the system.
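To make the CES picture concrete, here's a minimal sketch in Python of the heap/system distinction. The names (CESSystem, internal_relations, and so on) and the toy examples are mine, not Turvey's; this is just one way to cash out 'components, relations, environment' as a data structure.

    # Illustrative sketch of the CES (Composition, Environment, Structure) model.
    # All names and examples are mine, not Turvey's.
    from dataclasses import dataclass

    @dataclass
    class CESSystem:
        components: set[str]                      # composition: internal degrees of freedom
        internal_relations: set[tuple[str, str]]  # structure: the 'thin arrows' between components
        environment: set[str]                     # external degrees of freedom the system picks out
        boundary_relations: set[tuple[str, str]]  # 'thick arrows' crossing the inside/outside boundary

        def is_system(self) -> bool:
            # A heap has pieces arranged relative to one another, but no relations
            # with implications for the pieces' activity; a system needs at least one.
            return len(self.internal_relations) > 0

    # A heap: two grains of sand side by side. No internal relations, so not a system.
    heap = CESSystem({"grain_1", "grain_2"}, set(), set(), set())
    assert not heap.is_system()

    # A toy organism: components constrain one another, and some stand in
    # relations to external degrees of freedom (its environment).
    organism = CESSystem(
        components={"nervous_system", "body"},
        internal_relations={("nervous_system", "body"), ("body", "nervous_system")},
        environment={"light", "surfaces"},
        boundary_relations={("light", "body"), ("body", "surfaces")},
    )
    assert organism.is_system()

Note that the environment here only exists relative to the system that picks it out, which matches the point above: no system, no environment.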

Our job as scientists of perception is to correctly specify the system we are studying. We face all kinds of problems; we might only be able to observe part of the system, and different observers (scientists) might have different views of the system. This means what properties we think the system needs to do what it does might vary, or be flat out wrong. There are two basic consequences to worry about. First, thinking that something the system does is something that the sub-system we are observing does (e.g. thinking cognition is an achievement of the nervous system, rather than the organism-in-context). Second, analysing the system into parts and processes according to the wrong procedure (e.g. disassembling a bike with a wrench and disassembling it with an angle grinder both produce parts, but only the former produces ontologically sensible pieces).

Intentional

Perception and action systems are intentional; specifically, they exhibit aboutness. We therefore need to have a correct understanding of what those systems are about - what are the intentional objects of perception and action, and how does the system get to be about them?

Epistemic

Perception and action systems have a very particular intentionality; they know about things, specifically their environments. The challenge we face is to identify what kind of material system can implement this kind of intentionality. The two most recent hypotheses are systems that compute over discrete symbols, and network systems engaged in parallel distributed processing of graded signals in a specific dynamical process. 

Turvey then just plants a flag for the future and says neither of these works, and that we will need to move towards complexity science for our answer.

Reflections 

In one way, Turvey has pitched his tent in a very specific place without too much justification. Specifically, the claim is that perception is an epistemic, intentional system - each word brings very particular implications along. However, it's actually a completely standard move! The original developers of the computational approach to cognition (e.g. Newell) understood cognition to be exactly this kind of thing too. One of the things that became crystal clear to me writing the Ecological Representations paper is that everyone, ecological and computational alike, is engaged in a good faith effort to answer the same question.

Where the computational approach went wrong, in my view, is that it didn't take the hints the data were giving it that the computational characterisation of the cognitive system had cut in the wrong places, or with the wrong tools. This is the second 'ontological' fallacy from above. There is an implication for our methods and meta-methods here: keep seeking laws. Behaviours that seem like magical or astonishing properties of the system you have described are actually going to be perfectly sensible properties of the actual, real system you want to be studying. If your account of the system makes the behaviour of the system incredible, query your account of the system before you give it the necessary magical powers.

I want to note here that this accords nicely with the mechanistic research programme. Bechtel, Craver and others all note that the process of decomposition and localisation of a mechanism into parts must always be checked and queried. If you break the system down and don't find parts and processes that, when brought together, give you the behaviour of the system, you haven't carved the mechanism at its joints. Turvey is going to reject what he calls the mechanistic approach, but it's important to note here that he is actually going to reject a mechanical approach, and that the modern notion of mechanism is not at odds with the ecological approach (quite the contrary!).

The figure above is going to be so crucial. We are studying a system, and the behaviours we are trying to explain are achievements of that system, and not just any of the sub-systems. We have to keep checking to see whether the system we think is responsible for a behaviour is the system, or merely a sub-system, and the hint will be (as above) whether the system we are invoking can do the work. Nervous systems are systems with bodies as environments; but is behaviour an achievement of the nervous system, or the nervous-system-body system? And that system has an environment too - how far out do we need to go before we have characterised the system correctly? This connects to arguments about whether cognition is extended (e.g. Clark) or bounded (e.g. Adams, Aizawa). 

One chapter in and already I'm thinking about basically every problem in the philosophy of mind! This is going to be a long book :)
