Tuesday 7 July 2015

Brains Don't Have to be Computers (A Purple Peril)

A common response to the claim that we are not information processors is that this simply cannot be true, because it is self-evidently the case that brains transform and process information - they perform computations. Greg Hickok makes this argument a lot, and the idea is laid out clearly in this quote from his book 'The Myth of Mirror Neurons':
Once you start looking inside the brain you can’t escape the fact that it processes information. You don’t even have to look beyond a single neuron. A neuron receives input signals from thousands of other neurons, some excitatory, some inhibitory, some more vigorous than others. The output of the neuron is not a copy of its inputs. Instead its output reflects a weighted integration of its inputs. It is performing a transformation of the neural signals it receives. Neurons compute. This is information processing and it is happening in every single neuron and in every neural process whether sensory, motor, or “cognitive.”
Hickok, p. 256.
There are two claims here. First, neurons are processing information because their input is not the same as their output; they are transforming the former into the latter. Second, this process is computational; 'neurons compute'.
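To make the first claim concrete, here's a minimal sketch of the 'weighted integration' Hickok describes (my illustration, not his code; the input size, weights and threshold nonlinearity are all arbitrary choices):

```python
import numpy as np

# A toy version of Hickok's description: a neuron receives signals from
# thousands of other neurons, some excitatory (positive weights), some
# inhibitory (negative weights), and its output reflects a weighted
# integration of those inputs rather than a copy of them.
rng = np.random.default_rng(0)

inputs = rng.normal(size=1000)     # signals from 1000 upstream neurons
weights = rng.normal(size=1000)    # mixed excitatory/inhibitory weights

drive = np.dot(weights, inputs)    # weighted integration of the inputs
output = max(0.0, drive)           # simple threshold nonlinearity; the
                                   # output is not a copy of the inputs

print(drive, output)
```

The question at issue below is not whether this kind of description fits neurons (it clearly can be made to fit), but whether it identifies the mechanism.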

This is a widely held view; the psychologist Gary Marcus even wrote about it in the New York Times in a piece called 'Face It, Your Brain Is a Computer'. In response, Vaughan Bell at Mind Hacks posted a nicely balanced piece on this op-ed and this issue called 'Computation is a lens'. He sums up the question nicely by asking 'Is the brain a computer or is computation just a convenient way of describing its function?'. The answer, I propose here, is that computation is a fantastically powerful description of the activity of the brain that may or may not be (and probably isn't) the actual mechanism by which the brain does whatever it does. This is fine because, contra Hickok, not every process that sits in between an input and a different output has to be a computational, information-processing one.


The Polar Planimeter
I'd like to illustrate this with the example of the polar planimeter. My favourite perceptual psychophysicist, Sverker Runeson, wrote a paper in 1977 called 'On the possibility of "smart" perceptual mechanisms'. In it, he described the idea of a smart device and used the planimeter as an example. I blogged about "smartness" and this device in one of my first posts on this blog, because it's one of the standard examples in the field of how to get behaviour without computation; the other, of course, is the Watt steam governor.

The polar planimeter is a device for the direct measurement of the area of irregularly shaped surfaces. It consists of two arms joined at a pivot, with a measuring wheel mounted on the tracer arm. You anchor the 'pole' arm somewhere outside the shape and trace around the shape's boundary with the 'tracer' arm. The wheel rolls as you trace, and when you finish tracing you simply read the area off the wheel's calibrated dial.
Figure 1. The polar planimeter; pictures with annotations borrowed from this description of their operation.
Planimeters have been in use for a long time; the first definite record of one dates to 1836, but the idea had been around for at least 20 years by then (see this great online book for more). The algorithm that describes why their activity produces area is based on Green's Theorem (although the device predates the theorem's 1851 proof).
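For reference, here is the relevant identity (the standard area form of Green's theorem; for a real instrument, the readout also involves a proportionality constant set by the arm geometry):

```latex
% Green's theorem turns the area of a region R (a double integral) into a
% line integral around its boundary C -- the quantity the planimeter's
% wheel accumulates mechanically as the tracer follows the boundary.
A = \iint_R dx\,dy = \oint_C x\,dy = \frac{1}{2}\oint_C \left( x\,dy - y\,dx \right)
```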

Planimeters work because they are built in a particular way. They are dynamical systems with a particular composition, organisation and calibration, and it's the time-extended activity of this system, in the context of an appropriate task, that produces functional behaviour (the measurement of area) in a smart fashion. Runeson liked this example because area is the kind of thing people assume you have to get at by measuring something simpler (like some lengths) and then transforming those measurements via a computation (multiplication) into an area. To a planimeter, area is the simple unit, and Runeson used this idea to show that the direct measurement of Gibsonian higher-order invariants is possible, if the measurement device is right.
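To see what the computational description of the device looks like, here's a sketch (my illustration, nothing from Runeson's paper) that numerically 'traces' a closed curve and accumulates the Green's theorem line integral step by step. This description correctly predicts the planimeter's output, but the instrument arrives at the same number by rolling a wheel, not by executing these arithmetic steps:

```python
import numpy as np

def traced_area(xs, ys):
    """Accumulate (1/2) * sum(x*dy - y*dx) around a closed traced curve."""
    area = 0.0
    n = len(xs)
    for i in range(n):
        j = (i + 1) % n              # next point along the traced boundary
        area += xs[i] * (ys[j] - ys[i]) - ys[i] * (xs[j] - xs[i])
    return abs(area) / 2.0

# "Trace" an ellipse with semi-axes a=3 and b=2; its true area is pi*a*b.
t = np.linspace(0.0, 2.0 * np.pi, 10_000, endpoint=False)
xs, ys = 3.0 * np.cos(t), 2.0 * np.sin(t)

print(traced_area(xs, ys))   # ~18.8496
print(np.pi * 3.0 * 2.0)     # 18.8496, i.e. pi * a * b
```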


Here's the other interesting thing, more relevant to the current discussion. Polar planimeters take an input (the activity of the tracing arm) and turn it into a different output (a measurement of area). According to Hickok, that's computation and information processing (and Greg has told me he thinks the planimeter is computing). Except that it's not. Nothing in the planimeter is implementing any of the steps required to solve this problem computationally. Worse, if you describe its activity computationally, you will not have accurately described how it produces area from the act of tracing. You will not have the right mechanism, and you will therefore ask the wrong questions as you do science on the planimeter (e.g. you'll go hunting for the 'length detectors' and the 'multiplication module', or whatever equivalents Green's Theorem demands).


The consequences for science

Neuroscientists point to neural activity observed during a task and say that it must be implementing some computational step in the process, because that step is required. But that step is only required if the system is solving the problem computationally, so this reasoning is circular (I say the brain is computing and point to this activity as computation, but I only interpret that activity that way because I say the brain is computing...). Computation is not the only option, and behavioural data (on prospective control, on collective behaviours like swarms and herds, and more) suggests that computation is not what the system is actually doing. If perception is instead smart, like the planimeter, but we begin with a computational description, we'll go looking for the wrong things and interpret, say, neural activity observed during the task incorrectly ('that activity must be implementing the multiplication step').

Summary

Just because your output doesn't match your input doesn't mean you were computing in between. The activity of dynamical systems can achieve this basic goal without implementing any computations; weather systems do not compute how to respond to a changing climate, polar planimeters do not compute area and, to answer Vaughan Bell's question, stones do not compute their projectile motion in order to fall appropriately. This is, in fact, the reason why ecological psychologists wholeheartedly embraced dynamical systems theory and led the charge to bring it into psychology as a better toolbox than computation. This is also why we don't have to be information processors.

Further reading
Smart perceptual mechanisms 
What Else Could It Be? The Case of the Centrifugal Governor 

Gary Marcus: 'Face It, Your Brain Is a Computer'
Vaughan Bell: 'Computation is a lens'

Runeson, S. (1977). On the possibility of "smart" perceptual mechanisms. Scandinavian Journal of Psychology, 18(1), 172-179.

21 comments:

  1. You are conflating computation and digital computation. There is a branch of computability theory that deals with analog computation, and some of these people would say that planimeters are analog computers. Do you think they are confused to think so?

    Of course, input-output mapping is not enough. This is merely weak equivalence, and we need to know the structure of the process, too. See my paper for more detail:

    https://www.academia.edu/1825490/Is_the_mind_a_Turing_machine_How_could_we_tell

    For more details and the complete story, see my 2013 book:

    https://mitpress.mit.edu/books/explaining-computational-mind

    Replies
    1. Ah, and one more thing. If your description of a process as computational is true, then the process is computational. Otherwise, such a description is not to be taken literally, but it's not a *mere* description. If it's a useful false description, it's probably an idealization. But is that really the case? I don't think so, as the requirements used to talk of computation in the case of physical computers are exactly the same in computational neuroscience as in the field of biological computation in DNA computing.

    2. I feel like "analog computation" is an oxymoron. A system can respond to some state of affairs outside of itself, and if it does so by analogy, is that computation? For example, in an old-fashioned speedometer, a speed sensor responds by sending a voltage. The voltage is proportional to the speed of the car. The voltage drives a needle on a display. While this "output" (the display) is a transformation of the "input" (the speed of the car), where is the computation? What has been computed? If the brain responds to the world in a similar way - as something that measures certain properties in the world by analogy - of what need is there to invoke computation? Some might say that a planimeter is an analog computer, but they would be wrong. It is a meter. It measures, it does not compute.

    3. Well, if computability theorists call this 'computation', why is this not computation? It's a whole field of enquiry, and lots of machines are analog computers. I don't like gerrymandering the terminology; let's use the terms as the general public does. And there are a lot of analog computers out there: https://en.wikipedia.org/wiki/Analog_computer

    4. Analog computing does seem like an oxymoron. I remember learning about the Antikythera mechanism and being really surprised that people called it a computer.

      I would hazard a guess that computability theorists call this computing for reasons of convenience and history, rather than anything more fundamental.

    5. Well, there is one important reason: there are models of computation that accommodate both analog and digital computing, notably Abstract State Machines. Mathematically, this difference does not change much. See this paper for example: http://link.springer.com/chapter/10.1007/978-3-642-29952-0_49

      Basically, this work is advanced by one of the leading figures in computability, Nachum Dershowitz, and is based on Yuri Gurevich's very important work on evolving algebras (=abstract state machines).

  2. Thanks for the post, Andrew. How would you rigorously define a ‘computation’? Could we not describe a computer in a similar way to the planimeter? Say I ask the computer to add 1+1. To do this the computer has no idea what a number is. It simply changes state in a way that is governed by physical laws. I use the change in state to represent addition. But this is an abstract concept in my head, not one built into the computer. Nothing was ‘added’ in the hardware. If you described the computer as ‘adding’ something you would not have a good description of what the computer actually did. Different processors implement the same ‘computation’ in physically different ways. A description of one would not be a good description of the other.
    However, it is useful to say the computer added two numbers, even though it didn’t, because the change in state of the computer corresponded to the abstract representation of something in my head. I think your description of a planimeter works very well for my computer: a dynamical system with a particular composition, organisation and calibration, where it’s the time-extended activity of this system in the context of an appropriate task that produces functional behaviour in a smart fashion.
    But then my computer isn’t performing computations? But by definition it is…. So maybe our problem lies in defining what a computation is?

    Replies
    1. I think you might be right about the problem of defining computation. But is there any value in defining it so loosely that any type of transformation is called computation? The term seems to lose any meaning at all, and if so, then it has no usefulness as a description of what the brain does. At best, computation is only a metaphor for what the brain does. It has served some utility as a metaphor, primarily in driving a lot of cybernetic-style cognitive psychology up through the 1980s, but I get the feeling that as a metaphor it has outlived its usefulness and we need a new metaphor.

    2. Information, feedback, dynamic systems are all very general terms that apply to lots of physical systems. But this does not make them void nor metaphorical. There's nothing metaphorical about neural computation in Izhikevich's models of neurons, for example.

    3. Computers do just change state; but they change state so as to implement an algorithm. The planimeter changes state, but not so as to implement any algorithm. Its behaviour can be described with an algorithm, but that's not what it was designed to do.

    4. The design is important, but look at Dershowitz and Gurevich on algorithms in constructive proofs in geometry. These are step-by-step procedures, and some people argue that they operate on incomputable reals.

    5. And let me be clear about how I think the design can be used to decide such questions. Imagine that someone says that the vacuum cleaner is supposed to compute random numbers by rotating the particles of dust. Now, of course, the vacuum cleaner doesn't have the function to do so, and rotating particles of dust inside the cleaner is not essential to the capacity of the cleaner to clean, though it is one of the ways to build vacuum cleaners. Moreover, there is much more in the cleaner, such as pipes, brushes and suction nozzles, which is not explainable with the hypothesis that what it's doing is computing random numbers. As for the planimeter, are there any parts that do not contribute to getting the value that corresponds to the area? If I were to compare it to other measuring devices, such as a standard ethanol thermometer, it would be clear that the thermometer computes, at best, a very trivial computation, something like an identity transformation. These computations are trivial and not so informative about the design of these devices, so adding a computational gloss to the mechanistic explanation of such devices may be counterproductive: no new explanatory or predictive power. But for neural systems the situation is different; they compute very complex transformations, and have immense computational powers (at least in, say, Eliasmith's account of neural ensembles).

  3. "[I]f you describe its activity computationally, you will not have accurately described how it produces area from the act of tracing"
    "Accurately"? Or do you mean "completely"? What you might be getting at is that the computation of a device like a planimeter, or alternatively a physiological component like a brain region, is what we might otherwise call its behavior; i.e., the relationships between a system's inputs and outputs. This is a different "level of understanding" (to use Marr's terminology) as the algorithm by which that behavior is accomplished, or the material by which the algorithm is implemented. These levels, including the physics of the material itself, can in turn be viewed according to behavior, algorithm, and implementation. Your point about what level is most useful for understanding a system will depend on what exactly you mean by understanding, and what you want to do with the knowledge (treat illness? create artificial devices?). These topics have been taken up by far smarter people than I, at least a few of whom are cited in this article: http://www.ncbi.nlm.nih.gov/pubmed/25256163

    Replies
    1. I mean accurately. Solving a problem computationally and solving it dynamically are two different things, and they involve different components doing different things. If your system is dynamical but your description is computational, all your science on the system will ask the wrong questions, e.g. where in the planimeter is the area calculated?

    2. Well, solving the problem computationally is physically the same thing as doing it dynamically in all physical computers. Computers are physical systems that evolve in time, and some of them are quite strange (analog/digital, for example). For a deeper look at the connection between dynamics and computation, see here: https://global.oup.com/academic/product/computation-dynamics-and-cognition-9780195090093?cc=pl&lang=en&

      Note that computation need not be symbolic: analog neural networks (sometimes realized on neuromorphic chips) are not symbolic but they're part and parcel of current computational neuroscience.

      Again, the planimeter is a trivial example: the area is obviously the current state of the wheel, as you stated. But it's a trivially easy computation.

      I would like to see an argument for why the difficult non-linear transformations posited by Chris Eliasmith are not computational. I mean the ones involved in SPAUN: http://www.sciencemag.org/cgi/doi/10.1126/science.1225266

    3. Well, solving the problem computationally is physically the same thing as doing it dynamically in all physical computers
      In computers, sure. But that doesn't mean brains are computers or that their dynamical activity is computational.

      Again, the planimeter is a trivial example: the area is obviously the current state of the wheel, as you stated. But it's a trivially easy computation.
      It's actually not; the maths describing this is quite complicated. Planimeters measure the area of complex shapes, where length x width doesn't mean anything.

    4. I would like to see an argument that why difficult non-linear transformations as posited by Chris Eliasmith are not computational. I mean the ones involved in SPAUN: http://www.sciencemag.org/cgi/doi/10.1126/science.1225266
      SPAUN works on images for its "vision", so it's already off on the wrong foot.

      Also I thought this was revealing:
      A central challenge for cognitive and systems neuroscience is to relate the incredibly complex behavior of animals to the equally complex activity of their brains.
      Brains are only as complex as behaviour if they are entirely responsible for behaviour. But they aren't, and anyway complex behaviour can emerge from simple systems. It sounds like SPAUN is just trying to do too much of the work.

  4. This recent book on what the hell computation is might be of interest to people: http://philosophyofbrains.com/2015/08/10/is-computation-abstract-or-concrete.aspx

  5. This may be a fairly naive suggestion, but it seems plausible that much of the brain evolved to work in the non-representational way REC posits, while more recent neural innovations allow more algorithmic, representational processes.

    Replies
    1. This comes up a lot. I've started to think that actually, the reverse should be true; if evolution spends time building one kind of system, it's more likely to simply recruit that system over and over rather than build a new one from scratch.

  6. Of course brains do not have to be computers. Brains are too advanced to be something as simple as computers. A computer is just a big calculator; the human brain is much more than that.
