Monday, 10 November 2025

Lecture 16: The Computational-Representational Perspective: Preliminaries (Turvey, 2019, Lectures on Perception)

The next four Lectures are specifically focused on the modern form of the Cartesian programme, the computational-representational approach. It's important to spend some time here, both because this is the ecological approach's current opposition, and because the fact that it is yet another Cartesian programme matters and is at the heart of most of our objections to it. 

In Lecture 6 we learned about the 3 grades of sense. First, there is reflex. The second is limited awareness of secondary qualities. The third is full mental awareness of what it all means. The proposed solution (from Hobbes) for building the third grade out of the other two is the manipulation of symbols to do inference. In modern times, this is implemented as computation, carried out in a representational medium.
Cognition has to be able to handle an indefinitely large number of possibilities as it infers what is going on in the world. A good way to provide that kind of flexibility in a finite system is a language-like set of symbols and a grammar. The basic assumption of this perspective is that computation happens in a language-like representational medium: Fodor's Language of Thought. 
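To make the 'finite symbols plus a grammar' point concrete, here is a minimal toy sketch in Python (my illustration only, not anything Fodor actually proposed): a handful of primitive symbols and two composition rules generate an unbounded set of structured expressions for a symbol-manipulating process to operate over.

# A toy of the 'finite symbols + grammar' idea: a few primitive symbols and
# two composition rules yield indefinitely many structured expressions.
PREDICATES = ["RED", "ROUND", "EDIBLE"]   # primitive concept symbols
OBJECTS = ["apple", "ball", "sun"]        # primitive object symbols

def atomic(pred, obj):
    # Compose a predicate symbol and an object symbol into an atomic proposition.
    return (pred, obj)

def conjoin(p, q):
    # Compose two propositions into a more complex proposition.
    return ("AND", p, q)

p = atomic("RED", "apple")
q = atomic("ROUND", "ball")
print(conjoin(p, q))
print(conjoin(conjoin(p, q), atomic("EDIBLE", "sun")))
# Finite parts, unboundedly many composable wholes: conjunctions can be
# conjoined again without limit, which is the flexibility being claimed.

The point is only the productivity claim: a finite stock of symbols plus combinatorial rules can cover an indefinitely large space of possibilities.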

This language-like system needs to be able to form concepts, which go beyond the data and summarise it. These must be formed via a process of hypothesis generation and testing against evidence, all of which entails representation (in the 'thinking about things in the absence of that thing' way). Hypothesis generation runs into the problem of projectible predicates, specifically that it is non-trivial to identify the correct predicate among all the options it is easy to generate. Turvey points to Goodman's 'grue' example as the modern way to frame the problem; more information on that here. Fodor also argued for an innate LOT that preceded our attempts to do cognition in language-like ways, so as to be able to bootstrap our way into solving the problem: a loan of intelligence.
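To see why the projectible-predicates worry bites, here is a cartoon version of 'grue' in Python (again, just my own illustration): up to some arbitrary future cut-off, 'green' and 'grue' agree on every observation made so far, so the evidence alone cannot tell a hypothesis-testing learner which predicate to project forward.

T = 2030  # arbitrary cut-off year for the example

def green(colour, year):
    # 'All emeralds are green': colour is green regardless of when observed.
    return colour == "green"

def grue(colour, year):
    # 'All emeralds are grue': green if observed before T, blue thereafter.
    return colour == ("green" if year < T else "blue")

# Every emerald observed to date satisfies both hypotheses equally well...
observations = [("green", y) for y in range(2000, 2026)]
print(all(green(c, y) for c, y in observations))  # True
print(all(grue(c, y) for c, y in observations))   # True
# ...and it is easy to generate endlessly many more predicates like 'grue'
# that also fit the data, which is exactly the problem.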

Regardless of these problems, the computational-representational perspective assumes that the brain is the medium in which a computer is implemented that can do the necessary symbol manipulation to generate the third grade of sense. Some useful terms: the bearers of the work are the symbols, which have semantic content. The ground (of grounding problem fame) is the relation that allows there to be content, and the interpretant is what this all means for the agent/interpreter. 

The grounding problem is the problem of why these symbols have content. Content matters for representation - if it doesn't have content, it makes no sense to say that it represents anything. Turvey notes a couple of ways to express this, but I think the most common way is to say a representation must designate to be a representation (discussed here). This problem is, of course, going to be our key sticking point.

The next element of the computational perspective is Shannon information, which is about expressing data in terms of bits. Shannon information is central to the operation of computers, so early work investigated whether human activity could be expressed in bits. Some of that work led to the capacity of information processing being 'the magical number 7, +/-2', but the problem was that it wasn't 7 bits, it was 7 chunks, and the size of those chunks varied wildly with practice. The information operated on by mental 'programmes' doesn't seem to be Shannon information, even if there are such things as programmes in the mind. 
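For a sense of what counting in bits involves, and why the chunk result undercuts it, here is a small worked example (my sketch, not Turvey's): the Shannon information of one choice among N equally likely alternatives is log2(N) bits, so seven binary digits and seven familiar words are both 'seven chunks' while carrying very different amounts of Shannon information.

import math

def bits(n_alternatives):
    # Shannon information, in bits, of one choice among n equally likely alternatives.
    return math.log2(n_alternatives)

print(bits(2))      # 1 bit per binary digit
print(bits(10))     # ~3.32 bits per decimal digit
print(bits(26))     # ~4.70 bits per random letter

# Miller's finding: the short-term limit sits near 7 chunks, not 7 bits.
# Seven binary digits carry about 7 bits, but seven familiar words drawn from
# a vocabulary of ~20,000 carry roughly 100 bits, so the capacity limit does
# not track Shannon information.
print(7 * bits(2))       # ~7 bits
print(7 * bits(20000))   # ~100 bits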

Again, regardless of the issues, this work also led to a particular approach to artificial intelligence that treated intelligence as disembodied, about knowledge (vs behaviour), and language-like. This didn't really work the first few times it was tried, and does not seem to be working very well in more recent versions. 

To summarise the preliminaries: the computational-representational perspective makes a series of problematic moves right off the bat, but has often proceeded as if these problems were solved or not problems. The next Lecture will look at some of the specific consequences of chasing this approach. 
