Thursday, 15 January 2026

Lecture 17: Pattern Recognition and Representation Bearers (Turvey, 2019, Lectures on Perception)

Last Lecture, Turvey reviewed the basics of the computational approach and highlighted again how it is just the latest iteration of the Cartesian programme. In this Lecture, he explores the specific topic of pattern recognition, which has been a major topic for the computational approach and exemplifies many of its major problems. These problems primarily boil down to systems requiring loans of intelligence to even come close to working.

Imagine you are walking along and you encounter something that barks at you. You are able to identify this as a dog, and also that this specific dog is a Yorkshire terrier. How do you do this? Perhaps you take advantage of prior associations; when you have previously encountered dogs, they have barked. But the problem is that in order to access this association, you must first successfully identify that the thing in front of you is a dog; using associations requires an initial step of successful categorisation (the Höffding step). 

That first step is a doozy. Trying to figure out the necessary elements (predicates) of the category 'dog' is a nightmare, and trying to find these in sensations à la Helmholtz is essentially impossible. There are simply too many sets of predicates to be able to categorise everything, and the sets are poorly defined. The only solutions in the literature all begin with a loan of intelligence, in the form of 'you are looking at a dog'. The only actual way to lock this down properly is with information-L, but of course computational theories don't have this.

There is a related problem: recognising a pattern (as a dog, for example) means matching current experience to some stored information. But what form does that stored version take, and can it support pattern recognition?

People have proposed that the stored form might be a template (a version the current experience does or does not fit), and propose various ways of computing the fit. But templates run into problems such as viewpoint variance. If I have only encountered dogs head on, and I encounter one from the side, it won't fit any template I could possibly have assembled. How can I build a template that will accommodate things like varying distance, size, orientation, etc? All solutions require some sort of loan of intelligence. 
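The viewpoint problem is easy to make concrete. Here is a minimal sketch of template matching (the tiny bitmaps and the function name are invented for illustration): fit is just the fraction of matching pixels, and a side-on view of the same object fits a head-on template poorly.

```python
# A toy template matcher: the stored template is a 2D bitmap, and fit is
# the fraction of pixels on which a candidate image agrees with it.
# (All bitmaps here are invented stand-ins for 'views of a dog'.)

def template_fit(template, image):
    """Fraction of pixels on which the image matches the template."""
    matches = sum(t == p
                  for row_t, row_p in zip(template, image)
                  for t, p in zip(row_t, row_p))
    total = len(template) * len(template[0])
    return matches / total

# A crude 'dog seen head-on' template...
head_on = [[0, 1, 1, 0],
           [1, 1, 1, 1],
           [0, 1, 1, 0]]

# ...matches another head-on view well,
another_head_on = [[0, 1, 1, 0],
                   [1, 1, 1, 1],
                   [0, 1, 0, 0]]

# ...but a side-on view of the very same dog fits poorly:
side_on = [[1, 1, 0, 0],
           [1, 0, 0, 1],
           [0, 1, 1, 0]]

print(template_fit(head_on, another_head_on))  # high (11/12)
print(template_fit(head_on, side_on))          # much lower
```

Any patch (normalising for size, storing one template per viewpoint, deciding which normalisation to apply) presupposes the very categorisation the template was supposed to deliver.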

People have also proposed that the stored form might be an exemplar (representing a sufficient number of features to identify something as a dog vs a cat, for example). You can then compute the distance of your current experience from that exemplar in some mental space, and identify the pattern based on what it is closest to. This of course also presupposes that you can start the search close to the right answer, or else you would have to search everywhere, and that is the original problem.
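A toy sketch of the exemplar idea, assuming (invented) feature coordinates and a simple Euclidean 'mental space':

```python
# Exemplar-based recognition: stored exemplars live at coordinates in a
# feature space, and the current experience gets the label of whichever
# exemplar it is closest to. Features and values are invented.
import math

exemplars = {
    "dog": (0.9, 0.8, 0.3),   # (barks, furry, climbs trees)
    "cat": (0.1, 0.9, 0.9),
}

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def classify(experience):
    return min(exemplars, key=lambda label: distance(experience, exemplars[label]))

current = (0.8, 0.7, 0.2)  # something barking and furry
print(classify(current))   # → dog
```

Note that this only works because the search is restricted to two hand-picked exemplars; with a realistic space of candidates, knowing which neighbourhood to search is the original problem all over again.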

Other attempts to solve the problem have looked at how we might use features of the current experience to guide our search. But this runs into a common problem, which Turvey illustrates with examples from speech perception - the same set of features can lead to different perceptions, while the same perception can arise from different sets of features. Even the absence of features can shape perception. Turvey also talks about perceptrons and the pandemonium model, both of which rely on features but neither of which can solve the problem at hand. 
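For the perceptron, here is a minimal sketch of the classic learning rule (the binary features and training data are invented): it learns a weighted threshold over features, which is exactly the kind of feature-based account at issue.

```python
# A single-layer perceptron trained with the classic error-correction rule:
# on each mistake, nudge the weights towards (or away from) the features.
# Features and examples are invented for illustration.

def train(samples, epochs=20, lr=0.1):
    w = [0.0] * len(samples[0][0])
    b = 0.0
    for _ in range(epochs):
        for features, target in samples:
            pred = 1 if sum(wi * fi for wi, fi in zip(w, features)) + b > 0 else 0
            err = target - pred
            w = [wi + lr * err * fi for wi, fi in zip(w, features)]
            b += lr * err
    return w, b

# features: (barks, purrs, fetches); target: 1 = dog, 0 = cat
data = [((1, 0, 1), 1), ((1, 0, 0), 1), ((0, 1, 0), 0), ((0, 1, 1), 0)]
w, b = train(data)

def predict(features):
    return 1 if sum(wi * fi for wi, fi in zip(w, features)) + b > 0 else 0

print(predict((1, 0, 1)))  # a barking fetcher → 1 ('dog')
```

A device like this can only draw a linear boundary through a feature space, and it inherits the problems above: the same features can demand different labels in different contexts, and the features themselves had to be hand-chosen in the first place.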

The fundamental problem with feature-based accounts is that there is no clear link between features of the environment and things like retinal stimulation or neural activity. Turvey talks about this in the context of the question 'why does a straight line look straight?', and the in-ness problem: why should the thing in the brain be taken to look like the thing out there? Straight lines in the environment are curved on the retina; they can be huge, so how do single-neuron line detectors (à la Hubel and Wiesel) bind their detections together; and how is it that the brain can cope with things like prism distortions, given some time to recalibrate?

These examples all bring out the many major problems with the Cartesian approach to cognition, and the failed computational attempts to solve them reveal that this programme is hamstrung in the same ways as all the previous theories we have encountered.
