Monday, 13 October 2025

Lecture 15: Gestaltism III: Experience error, CNS error, Psych-neural Isomorphism, Behavioural Environment (Turvey, 2019, Lectures on Perception)

The two previous lectures reviewed some of the changes happening in fields related to psychology that were showing up in Gestalt psychology: specifically, the notion of fields, the understanding that behaviour could emerge from dynamical processes, and the limits of mechanical approaches to living systems. This chapter finally engages with Gestalt psychology directly, and discusses how it applied these changes.

Lecture 14: Gestaltism II: Fields, Self-Organization, and the Invariance Postulate of Evolution (Turvey, 2019, Lectures on Perception)

In the previous Lecture, Turvey discussed the machine metaphor, in which things are just equal to the sum of their parts. In this Lecture, Turvey introduces the Gestalt notion that wholes are different from the sum of their parts. Specifically, he discusses the notions of fields and self-organisation, and the kinds of physical systems these entail; Gestalt psychology is mostly just a launching pad for the broader discussion.

Monday, 29 September 2025

Lecture 13: Gestaltism I: Atomism, Anatomism, and Mechanical Order (Turvey, 2019, Lectures on Perception)

We are nearing the end of the Foundational Concepts section of the book, and we have arrived at one final historical attempt to explain perception: the Gestalt school. This chapter doesn't actually spend any time on Gestalt, however; instead it lays out the things Gestalt psychology was developed to oppose. Those things are atomism, anatomism, and mechanical order.

Tuesday, 17 June 2025

Lecture 12: The Space Enigmas IV: On Learning Space Perception (Turvey, 2019, Lectures on Perception)

This Lecture is in roughly two parts. The first, brief section walks through a Helmholtzian method for perceiving depth via unconscious inference. This inference process is learned (Helmholtz wanted to be an empiricist) but, as usual, entails loans of intelligence in the form of givens not acquired via experience. This raises a question: what exactly is experience, and what about it serves as the basis for future inference? It turns out that what counts as the relevant parts of experience can be very non-obvious, raising many problems that need more modern, less Cartesian solutions.

Lecture 11: Doctrines of Sensations and Unconscious Inferences (Turvey, 2019, Lectures on Perception)

This Lecture is about Helmholtz, and his theory of sensations being integrated into perceptual experience via unconscious inference. Everything in here should look very familiar to anyone who has ever taken a Sensation and Perception class, because modern theories of indirect perception are literally versions of this account. (I'll note as well that, while there are embodied and ecological interpretations of the free energy principle (e.g. Bruineberg et al., 2018), one common interpretation is that it is an implementation of Helmholtz's unconscious inference; e.g. Friston & Kiebel, 2009; Hohwy, 2013.)

Unconscious inference is a firmly Cartesian programme. It is about taking the physiological effects of things like light and inferring their underlying causes (i.e. what is happening in the world to be perceived). Unsurprisingly, it is going to run into the same main problem, namely unrepayable loans of intelligence.

Thursday, 22 May 2025

Lecture 10: The Space Enigmas III: Local Signs and Geometrical Empiricism (Turvey, 2019, Lectures on Perception)

The previous two lectures have covered aspects of the problem of space perception. We perceive objects, and these inhabit space - they are at some distance from us and from each other, and so on. So what is space, and how do we come to know about it? We've tried using touch (a sense that lives in 3D) to provide the necessary clues (this was Berkeley, and the story of Flatland). This doesn't work because touch isn't a perfectly reliable source of space information; it's a perceptual system like vision, and faces the same kinds of problems. We've also tried to make space a necessary feature - not an object of perception, but a mode of perception (this was Kant, and the power of Euclidean geometry). This doesn't work either, because there are perfectly coherent non-Euclidean geometries that all provide different answers when measuring the same space. As soon as there is more than one geometry, choosing one becomes part of the problem, and you cannot simply assume it.

Given this, how might you go about selecting the right geometrical description to use as the basis for your experience of space? Turvey discusses one major attempt to show how this might happen: Helmholtz's account of how to derive a geometry (a notion of space) from non-spatial local signs.

This Lecture involves quite a bit of mathematical detail, which I am not going to get into here because it's slightly beside the point. The overall goal, however, is to take some sensory raw material that isn't intrinsically about space (because it can't be) and work to turn it into an experience that is about space. This is the start of the move to talking about sensations and perception, the organising principle still at the heart of all non-ecological discussions of how we come to experience the world. More on this in the next lecture.

Friday, 22 November 2024

Generative AI Pushes Outcome Over Process (And This Is Why I Hate It)

I really hate generative AI, and there are many reasons to hate it. Its abilities depend on stolen data; it uses so much electricity that it's messing with climate goals and prompted the restart of Three Mile Island; it's riddled with all the biases you'd expect; and the sales pitch from companies with billions of dollars on the line has smelled of snake oil from day one.

I'm also mad at it for professional reasons. 

First, I am an ecological psychologist, so I have things to say about whether these systems can be intelligent. They can't: they are made only of language, and intelligence is made of much more than that. Also, their basic trick (unreflective extrapolation of the next most likely thing to say) isn't even how language works, let alone intelligence.

But second, for the purposes of this post, I am mostly mad about it as an educator. My students are deluged with AI tools. Grammarly relies on it; Adobe keeps trying to summarise PDFs for them; and ChatGPT promises to help them write better essays. My students are busy and stressed: they're often working jobs to support themselves, and their courses ask a lot of them too. We know they take shortcuts, and AI is the most powerful one they've ever had access to. Universities are busy drafting and redrafting policy documents about fair use of AI, because we have no way to enforce a ban on its use, but even these documents accept the flawed premise at the heart of the promises these technologies make.

The flawed premise is this: AI technology is based on the idea that the important part of creating things is the outcome, not the process. Can't draw? That shouldn't stop you from making a picture. Worried about your writing? Why should that stop you from handing in a coherent essay? The ads for AI all promise that you'll be able to produce things without all the tedious work of actually producing them - isn't that great?

Well no, it's not - it's terrible. It betrays a fundamental misunderstanding of why creating things has value. It's terrible in general, but I am especially offended by this idea in the context of education, and in this post I want to lay this idea out in a little detail.