Tuesday, 17 June 2025

Lecture 12: The Space Enigmas IV: On Learning Space Perception (Turvey, 2019, Lectures on Perception)

This Lecture is in roughly two parts. The first, brief section walks through a Helmholtzian method for perceiving depth via unconscious inference. This inference process is learned (Helmholtz wanted to be an empiricist) but, as usual, it entails loans of intelligence in the form of some givens not acquired via experience. This then raises a question: what exactly is experience, and what about it serves as the basis for future inference? It turns out that what counts as the relevant parts of experience can be very non-obvious, raising many problems that need more modern, less Cartesian solutions.

Lecture 11: Doctrines of Sensations and Unconscious Inferences (Turvey, 2019, Lectures on Perception)

This Lecture is about Helmholtz, and his theory of sensations being integrated into perceptual experience via unconscious inference. Everything in here should look very familiar to anyone who has ever taken a Sensation and Perception class, because modern theories of indirect perception are literally versions of this account. (I'll note as well that, while there are embodied and ecological interpretations of the free energy principle (e.g. Bruineberg et al., 2018), one common interpretation is that it is an implementation of Helmholtz's unconscious inference; e.g. Friston & Kiebel, 2009; Hohwy, 2013.)

Unconscious inference is a firmly Cartesian programme. It is about taking physiological measurements of stimuli such as light and inferring their underlying causes (i.e. what is happening in the world to be perceived). Unsurprisingly, it runs into the same central problem, namely unrepayable loans of intelligence.
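To make the shape of the proposal concrete, here is a minimal sketch of unconscious inference treated as Bayesian inversion (my toy illustration, not anything from the Lecture; the hypotheses and numbers are all invented). Notice that the prior and the likelihood have to be handed to the perceiver from somewhere before any inferring can happen - these are exactly the kind of loans of intelligence at issue:

```python
import numpy as np

# Hypotheses about the distal cause of a given sensation.
causes = ["brightly lit surface", "dimly lit surface"]

# The prior and the likelihood both have to be supplied in advance -
# neither comes from the current sensation itself.
prior = np.array([0.5, 0.5])        # assumed plausibility of each cause
likelihood = np.array([0.9, 0.2])   # P(high retinal intensity | cause)

# Bayes' rule: invert the sensation to get a posterior over causes.
posterior = likelihood * prior
posterior /= posterior.sum()

for cause, p in zip(causes, posterior):
    print(f"P({cause} | sensation) = {p:.2f}")
```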

Thursday, 22 May 2025

Lecture 10: The Space Enigmas III: Local Signs and Geometrical Empiricism (Turvey, 2019, Lectures on Perception)

The previous two lectures have covered aspects of the problem of space perception. We perceive objects, and these inhabit space - they are at some distance from us and each other, etc. So what is space, and how do we come to know about it? We've tried using touch (a sense that lives in 3D) to provide the necessary clues (this was Berkeley, and the story of Flatland). This doesn't work because touch isn't a perfectly reliable source of space information; it's a perceptual system like vision and faces the same kinds of problems. We've also tried to just make space a necessary feature: not an object of perception, but a mode of perception (this was Kant, and the power of Euclidean geometry). This doesn't work, because there are perfectly coherent non-Euclidean geometries that all provide different answers when measuring the same space. As soon as there is more than one geometry, choosing one becomes part of the problem and you cannot simply assume it.
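To make that last point concrete, here is a toy numerical example (mine, not Turvey's; the coordinates are arbitrary): take the same two points on a sphere and measure the distance between them first with the Euclidean metric, straight through space, and then with the spherical metric, along the surface. Same points, different geometries, different answers:

```python
import math

# Two points on a unit sphere, as (latitude, longitude) in radians.
lat1, lon1 = 0.0, 0.0
lat2, lon2 = 0.0, math.pi / 2

# Convert to 3D Cartesian coordinates.
p1 = (math.cos(lat1) * math.cos(lon1), math.cos(lat1) * math.sin(lon1), math.sin(lat1))
p2 = (math.cos(lat2) * math.cos(lon2), math.cos(lat2) * math.sin(lon2), math.sin(lat2))

# Euclidean distance: the straight chord through the sphere.
chord = math.dist(p1, p2)

# Spherical distance: the great-circle arc along the surface.
arc = math.acos(sum(a * b for a, b in zip(p1, p2)))

print(f"Euclidean distance:    {chord:.3f}")  # ~1.414
print(f"Great-circle distance: {arc:.3f}")    # ~1.571
```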

Given this, how might you go about selecting the right geometrical description to use as the basis for your experience of space? Turvey discusses one major attempt to show how this might happen, specifically Helmholtz's account of how to derive a geometry (a notion of space) from non-spatial local signs.

This Lecture involves quite a bit of mathematical detail, which I am not going to get into here because it's slightly beside the point. The overall goal, however, is to take some sensory raw material that isn't intrinsically about space (because it can't be), and work to turn it into an experience that is about space. This is going to be the start of the move to talk about sensations and perception, the organising principle still at the heart of all non-ecological discussions of how we come to experience the world. More on this in the next lecture.

Friday, 22 November 2024

Generative AI Pushes Outcome Over Process (And This Is Why I Hate It)

I really hate generative AI, because there are many reasons to hate it. Its abilities depend on stolen data; it uses so much electricity that it's messing with climate goals and prompted the restart of Three Mile Island; it's riddled with all the biases you'd expect; and the sales pitch from companies with billions of dollars on the line has smelled of snake oil from day one.

I'm also mad at it for professional reasons. 

First, I am an ecological psychologist, so I have things to say about whether these systems can be intelligent. They can't: they are made only of language, and intelligence is made of much more than this. Also their basic trick (unreflective extrapolation of the next most likely thing to say) isn't even how language works, let alone intelligence. 

But second, for the purposes of this post, I am mostly mad about it as an educator. My students are deluged with AI tools. Grammarly relies on it; Adobe keeps trying to summarise PDFs for them; and ChatGPT promises to help them write better essays. My students are busy and stressed: they're often working jobs to support themselves, and their courses ask a lot of them too. We know they take shortcuts, and AI is the most powerful one they've ever had access to. Universities are busy drafting and redrafting policy documents about fair use of AI, because we have no way to enforce a ban on its use, but even these documents accept the flawed premise at the heart of the promises these technologies make.

The flawed premise is this: AI technology is based on the idea that the important part of creating things is the outcome, not the process. Can't draw? That shouldn't stop you from making a picture. Worried about your writing? Why should that stop you from handing in a coherent essay? The ads for AI all promise that you'll be able to produce things without all the tedious work of actually producing them - isn't that great?

Well no, it's not - it's terrible. It betrays a fundamental misunderstanding of why creating things has value. It's terrible in general, but I am especially offended by it in the context of education, and in this post I want to lay the idea out in a little detail.

Tuesday, 14 November 2023

Do Affordances Select or Control Actions?

I've been working for a while to connect the perception of affordances to motor abundance methods such as the uncontrolled manifold (UCM). I proposed the idea in this book chapter, and I now have a paper under review that is my first swing at making this work; the details of the hypothesis are in those, and I recommend you read them. The paper in particular has my most recent thinking spelled out as clearly as I can in the Introduction and Discussion sections.

As I've been chatting to people about progressing this programme, one thing that keeps coming up is 'why has no-one thought to do this before?'. This link simply seems so obvious to me, and to the people I'm talking to, but no-one has done it. I think I may have figured out why, though, and I thought I'd document the reason because I'm a sucker for these kinds of mysteries, especially when I think I've solved them. 

I think the reason no-one has thought to connect affordances to UCM is that (almost) no ecological psychologist has been studying affordances as a solution to the control problem! Affordances are studied as solutions to the action selection problem only, and this seems to come from Warren. I'll link to some papers and lay out some thoughts for the record here. 
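For readers who haven't met the method: a UCM analysis splits the trial-to-trial variance of a set of elemental variables (e.g. joint angles) into a component lying in the null space of a performance variable (variance that leaves performance alone) and an orthogonal component (variance that changes it). Here's a minimal sketch with invented data and numbers - my illustration, not code from any of the papers above:

```python
import numpy as np

rng = np.random.default_rng(0)

# Elemental variables: two joint angles (rad) over 100 repetitions of a
# task whose performance variable is their sum (a toy "pointing angle").
angles = rng.normal([0.5, 0.7], 0.10, size=(100, 2))
angles[:, 1] = 1.2 - angles[:, 0] + rng.normal(0, 0.02, 100)  # compensating covariation

J = np.array([[1.0, 1.0]])              # Jacobian of the performance variable
Vt = np.linalg.svd(J)[2]                # row 0 spans J; row 1 spans its null space

deviations = angles - angles.mean(axis=0)
v_ucm = np.var(deviations @ Vt[1:].T)   # variance that leaves performance alone
v_ort = np.var(deviations @ Vt[:1].T)   # variance that changes performance

print(f"V_UCM = {v_ucm:.4f}, V_ORT = {v_ort:.4f}")
# V_UCM >> V_ORT: the joints covary so as to stabilise the performance
# variable - abundance being used for control, not just selection.
```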

Thursday, 18 May 2023

Effectivities for Reach-to-Grasp Actions

I just reviewed the affordance properties that produce the spatial structure in reach-to-grasp actions, and there's an unquestioned assumption lurking in that analysis. Luckily, Mon-Williams & Bingham (2011) actually did question it, so I wanted to cover that part of the paper here.

The assumption in the analyses I described last time is that the unit of action is the aperture between the finger and thumb, and not the fingers themselves. Bingham refers to this as an opposition vector (Iberall, Bingham, & Arbib, 1986; van Bergen et al., 2007). In some ways, this is a weird idea: the action system working to control a space between limbs, and not the limbs themselves! Smeets & Brenner (1999) proposed that grasping is actually about the two limbs. Mon-Williams & Bingham tested these hypotheses and found evidence in favour of the opposition vector.

I want to walk through this in a little detail, though, as of course identifying the relevant elemental variables is part of a UCM analysis, and affordance research helps here too. The task analysis that reveals affordance property options also points to effectivity property options (at least it should - these are complementary after all!). But another part of the UCM approach is that it can, in principle, test hypotheses about elemental and performance variables, so I want to lay this out as well; see the sketch below.
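As a sketch of what that hypothesis testing could look like (my toy illustration with invented digit data, not an analysis from the paper): take finger and thumb positions along the grasp axis as the elemental variables, and compute the UCM variance ratio under each candidate performance variable - the aperture (the opposition vector hypothesis) versus a single digit (the independent-digits hypothesis). The variability should be organised around whichever variable the system is actually controlling:

```python
import numpy as np

def ucm_ratio(elements, J):
    """Null-space vs orthogonal variance for a 1-D performance variable.
    With two elements, both subspaces are one-dimensional."""
    dev = elements - elements.mean(axis=0)
    Vt = np.linalg.svd(J)[2]            # row 0 spans J; row 1 its null space
    return np.var(dev @ Vt[1:].T) / np.var(dev @ Vt[:1].T)

rng = np.random.default_rng(1)

# Finger and thumb positions (m) along the grasp axis over 200 trials;
# the thumb covaries with the finger so the aperture stays stable.
finger = rng.normal(0.10, 0.005, 200)
thumb = finger - 0.08 + rng.normal(0.0, 0.001, 200)
elements = np.column_stack([finger, thumb])

# Hypothesis 1 (opposition vector): performance variable is the aperture.
ratio_aperture = ucm_ratio(elements, np.array([[1.0, -1.0]]))
# Hypothesis 2 (independent digits): performance variable is one digit alone.
ratio_finger = ucm_ratio(elements, np.array([[1.0, 0.0]]))

print(f"aperture ratio: {ratio_aperture:.1f}, finger ratio: {ratio_finger:.1f}")
# A ratio well above 1 only for the aperture favours the opposition vector:
# the digits covary to stabilise the space between them.
```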

Tuesday, 16 May 2023

Affordances for the Spatial Structure of Reach-To-Grasp (Mon-Williams & Bingham, 2011)

I have reviewed the spatial and temporal structure of reach-to-grasp movements, and the task dynamical analysis that has connected affordance properties and reach-to-grasp effectivities. Now it's time to work through some papers exploring this task using this analysis.

The first paper is Mon-Williams & Bingham (2011). The goal of this paper was to investigate what target properties shape the spatial structure of the reach-to-grasp movement. This means the behaviour of the hand as it forms a grip aperture and then encloses and contacts the object. Specifically, we want to examine the maximum grip aperture (MGA), which occurs partway through the reach and is larger than the object, and the terminal grip aperture (TGA), which occurs when the hand has stopped moving and encloses the object, but before the fingers are in contact with it. The question is, what object properties determine how these apertures are scaled?
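For concreteness, here's roughly how these two measures get pulled out of a recorded reach (a sketch of my own, with synthetic trajectories standing in for motion-capture data and an illustrative stopping threshold): grip aperture is the thumb-to-finger distance over time, MGA is its maximum during the reach, and TGA is its value once the hand has stopped moving:

```python
import numpy as np

t = np.linspace(0, 1, 200)                       # normalised movement time
# Synthetic stand-ins for motion-capture data:
transport = 0.3 * (10*t**3 - 15*t**4 + 6*t**5)   # minimum-jerk-like reach (m)
aperture = 0.04 + 0.06 * np.sin(np.pi * t)**2 * (1 - t)  # open then close (m)

# Movement end: transport speed falls below 5% of its peak.
speed = np.gradient(transport, t)
peak = speed.argmax()
end = peak + np.argmax(speed[peak:] < 0.05 * speed[peak])

mga_idx = aperture[:end].argmax()
print(f"MGA = {aperture[mga_idx]*1000:.0f} mm at t = {t[mga_idx]:.2f}")
print(f"TGA = {aperture[end]*1000:.0f} mm at movement end (t = {t[end]:.2f})")
```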