Thursday, 22 May 2025

Lecture 10: The Space Enigmas III: Local Signs and Geometrical Empiricism (Turvey, 2019, Lectures on Perception)

The previous two lectures have covered aspects of the problem of space perception. We perceive objects, and these inhabit space - they are at some distance from us and from each other, and so on. So what is space, and how do we come to know about it? We've tried using touch (a sense that lives in 3D) to provide the necessary clues (this was Berkeley, and the story of Flatland). This doesn't work because touch isn't a perfectly reliable source of space information; it's a perceptual system like vision, and it faces the same kinds of problems. We've also tried to make space a necessary feature: not an object of perception, but a mode of perception (this was Kant, and the power of Euclidean geometry). This doesn't work either, because there are perfectly coherent non-Euclidean geometries that all provide different answers when measuring the same space (a triangle's angles sum to exactly 180° in Euclidean geometry, to more than 180° on a sphere, and to less than 180° in hyperbolic space). As soon as there is more than one geometry, choosing one becomes part of the problem, and you cannot simply assume it. 

Given this, how might you go about selecting the right geometrical description to use as the basis for your experience of space? Turvey discusses one major attempt to show how this might happen: specifically, Helmholtz's account of how to derive a geometry (a notion of space) from non-spatial local signs.

This lecture involves quite a bit of mathematical detail, which I am not going to get into here because it's slightly beside the point. The overall goal, however, is to take some sensory raw material that isn't intrinsically about space (because it can't be) and work to turn it into an experience that is about space. This is going to be the start of the move to talk about sensations and perception, the organising principle still at the heart of all non-ecological discussions of how we come to experience the world. More on this in the next lecture. 

Friday, 22 November 2024

Generative AI Pushes Outcome Over Process (And This Is Why I Hate It)

I really hate generative AI; there are so many reasons to hate it. Its abilities depend on stolen data; it uses so much electricity that it's messing with climate goals and has prompted the restart of Three Mile Island; it's riddled with all the biases you'd expect; and the sales pitch from companies with billions of dollars on the line has smelled of snake oil from day one. 

I'm also mad at it for professional reasons. 

First, I am an ecological psychologist, so I have things to say about whether these systems can be intelligent. They can't: they are made only of language, and intelligence is made of much more than this. Also their basic trick (unreflective extrapolation of the next most likely thing to say) isn't even how language works, let alone intelligence. 

But second, for the purposes of this post, I am mostly mad about it as an educator. My students are deluged with AI tools. Grammarly relies on it; Adobe keeps trying to summarise PDFs for them; and ChatGPT promises to help them write better essays. My students are busy and stressed: they're often working jobs to support themselves, and their courses ask a lot of them too. We know they take shortcuts, and AI is the most powerful one they've ever had access to. Universities are busy drafting and redrafting policy documents about the fair use of AI, because we have no way to enforce a ban on its use, but even these documents accept the flawed premise at the heart of the promises these technologies make. 

The flawed premise is this: AI technology is based on the idea that the important part of creating things is the outcome, not the process. Can't draw? That shouldn't stop you from making a picture. Worried about your writing? Why should that stop you from handing in a coherent essay? The ads for AI all promise that you'll be able to produce things without all the tedious work of actually producing them - isn't that great? 

Well no, it's not - it's terrible. It betrays a fundamental misunderstanding of why creating things has value. It's terrible in general, but I am especially offended by it in the context of education, and in this post I want to lay the idea out in a little detail. 

Tuesday, 14 November 2023

Do Affordances Select or Control Actions?

I've been working for a while to connect the perception of affordances to motor abundance methods such as the uncontrolled manifold (UCM). I proposed the idea in this book chapter, and I have a paper under review that takes a first swing at making it work; the details of the hypothesis are in those, and I recommend reading them. The paper in particular has my most recent thinking spelled out as clearly as I can manage in the Introduction and Discussion sections. 

As I've been chatting to people about progressing this programme, one thing that keeps coming up is 'why has no-one thought to do this before?'. This link simply seems so obvious to me, and to the people I'm talking to, but no-one has done it. I think I may have figured out why, though, and I thought I'd document the reason because I'm a sucker for these kinds of mysteries, especially when I think I've solved them. 

I think the reason no-one has thought to connect affordances to UCM is that (almost) no ecological psychologist has been studying affordances as a solution to the control problem! Affordances are studied as solutions to the action selection problem only, and this seems to come from Warren. I'll link to some papers and lay out some thoughts for the record here. 

Thursday, 18 May 2023

Effectivities for Reach-to-Grasp Actions

I just reviewed the affordance properties that produce the spatial structure in reach-to-grasp actions, and there's an unquestioned assumption lurking in that analysis. Luckily, Mon-Williams & Bingham (2011) actually did question it, so I wanted to cover that part of the paper here.

The assumption in the analyses I described last time is that the unit of action is the aperture between the finger and thumb, and not the fingers themselves. Bingham refers to this as an opposition vector (Iberall, Bingham, & Arbib, 1986; van Bergen et al., 2007). In some ways this is a weird idea: the action system working to control a space between limbs, and not the limbs themselves! Smeets & Brenner (1999) proposed instead that grasping is about the two digits, each controlled in its own right. Mon-Williams & Bingham tested these hypotheses and found evidence in favour of the opposition vector.

I want to walk through this in a little detail, though, because identifying the relevant elemental variables is of course part of a UCM analysis, and affordance research helps here too. The task analysis that reveals affordance property options also points to effectivity property options (at least it should - these are complementary, after all!). But another part of the UCM approach is that it can, in principle, test hypotheses about elemental and performance variables, so I want to lay this out as well.
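To make that last point concrete, here is a minimal sketch of what such a test could look like. Everything in it is an assumption for illustration: the data are simulated, the digits are reduced to 1D positions along the grip axis, and the only candidate performance variable shown is the aperture. The logic is the standard UCM one: partition trial-to-trial variance in the elemental variables into a component that leaves the performance variable unchanged (V_ucm) and a component that changes it (V_orth).

```python
# A minimal sketch of a UCM variance decomposition, under heavy
# simplifying assumptions: digits reduced to 1D positions along the
# grip axis, simulated data, hypothetical names and numbers throughout.
import numpy as np

rng = np.random.default_rng(1)
n_trials = 50

# Elemental variables: [finger position, thumb position] in mm at one
# normalised time slice, varying from trial to trial. Here the digits
# are simulated to drift together (aperture-preserving) more than they
# drift apart - the pattern the opposition-vector hypothesis predicts.
mean_config = np.array([80.0, 20.0])
common = rng.normal(0, 4, (n_trials, 1)) * np.array([1.0, 1.0])
independent = rng.normal(0, 1, (n_trials, 2))
elements = mean_config + common + independent

# Candidate performance variable: aperture = finger - thumb.
# Its Jacobian with respect to the elements is constant here.
J = np.array([[1.0, -1.0]])

# The UCM is the null space of J: combinations of digit motion that
# leave the aperture unchanged (here, moving both digits together).
_, _, Vt = np.linalg.svd(J)
orth_basis = Vt[:1].T   # direction that changes the aperture
ucm_basis = Vt[1:].T    # direction that does not

# Project each trial's deviation from the mean configuration onto the
# two subspaces and compute per-DOF variances.
dev = elements - elements.mean(axis=0)
v_ucm = np.sum((dev @ ucm_basis) ** 2) / (ucm_basis.shape[1] * n_trials)
v_orth = np.sum((dev @ orth_basis) ** 2) / (orth_basis.shape[1] * n_trials)

print(f"V_ucm = {v_ucm:.1f} mm^2, V_orth = {v_orth:.1f} mm^2")
```

If the aperture is the stabilised performance variable, V_ucm should reliably exceed V_orth; if Smeets & Brenner are right and each digit is controlled in its own right, there is no obvious reason for that anisotropy to appear. The same machinery generalises to more realistic elemental variables (joint angles, 3D digit positions), where the Jacobian is estimated rather than written down.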

Tuesday, 16 May 2023

Affordances for the Spatial Structure of Reach-To-Grasp (Mon-Williams & Bingham, 2011)

I have reviewed the spatial and temporal structure of reach-to-grasp movements, and the task dynamical analysis that has connected affordance properties and reach-to-grasp effectivities. Now it's time to work through some papers exploring this task using this analysis.

The first paper is Mon-Williams & Bingham (2011). The goal of this paper was to investigate which target properties shape the spatial structure of the reach-to-grasp movement, i.e. the behaviour of the hand as it forms a grip aperture and then encloses and contacts the object. Specifically, we want to examine the maximum grip aperture (MGA), which occurs partway through the reach and is larger than the object, and the terminal grip aperture (TGA), which occurs once the hand has stopped moving and encloses the object, but before the fingers make contact with it. The question is: what object properties determine how these apertures are scaled? 
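For concreteness, here's a toy sketch of how these two measures might be pulled out of a recorded trial. The data, sampling rate, and movement-end criterion (wrist speed dropping below a threshold) are all assumptions for illustration, not necessarily the choices made in the actual paper.

```python
# A hypothetical sketch of extracting MGA and TGA from one trial's
# aperture and wrist-speed time series; all profiles are simulated.
import numpy as np

dt = 1 / 200.0                     # assumed 200 Hz sampling rate
t = np.arange(0, 1.0, dt)          # one 1 s trial

# Simulated trial: the aperture opens wider than the object, then
# closes down onto it; wrist speed rises and falls, settling near zero.
object_size = 50.0                                             # mm
aperture = object_size + 30 * np.sin(np.pi * t) ** 2 - 20 * t  # mm (toy profile)
wrist_speed = 800 * np.sin(np.pi * t) ** 2 * np.exp(-4 * t)    # mm/s (toy profile)

# MGA: the largest aperture reached during the reach.
mga_idx = int(np.argmax(aperture))
mga = aperture[mga_idx]

# Movement end: first sample after peak wrist speed where speed drops
# below a threshold (20 mm/s here; the threshold is an assumption).
peak_idx = int(np.argmax(wrist_speed))
end_idx = peak_idx + int(np.argmax(wrist_speed[peak_idx:] < 20.0))

# TGA: the aperture at movement end, before the fingers contact the object.
tga = aperture[end_idx]

print(f"MGA = {mga:.1f} mm at t = {t[mga_idx]:.2f} s")
print(f"TGA = {tga:.1f} mm at t = {t[end_idx]:.2f} s")
```

With real motion-capture data the aperture would come from markers on the finger and thumb tips, and both series would be filtered before any extrema are taken; the logic stays the same.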

Thursday, 11 May 2023

The Task Dynamics of Reaching-to-Grasp

In the last post, I reviewed the basic form of the reach-to-grasp task and the basic spatial and temporal structure of the resulting reach-to-grasp action. I'm shortly going to review three papers by Bingham about where all this structure comes from, but first I wanted to sketch out the task analysis those papers will rely on. 

The question at hand is, in the context of reaching-to-grasp an object, what are the relevant object affordances? What follows is derived from Mon-Williams & Bingham (2011), which I will review fully in the next post. I've tried to fully flesh it out, though, to be as complete as possible. The goal is to lay out the likely relevant task dynamics; this leads to specific predictions about which manipulations should affect which parts of the reach-to-grasp action.

Tuesday, 9 May 2023

The Spatial and Temporal Structure of Reach-to-Grasp Movements

This post is part of my current series in which I am developing a research programme to formally connect affordances and motor abundance analyses (see the intro post here). The first thing I need is a task that has a formal affordance analysis; my chosen task is reaching-to-grasp (sometimes called prehension). I'm pivoting to this task from throwing for a few reasons. First, I need a simpler task: the logistics of data collection and analysis for throwing are large and I just don't have the resources right now, whereas I do have the kit for reach-to-grasp studies. Second, I want to expand my own skill set into a new domain, and reach-to-grasp is another venerable topic. Finally, it means I get to spend time really grappling with other work by Geoff Bingham, who was my PhD supervisor and who remains the person whose work I admire the most. 

This post is about the basic form of the reach-to-grasp movements studied in the lab, what some of the key measures are, and the fact that these measures are sensitive to task demands (suggesting they emerge from an active perception-action loop). In the next post I will work through the initial task analysis that leads us to candidate affordance properties.