Wednesday, 27 April 2016

Information use is shaped by bodily dynamics

I've just discovered a treasure trove of 30 talks recorded at the 2015 International Conference on Perception and Action (the main US ecological psychology conference). I just watched this one by Brett Fajen on some work he's done on how far ahead you have to look in order to control walking over irregular terrain. The answer is 'just far enough ahead so you can parameterise the passive dynamics of the walking system and then leave it to execute the step without additional control requirements'. It's a cool talk with some fun data, and it's been tied to some cool simulations of the relevant dynamics. (Edit: Brett emailed and asked that I give lots of credit to his student John Matthis, now a post-doc at the University of Texas, for the coolness of this project!)



This is a nice empirical demonstration of the kind of hard core embodied cognition that the ecological approach involves. Embodied cognition in all its forms is roughly the hypothesis that the form of our bodies and our engagement with the world shape cognition. This means that if you want to understand cognition, you have to understand what kind of contribution the body is making, so that you know what's left over for things like representations to do. Fajen's study gets serious about quantifying what the body contributes to performance of this task and uses that to learn a lot about what perception has left to do. The net result is that human locomotion becomes extremely efficient - control pops in and out as required, and the rest comes effectively for free. 

The strong 'replacement', 'radical' argument is that embodiment changes the game so much that whatever is left over to do, if anything, doesn't need things like representations. This talk isn't directly about these underlying issues. But it is a nice data set about how our perceptual engagement with the world (specifically, where and when we look around us as we locomote through a cluttered environment) is shaped and tuned to provide information in 'just-in-time' fashion, so as to control a particular dynamical device with maximum efficiency. There's no planning, modelling, rehearsing, or predicting - there's just carefully timed perception-action loops shaped by the dynamics of the task at hand. This is, in essence, what we think is going on all the time for basically everything.

This talk won't convince anyone to go radical who isn't already so inclined; after all, it's still "merely" perception and action, not the juicy stuff like language. That's fine. But it's a nice example of all the pieces of this kind of research programme, plus I'm getting increasingly interested in Brett's work more generally anyway, so I thought I'd link to it here. 

References
Fajen, B. R. (2013). Guiding locomotion in complex dynamic environments. Frontiers in Behavioral Neuroscience, 7:85. 

Matthis, J. S., Barton, S. B., & Fajen, B. R. (2015). The biomechanics of walking shape the use of visual information during locomotion over complex terrain. Journal of Vision, 15(3), 10.


Matthis, J. S., & Fajen, B. R. (2014). Visual control of foot placement when walking over complex terrain. Journal of Experimental Psychology: Human Perception and Performance, 40(1), 106-115.

Tuesday, 19 April 2016

Brains learn to perceive, not predict, the world

The current trendy way to think about brains is as predictive machines. Bayesian methods, free energy, generative models, and all the other things in Andy Clark's new book are where it's at. Brains have to predict because, in Clark's terms, we are 'surfing uncertainty' - one step from disaster as we control our behaviour using sensory information we had to spend time processing and that is therefore a few hundred milliseconds out of date. Our brains build models of the world that constantly generate active predictions about what sensation will tell us next, and our behaviour holds together to the extent that the models do.

The ecological approach provides a different job description for the brain (Charles, Golonka & Wilson, 2014). We are embedded in environments rich in information that specifies a wide variety of behaviourally relevant properties. Some of that information is prospective, that is, it is information available now that is about something in the future. Two examples are the information variables you can use to catch a fly ball; both are available in the present moment and, when used, allow you to achieve a behaviour in the future (specifically, to be in the right place at the right time to catch that ball). Another example is tau and the various other variables for time-to-collision.
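To make the tau example concrete: tau (Lee's variable) is the current optical angle of an approaching object divided by that angle's rate of expansion, and this ratio approximates time-to-contact without the observer needing to know the object's distance or speed. A minimal sketch of the arithmetic (not from the post; the scenario numbers and function names are my own illustration):

```python
import math

def optical_angle(size, distance):
    """Visual angle (radians) subtended by an object of a given size at a distance."""
    return 2 * math.atan(size / (2 * distance))

def tau(theta, theta_dot):
    """Lee's tau: optical angle divided by its rate of expansion.
    A currently available quantity that approximates time-to-contact."""
    return theta / theta_dot

# Hypothetical scenario: an object 0.5 m wide, 10 m away, approaching at 5 m/s,
# so the true time-to-contact is 2.0 s. Estimate the expansion rate numerically.
size, distance, speed, dt = 0.5, 10.0, 5.0, 0.01
theta_now = optical_angle(size, distance)
theta_next = optical_angle(size, distance - speed * dt)
theta_dot = (theta_next - theta_now) / dt

time_to_contact = tau(theta_now, theta_dot)
print(time_to_contact)  # close to the true 2.0 s
```

The point of the sketch is that `tau` uses only quantities available at the eye right now (an angle and its rate of change), yet it carries information about a future event - which is what makes it prospective information in the ecological sense.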

This post reviews a paper (van der Meer, Svantesson & van der Weel, 2012) that measured visual evoked potentials in infants longitudinally at two ages, using stimuli that 'loomed' (i.e. looked like they were going to collide with the infants). The data show that the infant brains were not learning to predict the world. Instead, neural activity became more tightly coupled to information about the time-to-collision. We learn to perceive, not predict, the world.

Thursday, 14 April 2016

Peer Review: Solemn Duty or Merely Extra Work?

The other day on Twitter I saw Rolf Zwaan tweeting about Collabra's policy of rewarding reviewers with credit that can be paid into organisational article processing charge (APC) budgets, paid into Collabra's fee waiver account, or paid out as cash to the reviewer. This idea, of paying reviewers for reviewing, comes up now and again on Twitter and it always sounds wrong to me. My gut response is that we should not be paying individuals money to review articles.

So I did the normal thing and posted a Twitter poll, with the following results:
Turns out I am in the minority! The various discussions I've had around this tweet have been kind of interesting too. 

Saturday, 9 April 2016

The Art of the New Collaboration

I love working with other people. It keeps me connected to a wide range of topics and drives me to push my work in ways I would never otherwise think to do. I've had many wonderful conversations with people over the years in which we've thrown around ideas for collaborative projects. Often, however, these simply never get off the ground, and I've struggled for years to figure out a way to turn good ideas into genuine collaborations. 

I have two new collaborations on the go right now and these are finally working, so I thought I'd write down what I think I finally did right that got these moving. I'd love to hear your thoughts and ideas about how to start new collaborations; I suspect there are many scientists out there, young and old, who would be very interested in what we think works.

My main thought is this. A new collaboration is, by definition, a new project that brings together the skills and resources of at least two groups. These two groups will know their own stuff, but not necessarily the other's, and that means there's work to be done bringing people up to speed. In addition, each group will have existing interests and projects, and that means finding a way to implement some new, probably unfunded work using what you have to hand, without interfering with the existing work. 

My current solution that seems to be working: choose ONE simple, easy-to-implement project and work it through to completion.