Friday, 16 December 2016
Affordances are Not Relations, Part 1: Chemero (2009)
Affordances are on my mind right now as I develop the throwing research programme, and a major commitment of that work is that affordances are (dispositional) properties of the environment picked out by organisms in the context of tasks. This commitment has become important enough that it's time to develop specific arguments against the various 'affordances are relations' papers that are out there. I am working towards a paper summarising my objections to the relations account that also strongly advocates for the properties account, on the grounds that it enables a lot more science. This will be an occasional series of posts as I read and draft my arguments; as always, feedback welcome.
In this first post, I want to draft a response to 'Affordances 2.0', from Chemero's (2009) book Radical Embodied Cognitive Science. I previously blogged this chapter in two parts here and here.
Labels:
affordances,
Chemero (2009),
dispositions,
dynamic touch,
relations
Saturday, 10 December 2016
The Affordances of Natural vs Designed Environments (A Purple Peril)
I had a conversation with Eric Brymer (@Ericbrymer) a few weeks ago. Eric is a Reader at Leeds Beckett and is an ecologically minded psychologist interested in the effects of the natural world on mental health. We chatted affordances for a while, and despite it being a very interesting chat, I really wasn’t sure if I had anything to say about the differences between the natural and designed worlds.
But I have not been able to stop thinking about this topic, and now I think there’s something fairly cool here.
Saturday, 3 December 2016
Is the Ecological Approach Radical Enough?
Radical enactivists do not just want to get representations out of their explanations for our mental life. They also want to get rid of the notion of content. Hutto & Myin (2013) is the strong version of this claim.
Mental states or processes have content if there are specified conditions of satisfaction and if they can be evaluated for things like truth (i.e. is the thing conveying content doing so accurately or not?). Part of the concern is that ecological psychology is committed to content and thus can’t play with the other radical theories. The evidence is that we talk about things like ‘information about affordances’; that ‘about’ implies content.
van Dijk, Withagen & Bongers (2015) took a swing at defending a content-less ecological psychology. I admire the attempt to get the radical camps on the same page, but at the end of the day I think a) the defence is grounded on the wrong notion of affordances (as relations, instead of dispositions) which means b) I don’t think it works but that c) I don’t think I care. I am as yet unfazed by critiques of content, although I’m happy to hear more on this; frankly Hutto & Myin’s book is a real struggle to read and any clarity people can add, I’ll take.
Labels:
Bongers,
content,
enactivism,
Hutto,
information,
Myin,
specification,
van Dijk,
Withagen
Friday, 2 December 2016
Scarantino (2003) “Affordances Explained”
Turvey, Shaw, Reed and Mace (1981) laid out an ontology of affordances; a formal account of the kind of things they are. They described them as dispositions, properties of the world constituted by sets of anchoring properties that offered an action to an organism whose dispositions could complement the affordance. Making affordances dispositions makes them real, makes them pre-date the interaction with the organism, and accounts for their odd ‘not doing anything until interacted with’ kind of existence. I am firmly Team Affordances are Dispositions and I have yet to meet an alternative account that supports a science of affordances or even allows them to be perceived.
The literature on dispositions was somewhat limited in 1981, but in 1998 Stephen Mumford published the definitive work on what they are and how they work. I always hoped someone with the necessary philosophy chops would use this work to strengthen the foundations of affordances (I even almost talked a philosopher into doing it!) but it turns out I’m covered. Andrea Scarantino (2003) published ‘Affordances Explained’ and did much of the necessary work, and there are some very useful things in the analysis. This post is me working through this material, translating from the technical philosophy into words I can understand better.
Labels:
affordances,
dispositions,
ontology,
Scarantino
Thursday, 17 November 2016
Free Energy: How the F*ck Does That Work, Ecologically?
Karl Friston has spent a lot of time recently developing the free energy principle (FEP) framework as a way to explain life, behaviour and cognition (you know, biology), and it has become the cool kid on the block in record time.
Crudely, the basic idea of the FEP is that living organisms need to operate within a range for a given process, or else they will be malfunctioning to some extent and might suffer injury or death. Being within the relevant range across all your processes means you are alive and doing well, and so for an organism that has made it this far in evolution those states must be highly probable. Being outside those ranges is therefore less probable, and so if you find yourself outside a range you will be surprised. Your job as a self-sustaining organism can therefore be described as 'work to minimise surprise'.
There is a problem with this formalisation, though. The information-theoretic term that formalises 'surprise' is not a quantity any organism can access, so you can't work to control it directly. Luckily, there is another formal quantity, free energy, that is related to surprise and is always at least as large. Free energy is therefore an upper bound on surprise, and minimising that upper bound can reduce surprise as well.
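To make the bound concrete, here is a toy sketch (my own construction, not Friston's actual formalism): a two-state discrete example with made-up numbers, showing that variational free energy always sits at or above surprise, and touches it exactly when the recognition density matches the true posterior.

```python
import numpy as np

# Toy world: two hidden states, two possible observations.
p_s = np.array([0.7, 0.3])            # prior over hidden states
p_o_given_s = np.array([[0.9, 0.1],   # p(o | s): rows are states
                        [0.2, 0.8]])

o = 0                                  # the observation we received
p_o = np.sum(p_s * p_o_given_s[:, o])  # marginal likelihood p(o)
surprise = -np.log(p_o)                # the quantity no organism can access directly

def free_energy(q):
    """Variational free energy for a recognition density q over hidden states."""
    joint = p_s * p_o_given_s[:, o]    # p(s, o)
    return np.sum(q * (np.log(q) - np.log(joint)))

q_guess = np.array([0.5, 0.5])                 # some non-optimal belief
posterior = p_s * p_o_given_s[:, o] / p_o      # the exact posterior p(s | o)

# F is always >= surprise, with equality when q equals the posterior.
print(free_energy(q_guess) >= surprise)                 # True
print(np.isclose(free_energy(posterior), surprise))     # True
```

Minimising free energy over q therefore squeezes the bound down onto surprise itself, which is the trick that makes the inaccessible quantity controllable.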
All this is currently implemented in an inferential, Bayesian framework that aligns, at least on the surface, with modern representational cognitive science. Andy Clark thinks this is the future, and Jakob Hohwy has worked hard to nail this connection down so it won't move. If this is all right, and if the FEP is successful, perhaps non-representational, non-inferential accounts like ours are going to lose.
A recent paper (Bruineberg, Kiverstein & Rietveld, 2016) tries to wedge the FEP and Bayesian psychology apart to allow room for an ecological/enactivist take on the FEP. To be honest, I found the paper a little underwhelming, but it did get me thinking about things, and two questions have emerged.
Before we worry about an ecological account of the FEP, we need to know 1) whether such a thing makes any sense and 2) whether it adds anything new to the proceedings. All comments welcome - these are genuine questions and if there are answers we would love to know.
Tuesday, 8 November 2016
The Field is Full, Just Not of Affordances - A Reply to Rietveld & Kiverstein
I recently posted about relational accounts of affordances and how one way to summarise my objections to them is that they cannot support mechanistic models of cognition. I came to this after reading Rietveld & Kiverstein's 'Landscape of Affordances' paper and chatting to them both at EWEP14. Eric and Julian have been kind enough to send through some detailed comments (beginning here and split over three comments due to character limits). This post is me replying to these comments as a way to get them somewhere a little more visible. I haven't gone point by point, I've just pulled out the key stuff I wanted to address; read their comments for the whole thing. I appreciate their willingness to get into this with me; their account is becoming wildly influential and their papers and feedback are helping me immensely as I work to articulate my concerns.
To preview: my fundamental objection remains the same and as yet unanswered - while it is indeed possible to identify relations between 'forms of life' and 'socio-cultural environments' there is, as yet, no evidence that these relations create perceptual information. If they do not create information, they are not ecologically perceived, and they cannot figure in the online coordination and control of behaviour. And if they can't do that, then they sure as hell aren't affordances.
So my challenge to Rietveld & Kiverstein (R&K) is this - work up an example of an affordance that fits their definition and not mine and that creates information. Then we can test to see whether people act as if they perceive that affordance and can try perturbing the information to confirm how they are perceiving it. Then, and only then, do we have a ball game.
Friday, 28 October 2016
Nonlinear Covariation Analysis (Müller & Sternad, 2003)
I have been working my way through some analyses that fall under the idea of the motor abundance hypothesis (Latash, 2012) - the idea that motor control does not work to produce a single, optimal movement trajectory, but rather works to produce a particular task goal, or outcome. Motor control preserves function, and not structure; it exhibits degeneracy. So far I have looked at uncontrolled manifold analysis here and here, and stochastic optimal control theory here.
This post will review nonlinear covariation analysis developed by Müller & Sternad (2003). This purports to address several issues with UCM.
Thursday, 13 October 2016
Optimal Feedback Control and Its Relation to Uncontrolled Manifold Analysis
Motor control theories must propose solutions to the degrees of freedom problem, which is the fact that the movement system has more ways to move than are ever required to perform a given task. This creates a problem for action selection (which of the many ways to do something do you choose?) and a problem for action control (how do you create stable, repeatable movements using such a high dimensional system?).
Different theories have different hypotheses about what the system explicitly controls or works to achieve, and what is left to emerge (i.e. happen reliably without explicitly being specified in the control architecture). These hypotheses are typically about controlling trajectory features such as jerk. Are you working to make movements smooth, or does smoothness pop out as a side effect of controlling something else? This approach solves the degrees of freedom control problem by simply requiring the system to implement a specific trajectory that satisfies some constraint on the feature you are controlling (e.g. by minimising jerk; Flash & Hogan, 1985). These models internally replace the solutions afforded by the environment with one desired trajectory.
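For reference, the minimum-jerk trajectory between two rest positions has a well-known closed form, which is what makes it such an appealing candidate for a planned trajectory. This sketch (the function name and reach parameters are my own, illustrative choices) computes one such desired trajectory:

```python
import numpy as np

def minimum_jerk(x0, xf, T, t):
    """Flash & Hogan (1985) minimum-jerk position profile between rest states."""
    tau = t / T
    return x0 + (xf - x0) * (10 * tau**3 - 15 * tau**4 + 6 * tau**5)

t = np.linspace(0.0, 1.0, 101)
x = minimum_jerk(0.0, 0.3, 1.0, t)   # e.g. a 30 cm reach over 1 s

# The profile starts and ends exactly at the targets, and its velocity
# is a symmetric bell curve peaking at the movement midpoint.
v = np.gradient(x, t)
print(x[0], x[-1])   # 0.0 0.3
```

The controller's job, on this kind of account, is then to make the limb track this one stored curve, regardless of the many other trajectories that would also have reached the target.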
Todorov and Jordan (2002a, 2002b; thanks to Andrew Pruszynski for the tip!) propose that the system is not optimising performance, but the control architecture. This is kind of a cool way to frame the problem and it leads them to an analysis that is very similar in spirit to uncontrolled manifold analysis (UCM) and to the framework of motor abundance. In these papers, they apply the mathematics of stochastic optimal feedback control theory and propose that working to produce optimal control strategies is a general principle of motor control from which many common phenomena naturally emerge. They contrast this account (both theoretically and in simulations) to the more typical 'trajectory planning' models.
The reason this ends up in UCM territory is that it turns out, whenever it's possible, the optimal control strategy for solving motor coordination problems is a feedback control system in which control is deployed only as required. Specifically, you only work to control task-relevant variability, noise which is dragging you away from performing the task successfully. The net result is the UCM patterns; task-relevant variability (V-ORT) is clamped down by a feedback control process and task-irrelevant variability (V-UCM) is left alone. The solution to the degrees of freedom control problem is to simply deploy control strategically with respect to the task; no degrees of freedom must be 'frozen out' and the variability can be recruited at any point in the process if it suddenly becomes useful - you can be flexible.
What follows is me working through this paper and trying to figure out how exactly this relates to the conceptually similar UCM. If anyone knows the maths of these methods and can help with this, I would appreciate it!
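Here is a toy illustration of the minimal intervention idea as I understand it (a made-up two-effector example, not Todorov & Jordan's actual model): noise hits both effectors on every step, but feedback only corrects the component of error that matters to the task (the sum), leaving the task-irrelevant difference free to drift.

```python
import numpy as np

rng = np.random.default_rng(0)
target = 10.0                 # task: x1 + x2 must equal the target
x = np.array([6.0, 4.0])      # current effector states

for _ in range(200):
    x += rng.normal(0, 0.1, size=2)        # noise perturbs both effectors
    error = x.sum() - target               # only the task-relevant error...
    x -= 0.5 * error * np.array([0.5, 0.5])  # ...gets corrected, shared equally

# The sum (task-relevant) stays pinned near the target; the difference
# (task-irrelevant) is never corrected and random-walks away.
print(abs(x.sum() - target) < 0.5)   # True
```

This is exactly the V-ORT/V-UCM signature: variability off the manifold is clamped by feedback, variability along it is left alone.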
Tuesday, 11 October 2016
What Can You Do With Uncontrolled Manifold Analysis?
There is generally more than one way to perform a task (the ‘bliss of motor abundance’) and so it’s possible for a movement to incur a little noise that doesn’t actually affect performance that much.
Uncontrolled manifold analysis (UCM) is a technique for analysing a high-dimensional movement data set with respect to the outcome or outcomes that count as successful behaviour in a task. It measures the variability in the data with respect to the outcome and decomposes it into variability that, if unchecked, would lead to an error and variability that still allows a successful movement.
In the analysis, variability that doesn’t stop successful behaviour lives on a manifold. This is the subspace of the values of the performance variable(s) that lead to success. When variability in one movement variable (e.g. a joint angle, or a force output) is offset by a compensation in one or more other variables that keeps you in that subspace, these variables are in a synergy and this means the variability does not have to be actively controlled. This subspace therefore becomes the uncontrolled manifold. Variability that takes you off the manifold takes you into a region of the parameter space that leads to failure, so it needs to be fixed. This is noise that needs control.
With practice, both kinds of variability tend to decrease. You produce particular versions of the movement more reliably (decreasing manifold variance, or V-UCM) and you get better at staying on the manifold (decreasing variance living in the subspace orthogonal to the UCM, or V-ORT). V-UCM decreases less, however (motor abundance) so the ratio between the two changes. Practice therefore makes you better at the movement, and better at allocating your control of the movement to the problematic variability. This helps address the degrees of freedom control problem.
My current interest is figuring out the details of this and related analyses in order to apply it to throwing. For this post, I will therefore review a paper using UCM on throwing and pull out the things I want to be able to do. All and any advice welcome!
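As a warm-up, here is a minimal sketch of the core UCM computation (simulated data and a deliberately simple linear performance variable, the sum of two joint angles; a real analysis uses the Jacobian of the actual task function and normalises per degree of freedom):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated trials: two joint angles whose SUM is the performance variable.
# Compensated noise (along the manifold) is larger than uncompensated noise.
n = 500
on_manifold = rng.normal(0, 2.0, n)    # theta1 up, theta2 down: sum unchanged
off_manifold = rng.normal(0, 0.5, n)   # moves the sum: task-relevant
theta = np.column_stack([45 + on_manifold + off_manifold,
                         45 - on_manifold + off_manifold])

# Jacobian of the performance variable (the sum) w.r.t. the joint angles
# gives the direction orthogonal to the manifold.
J = np.array([1.0, 1.0])
ort = J / np.linalg.norm(J)

dev = theta - theta.mean(axis=0)
v_ort = np.var(dev @ ort)               # variability that changes the outcome
v_ucm = np.sum(dev**2) / n - v_ort      # variability left on the manifold

print(v_ucm > v_ort)   # True: 'good' variability dominates, a synergy signature
```

With real throwing data, theta would be the full set of joint angles per release, and the Jacobian would come from linearising release parameters (speed, angle, height) around the mean movement.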
Labels:
learning,
Scholz,
throwing,
UCM,
Uncontrolled Manifold analysis
Thursday, 15 September 2016
Uncontrolled Manifold Analysis
Human movement is hard to study, because there are many ways to perform even simple tasks and given the opportunity, different people will take different routes. It becomes hard to talk sensibly about average performance, or typical performance, or even best performance.
This fact - that the action system contains more elements than are needed to solve a given task - was first formalised by Bernstein as the degrees of freedom problem. Anything that can change state is a degree of freedom that can contribute to movement stability and if you have more than you need then there is immediately more than one way to perform a task. This means you have to select the best action, and even then there are always variations in the details of how you perform that action (Bernstein called this 'repetition without repetition'). From this perspective, selecting the right action means freezing out redundant degrees of freedom and working with just the ones you need.
A more recent way to think about the problem is as the bliss of motor abundance (Gelfand & Latash, 1998; Latash, 2012; see this recent post too). From this perspective, selecting the right action is about balancing the contributions of all the degrees of freedom so that the overall behaviour of the system produces the required outcome. Nothing is frozen out, but errors incurred by one degree of freedom are compensated for by changes in other degrees of freedom. If (and only if) this compensation happens, then you have a synergy in action.
This framing leads to a prediction and an analysis. It predicts that there are two kinds of movement variability - variability that pulls you away from your target state and variability that doesn't. The former is a problem that must be corrected by another element in the synergy compensating. Successful movement requires clamping down on this variability. The latter requires no correction, no control, and successful movements can still happen even if this variability is high. An analysis of movement then follows. You can decompose the variability of movement in the total state space of that movement into that which pulls you away from the target, and that which does not. Successful movement lives on a subspace of the total space of possible values of your degrees of freedom. If the ratio of the 'good' variability to the 'bad' variability is high, you are hanging out close to that subspace and working to keep yourself there, although not working to keep yourself doing anything in particular. You have a system that is working to compensate for 'bad' variability while ignoring the rest; a synergy defined with respect to the task demands.
This subspace is referred to as the uncontrolled manifold. It is uncontrolled because when the system is in this subspace of its total state space, it does not work to correct any variability because that variability is not affecting the outcome. Control only kicks in when you come off the manifold.
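A standard illustration is two fingers pressing to produce a target total force; this tiny sketch (my numbers, purely illustrative) shows which deviations live on the manifold and which come off it:

```python
# Two-finger force task: F1 + F2 must total 10 N.
# The uncontrolled manifold is the line F1 + F2 = 10 in the 2D force space.
target = 10.0

def task_error(f1, f2):
    """Deviation of total force from the target."""
    return (f1 + f2) - target

# A compensated deviation stays on the manifold: no correction needed.
print(task_error(6.5, 3.5))   # 0.0 -> on the manifold, ignored by control

# An uncompensated deviation leaves the manifold: this is what control fixes.
print(task_error(6.5, 4.5))   # 1.0 -> off the manifold, must be corrected
```

Any redistribution of force between the fingers that keeps the sum at 10 N is invisible to the task, which is why the system can safely leave it uncontrolled.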
Labels:
Latash,
synergies,
UCM,
Uncontrolled Manifold analysis
Wednesday, 7 September 2016
The Bliss of Motor Abundance
The fundamental question in psychology boils down to "why did that person do that thing they just did?". Welcome to my new favourite example.
Figure 1. I am pretty sure this move totally makes sense
Wednesday, 10 August 2016
The Affordances of Prehistoric Objects
I have a new paper in press at Scientific Reports (Wilson, Zhu, Barham, Stanistreet & Bingham, 2016; see also the slides from my EWEP14 talk) and I am so excited about it I can hardly cope. This project has been the most pure, good-old-fashioned science fun I've had in years and I'm very proud of the result.
The paper is an affordance analysis of some prehistoric objects (spherical rocks called 'spheroids') that were excavated from a cave in South Africa. There are several theories about what the prehistoric humans used these for, but one is that they were projectiles used for hunting. I created a simulation of projectile motion and set the parameters of the simulation using data from the literature on throwing for maximum distance. We then showed that a majority of our sample spheroids were ideally suited for throwing to inflict useful damage to a medium sized prey animal over fairly long distances. Given that we know humans have been anatomically specialising for throwing for millions of years, and given that we know modern humans can perceive the affordance for throwing to a maximum distance and select objects that best fit this affordance, we argue that these simulations provide evidence that these objects were selected to be used as projectiles.
This slightly out-of-left-field project was the result of some good luck, some careful management and the hard work of all my collaborators as we tied this together. The paper stands as an exemplar and proof-of-concept of how a task-dynamical affordance analysis can tell us about the behaviour of prehistoric humans.
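For flavour, here is a bare-bones, drag-free projectile sketch (the paper's actual simulation was more sophisticated and parameterised from the throwing literature; the release values here are illustrative only):

```python
import math

def throw_range(speed, angle_deg, release_height=1.8, g=9.81):
    """Drag-free range (m) of a projectile released at the given height."""
    a = math.radians(angle_deg)
    vx, vy = speed * math.cos(a), speed * math.sin(a)
    # Time of flight from the quadratic for vertical motion.
    t = (vy + math.sqrt(vy**2 + 2 * g * release_height)) / g
    return vx * t

# A hard overarm throw at ~25 m/s, released around 35 degrees.
print(round(throw_range(25.0, 35.0), 1))   # 62.3
```

The real analysis sweeps object mass and size against release speeds humans can actually produce, then asks which objects land useful amounts of kinetic energy at hunting-relevant distances.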
Figure 1. Spheroids
Labels:
affordances,
fun science,
prehistoric tools,
spheroids,
throwing,
tool use
Monday, 25 July 2016
Relational Theories of Affordances are Functional, Not Mechanistic (A Purple Peril)
Everyone and their dog has a theory about how to interpret Gibson's famously vague definition of affordances:
The affordances of the environment are what it offers the animal, what it provides or furnishes, either for good or ill....an affordance is neither an objective property nor a subjective property; or it is both if you like. An affordance cuts across the dichotomy of subjective-objective and helps us to understand its inadequacy. It is equally a fact of the environment and a fact of behavior. It is both physical and psychical, yet neither. An affordance points both ways, to the environment and to the observer.
There are two basic flavours of theories: affordances as dispositional properties of the environment (Turvey, Shaw, Reed & Mace, 1981; Turvey, 1992) and affordances as relational features of the animal-environment system. (A recent paper has claimed they are best understood as events, but to be honest I don't really know what's motivating this.) The two most recent and popular relational accounts are Chemero's book (Chemero, 2009; see these posts on the relevant chapter) and Rietveld & Kiverstein (2014) in a paper entitled 'A Rich Landscape of Affordances'. Their goal, like most of the relational accounts, is to handle higher-order cognition by scaling up affordances to support it (our move, in contrast, has been to expand the uses of perceptual information; Golonka, 2015; Golonka & Wilson, 2016 preprint).
I am firmly in the 'affordances as dispositional properties' camp (see, for example, the discussion section of my recent throwing paper for an extended analysis). Specifically, they are dynamically defined dispositional properties of objects and events in the context of tasks. The reason is that this is the only way affordances can be the kind of thing that can create information and therefore be perceivable. They have to be 'out there' and made of things that light can bounce off, for example, and relations between organisms and environments are not typically such things. In addition, if they do not exist until perceived, we need a story to explain how we come to learn to perceive them, and there is no viable ecological framework that will make this happen (Wilson et al, 2016).
Reading this material with my new mechanism glasses on has given me a new, concise way to identify the problems with these relational accounts:
Affordances-as-relations theories are all functional explanations, and not mechanistic explanations
Thursday, 21 July 2016
Framing the Debate
In 2014 we published a book chapter with Eric Charles in which we argued that the most important thing psychology and neuroscience needed from people like us was a new language in which to talk about the problems we are trying to solve. Our Ecological Representations paper is part of this, and we have a much larger paper in development laying out the more complete set of conceptual tools needed to do ecological psychology across a wider range of problems.
One reason why this is important is a simple fact: we are asking psychology to change, and it is up to us to clearly articulate what we want it to change into, or else nothing can happen. A related reason is that without a clear framework we can't reformulate the questions in a useful way; we're left stuck, for example, unable to explain something like 'theory of mind' when the actual solution is that ToM doesn't exist or need explaining. Ecological neuroscience, for example, will look very different to cognitive neuroscience.
A final reason is that the language in which psychology frames its understanding of behaviour drives popular understanding of behaviour too. I recently came across my favourite example of this in a tweet by Alice Dreger:
Dreger, for some reason, spends most of her life only using her right eye, even though her left is perfectly functional. She blogged about it here. Every now and again, something makes her left eye kick in and she suddenly has stereo vision.

"My dominant eye is very blurry so my left eye has come online (it rarely does) and now I'm seeing 3D. 3D is SO WEIRD. How do you stand it?" — Alice Dreger (@AliceDreger), July 19, 2016
What caught my eye here is that her description of her experience is grounded in the myth that you need two eyes in order to perceive in 3D (I bug my students about this in class every year too). The myth is based on the standard image-based analysis of vision, which I'll lay out below; but the point I want to make here is that people still describe their experience of monocular vision as 'not being able to see 3D/depth' even though this is inarguably, demonstrably not what is happening in their visual experience. It's like blind echolocators talking about how the sound creates 'an image in their minds'; this is just not the case, but this is the language psychology has provided them for talking about the perceived experience of spatial layout. What fascinates me is that it's trivial to demonstrate that monocular vision allows for 3D perception, but everyone lets the framing override their own experience. This, to me, is a big part of why our work right now is important - we will never make progress until we can reframe the debate.
Tuesday, 19 July 2016
Reply to Hamlyn: In detail
This is a detailed reply to Jim Hamlyn's critique of our Ecological Representations preprint (and related blog post). If you aren't him, then you might want to visit his blog to read his full review before proceeding…
First, Jim, thank you very much for providing such a thorough review of our paper. I hope this lengthy reply can be a part of an ongoing dialogue about these ideas.
I’ve written this the same way I did the one for Sergio Graziosi (here) – replying to each point you make without reading ahead.
Quotations are text taken directly from Hamlyn’s critique and are presented in the order in which they appear in his blog post.
“Hopefully they can be persuaded that this is only true if the required representations are of the fully public and intentional sort and not the neural and non-intentional sort that they seem to have embraced.”
As this sentence comes at the very beginning of your critique, let me take a second to say that I think what we propose here is consistent with a conception of public and intentional representations. The notion of informational representations is both public (in that they are external physical structures) and inherently intentional (in that they specify biologically relevant and, frequently, dispositional properties and, as such, coordinating with respect to this information is fundamentally FOR or ABOUT something, which satisfies most people’s construal of intentionality).
Monday, 18 July 2016
Reply to Graziosi: In detail
This post is a detailed reply to Sergio Graziosi's useful critique of our Ecological Representation pre-print. As such, it's specific to his particular concerns about our argument, but I'm putting it here so that others can join in the discussion. If you are reading this and are not Sergio, you might want to head over to his blog and read the critique first.
First, thank you very much for your detailed critique of the paper. It is incredibly useful and we are sincerely grateful for the time you've taken to comment on our crazy ideas.
A quick note to start with. I am largely writing this reply reading a little bit of what you’ve written at a time because I want to respond to each point you make and to keep myself honest in evaluating your later proposed solutions to some of the problems you’ve identified in the paper (so I can’t shift my positions on basic issues!). So, apologies if my responses to these points aren’t relevant because of something you say later in your reply.
Labels:
ecological representations,
Graziosi
Friday, 24 June 2016
Mechanisms for Cognitive and Behavioural Science (#MechanismWeek 5)
This week we have reviewed what a mechanism is, various ways to model mechanisms, and talked about the kinds of functional and dynamical models cognitive science currently relies on. We then rejected the argument that cognitive science cannot have causal mechanistic models that refer to real parts and processes. We claim that if we ground our models at the ecological level of perceptual information, a truly mechanistic analysis is possible, and we walked through a causal mechanistic model of a perception-action task as proof of concept for our claims. Sabrina then presented these ideas this week at a mechanism conference in Warsaw to an audience that included Bechtel and Craver (with great success, hurray! :)
The message we want you to go home with after #MechanismWeek is this:
Despite the fact that psychology has been trucking on very-nicely-thank-you developing various kinds of functional models, these remain extremely limited in their explanatory scope and they are not moving us towards explanatory mechanistic models. We have demonstrated that explanatory, causal mechanistic modelling of cognitive and behavioural systems is possible, so long as that analysis is grounded at the level of ecological information. These models are powerful scientific tools for exploring and understanding the behaviour of systems, and if we can get them, we should definitely be trying to.
The research programme for getting mechanistic models is that laid out in Bechtel & Abrahamsen (2010) who used the development of mechanistic models of circadian rhythms as an exemplar for cognitive science. That programme involves spending time empirically decomposing and localising the real parts and processes of that actual mechanism. This requires going into the mechanism at a useful level of analysis; if you are struggling to find real parts and processes, you might be working at the wrong level. Only once you know the composition and organisation of the mechanism do you try to model it, typically using dynamical equations containing terms serving as representations of each component, placed in the appropriate relation to one another.
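Bechtel & Abrahamsen's circadian exemplar gives a feel for what such a model looks like. Here is a minimal sketch (with illustrative rate constants, not the fitted values from that literature) of a Goodwin-style oscillator, the kind of model used for circadian rhythms, in which each variable and term answers to an empirically identified part or process of the mechanism:

```python
def goodwin_step(state, dt=0.01, a=1.0, b=0.1, c=1.0, d=0.1, e=1.0, f=0.1, n=10):
    """One Euler step of a Goodwin-style oscillator.

    Each variable maps onto a real part of the circadian mechanism:
    x -- clock-gene mRNA, y -- clock protein, z -- transcriptional inhibitor.
    Each term maps onto a real process: synthesis, inhibition, degradation.
    """
    x, y, z = state
    dx = a / (1.0 + z ** n) - b * x   # transcription, repressed by z; mRNA decay
    dy = c * x - d * y                # translation of x; protein decay
    dz = e * y - f * z                # activation by y; inhibitor decay
    return (x + dx * dt, y + dy * dt, z + dz * dt)

# integrate forward and track the mRNA concentration
state = (0.1, 0.1, 0.1)
trajectory = []
for _ in range(5000):
    state = goodwin_step(state)
    trajectory.append(state[0])
```

The point of the exemplar is not these specific equations but the mapping: unlike a functional model, every term here corresponds to a component uncovered by decomposing the actual mechanism, so the model can be tested and revised component by component.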
We have risen to their challenge by identifying the ecological level of analysis as the correct place to ground our work, and by identifying a cognitive science model that parallels the biological exemplar. It is our hope that this work will help others move in the direction of mechanistic research and modelling in the cognitive and behavioural sciences, so that we all gain the many benefits of causal mechanistic explanations.
This work will form the centre piece of a large scale paper we are currently writing. We've posted this part of the work on the blog in part to stake a claim to this analysis, but also to try and garner useful feedback from interested parties. If you have questions, comments or feedback, please contact us by commenting on any relevant post (we'll see it, even if it's on an older post), emailing us or finding us on Twitter.
Thanks for reading along with us! We hope you enjoyed #MechanismWeek :)
Labels:
Mechanism Week,
mechanisms,
mechanistic models
Thursday, 23 June 2016
Ecological Mechanisms and Models of Mechanisms (#MechanismWeek 4)
Mechanistic models are great, but so far cognitive science doesn't have any. We have functional models (of, for example, memory or categorisation) and dynamical models (of, for example, neural networks) but none of these can support the kind of explanations mechanistic models can. Is that it for psychology, or can we do better?
Here we propose that it's possible to do psychology in a way that allows for the development of explanatory, mechanistic models. The trick, as we have discussed, is to identify the correct level of analysis at which to ground those models. These models will definitely end up being multi-level (Craver, 2007), but the form of these final models will be dictated and constrained by the nature of the real parts and operations at the grounding level.
The correct level of analysis, we propose, is the ecological level. Specifically, ecological information is going to be the real component whose nature will place the necessary constraints on both our empirical investigations of psychological mechanisms as well as the mechanistic models we develop.
Let's see how this might work.
Wednesday, 22 June 2016
Do Dynamic Models Explain? (#MechanismWeek 3)
So far we have learned what a mechanism is, two ways of modelling mechanisms (functional and mechanistic) and we've identified that cognitive science is currently dominated entirely by functional models which will never actually turn into mechanistic models without a change in our research priorities.
Bechtel & Abrahamsen (2010) challenged cognitive science to make the move to mechanisms. They laid out the form of the necessary research programme (empirically decompose and localise the actual mechanism, and then model those real components), and they described all the benefits of mechanistic models we might really want to have. The main benefit is a move from mere description to proper explanation of the mechanisms we study, and this certainly seems like something we would want.
But can we do the work needed to get mechanistic models? Can we decompose cognitive systems into sensible components and model the result? Chemero & Silberstein (2008) and Silberstein & Chemero (2010) argue we can't, because cognitive systems are nonlinear and therefore non-decomposable; it makes no sense to break them down into parts because the behaviour of the system is more than the sum of those parts. They do argue, though, that dynamical models are appropriate and count as proper explanations. This post will review but ultimately reject this argument.
Labels:
Chemero,
dynamical explanations,
Mechanism Week,
Silberstein
Tuesday, 21 June 2016
Cognitive Models Are Not Mechanistic Models (#MechanismWeek 2)
So far we have talked about what mechanisms are and what sort of model counts as properly mechanistic. The next question is to have a look in more detail at the models of cognitive science and see how far they can take us towards mechanistic explanations.
Last time I discussed the examples of research on memory, visual object recognition and categorisation. This kind of functional modelling work is the rule, not the exception in cognitive science - it's how we're taught to work and how the field moves along.
This kind of program feels like it's heading towards mechanism. Every division into new sub-capacities comes from work showing the two sub-capacities function differently and are therefore the result of different mechanisms. Every new representational model adds a new component (part or process) that handles another part of the capacity. There is one basic problem, however. None of these models make any explicit reference to any real parts or processes that have been empirically identified by other work - for example, 'working memory' still refers to a capacity, not a component. This means there is no reason to think this new capacity maps onto any particular parts and processes or, if it does, which ones.

"@PsychScientists A mechanism is a graph with at least three boxes and two arrows." — Tim van der Zee (@Research_Tim), May 24, 2016
We are, in effect, doing science backwards: modelling first, running experiments later, and the result is that we are not actually on a trajectory towards mechanistic models, just better functional ones. This is a problem to the extent you want access to the many real benefits mechanistic models offer, in particular the ability to explain rather than simply describe a mechanism (see Bechtel & Abrahamsen, 2010 and the last post). This post reviews whether functional models explain or whether they can be part of a trajectory towards an explanation. The answer, unsurprisingly, will be no.
Monday, 20 June 2016
Mechanisms and Models of Mechanisms (#MechanismWeek 1)
In this first #MechanismWeek post, I will define a mechanism and briefly describe the kind of models of mechanisms you can build. I begin with various kinds of functional models (Cummins, 1975, 2001; Weiskopf, 2011). These either break capacities of systems into more coherent, easily studied sub-capacities (think of breaking memory into long term memory and short term memory as a simple example) or model them with components that may or may not be really implemented in the organism (e.g. geons, exemplars).
I will then introduce the idea of a dynamic causal mechanistic model (Bechtel & Abrahamsen, 2010), which attempts to map model components directly onto the real parts and processes of the mechanism at hand. The argument is that while functional models provide useful descriptions of mechanisms, they do not provide an explanation of that mechanism, and that only mechanistic models can explain.
Wednesday, 15 June 2016
Ecological Representations
Funny story. One day, I got a text from Sabrina that said "Holy crap. I think ecological information is a representation." "Uh oh", I thought - "Twitter is gonna be maaaaad". Then we thought, "we should probably write this idea down, see if we can break it". So we wrote, and the funnier thing was....it all just got stronger. The result is a paper we call "Ecological Representations", which we have just uploaded as a preprint to BioRxiv.
In it, we
- argue that Gibsonian, ecological information meets the criteria to be a representation, then
- predict that this information leads to neural activity that preserves its structure and that also meets the criteria to be a representation, and
- argue that these two ecological representations (informational and neural representations) can address the three core reasons cognitive science wants representations (getting intentionality/aboutness from a physical system, solving poverty of stimulus and enabling higher order cognition) while
- avoiding the two big problems with mental representations trying to address those motivations (symbol grounding and system-detectable error).
- We then spend a bunch of time getting serious about higher order cognition grounded in information (see, I told you we were working on it!)
That said, we are taking one hint from the process before submitting elsewhere, and that is that we are clearly having trouble articulating the argument, in part, I think, because it comes out of left field and we're tripping a lot of different knee-jerk reactions. We think the story makes sense but then we're us, so what we need is some fresh eyes. This is where you lovely people come in.
I have made some minor structural revisions to the version that got reviewed to address some of the issues that came up and I have uploaded it as a pre-print to BioRxiv.org. Now, we want your help.
Labels:
ecological representations,
information,
preprint
Tuesday, 14 June 2016
#MechanismWeek (a week of posts commencing June 20th 2016)
Cognitive science is, in principle, the search to understand the mechanisms that cause our behaviour to look the way that it does. We run experiments designed to figure out the form of the behaviour to be explained, and we propose models that try to account for the behaviour. But how well are we doing, and can we do better?
It turns out that there is a rich and extremely useful philosophical literature about mechanisms. Specifically, there is a lot of clear and accessible work describing what mechanisms are, and, more importantly, how science can go about modelling those mechanisms. This literature has provided us with a wonderfully useful central focus for our ongoing work, and I wanted to walk through the key issues here. (We have covered this topic in a couple of posts - here and here - but there are several interlocking issues that I want to spell out one at a time).
Sabrina is in Warsaw June 23-25th attending the Mechanistic Integration and Unification in Cognitive Science conference, where she will present on how ecological information provides the key to mechanistic explanations in psychology.
To celebrate, there will be a new post from her every day of the week commencing June 20th 2016 on the topic of mechanisms, specifically cognitive mechanisms, and how to model them. Below is a list describing the upcoming posts and providing links to some useful reading.
We'd like to invite you all to play along, either in the comments or on Twitter (where we live as @PsychScientists; suggested hashtag #MechanismWeek). I've sketched out the week below, with recommended readings so you can play along.
Labels:
mechanism,
Mechanism Week,
reading group
Wednesday, 11 May 2016
The Shrunken Finger Illusion
Ed Yong has a great write-up of an interesting little study in Current Biology (Ekroll, Sayim, Vander Hallen & Wagemans, 2016) that caught my eye. The study reports an illusion (the 'shrunken finger illusion') that shows how amodal volume completion can make you feel like your finger has shrunk, and everyone is very excited about how this shows our experience of the hidden back-sides of objects is "real".
In this post, I'll review the results, do a little ecological finger wagging about the breathless write-up (Ekroll's, not Ed's) and think about some studies the ecological reframing of the effect might motivate. Briefly, I think this effect is definitely real, and that we really do genuinely perceive hidden objects under certain circumstances. Of course, this has nothing to do with amodal mental representations of what we think is there and everything to do with the information the system is interacting with, but you know that of course because this is always the answer!
Wednesday, 27 April 2016
Information use is shaped by bodily dynamics
I've just discovered a treasure trove of 30 talks recorded at the 2015 International Conference on Perception and Action (the main US ecological psychology conference). I just watched this one by Brett Fajen on some work he's done on how far ahead you have to look in order to control walking over irregular terrain. The answer is 'just far enough ahead so you can parameterise the passive dynamics of the walking system and then leave it to execute the step without additional control requirements'. It's a cool talk, some fun data and it's been tied to some cool simulations of the relevant dynamics. (Edit: Brett emailed and asked that I give lots of credit to his student John Matthis, now a post-doc at the University of Texas, for the coolness of this project!)
This is a nice empirical demonstration of the kind of hard core embodied cognition that the ecological approach involves. Embodied cognition in all its forms is roughly the hypothesis that the form of our bodies and our engagement with the world shape cognition. This means that if you want to understand cognition, you have to understand what kind of contribution the body is making so that you know what's left over for things like representations to do. Fajen's study gets serious about quantifying what the body contributes to performance of this task and uses that to learn a lot about what perception has left to do. The net result is that human locomotion becomes extremely efficient - control pops in and out as required and the rest is effectively for free.
The strong 'replacement', 'radical' argument is that embodiment changes the game so much that what's left over to do, if anything, doesn't need things like representations. This talk isn't directly about these underlying issues. But it is a nice data set about how our perceptual engagement with the world (specifically, where and when we look around us as we locomote through a cluttered environment) is shaped and tuned so as to provide information in 'just-in-time' fashion so as to control a particular dynamical device with maximum efficiency. There's no planning, modelling, rehearsing, predicting - there's just carefully timed perception-action loops shaped by the dynamics of the task at hand. This is, in essence, what we think is going on all the time for basically everything.
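A toy sketch of what 'control popping in and out' could look like (this is my illustration, not the Matthis & Fajen model; the noise level, threshold and gain are invented for the example): a point travels ballistically toward a goal, and visual correction only engages when the projected endpoint would miss by more than a tolerance. Otherwise the passive dynamics run 'for free'.

```python
import random

def intermittent_walk(target=10.0, duration=10.0, dt=0.1,
                      threshold=0.5, gain=0.5, seed=1):
    """A point travels ballistically toward `target`, perturbed by noise.
    Corrections engage only when the projected endpoint would miss the
    target by more than `threshold`; otherwise no control is applied."""
    rng = random.Random(seed)
    pos, vel = 0.0, target / duration   # calibrated ballistic launch
    t, corrections = 0.0, 0
    while t < duration:
        vel += rng.gauss(0.0, 0.05)     # unmodelled perturbation
        pos += vel * dt                 # ballistic step: zero control cost
        t += dt
        remaining = duration - t
        projected_miss = (pos + vel * remaining) - target
        if remaining > 1.0 and abs(projected_miss) > threshold:
            vel -= gain * projected_miss / remaining   # brief corrective burst
            corrections += 1
    return pos, corrections
```

The point of the sketch is only that accuracy can be maintained with control engaged on a small fraction of steps, which is the flavour of the efficiency claim above.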
This talk won't convince anyone to be radical anything if you aren't already; after all, it's still "merely" perception and action, not the juicy stuff like language. That's fine. But it's a nice example of all the pieces of this kind of research programme, plus I'm getting increasingly interested in Brett's work more generally anyway, so I thought I'd link to it here.
References
Fajen, B. R. (2013). Guiding locomotion in complex dynamic environments. Frontiers in Behavioral Neuroscience, 7:85.
Matthis, J. S., Barton, S. B., & Fajen, B. R. (2015). The biomechanics of walking shape the use of visual information during locomotion over complex terrain. Journal of Vision, 15(3), 10.
Matthis, J. S., & Fajen, B. R. (2014). Visual control of foot placement when walking over complex terrain. Journal of Experimental Psychology: Human Perception and Performance, 40(1), 106-115.
Labels:
embodied cognition,
Fajen,
ICPA2016,
information,
locomotion
Tuesday, 19 April 2016
Brains learn to perceive, not predict, the world
The current trendy way to think about brains is as predictive machines. Bayesian methods, free energy, generative models, and all the other things in Andy Clark's new book are where it's at. Brains have to predict because, in Clark's terms, we are 'surfing uncertainty' - one step from disaster as we control our behaviour using sensory information we had to spend time processing and that is therefore a few hundred milliseconds out of date. Our brains build models of the world that constantly generate active predictions about what sensation will tell us next, and our behaviour holds together to the extent that the models do.
The ecological approach provides a different job description for the brain (Charles, Golonka & Wilson, 2014). We are embedded in environments rich in information that specifies a wide variety of behaviourally relevant properties. Some of that information is prospective; that is, it is information available now that is about something in the future. Two examples are the information variables you can use to catch a fly ball; both are states of the world available at the current time that, when used, allow you to achieve a behaviour in the future (specifically, to be in the right place at the right time to catch that ball). Another example is tau and the various other variables for time-to-collision.
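Tau is a nice concrete case of prospective information. For an object approaching at constant speed, the ratio of the optical angle it subtends to that angle's rate of expansion approximates time-to-contact, without the observer needing to estimate distance or speed. A minimal worked example (the function names are mine):

```python
import math

def optical_angle(size, distance):
    """Visual angle (radians) subtended by an object of physical `size`
    viewed head-on at `distance`."""
    return 2.0 * math.atan((size / 2.0) / distance)

def tau(size, distance, speed, dt=1e-4):
    """Estimate tau = theta / theta-dot from two optical samples of an
    object closing at constant `speed`. The observer only needs the
    optical angle and its rate of change, not distance or speed."""
    theta_now = optical_angle(size, distance)
    theta_next = optical_angle(size, distance - speed * dt)
    theta_dot = (theta_next - theta_now) / dt
    return theta_now / theta_dot
```

For an object 0.5 m wide, 20 m away and approaching at 5 m/s, the true time-to-contact is 20/5 = 4 s, and tau recovers approximately that value from the optics alone.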
This post reviews a paper (van der Meer, Svantesson & van der Weel, 2012) that measured visual evoked potentials in infants longitudinally at two ages, using stimuli that 'loomed' (i.e. looked like they were going to collide with the infants). The data show that the infant brains were not learning to predict the world. Instead, neural activity became more tightly coupled to information about the time-to-collision. We learn to perceive, not predict, the world.
Thursday, 14 April 2016
Peer Review: Solemn Duty or Merely Extra Work?
The other day on Twitter I saw Rolf Zwaan tweeting about Collabra's policy of rewarding reviewers with credit that can be paid into an organisation's article processing charge (APC) budget, into Collabra's fee waiver account, or out to the reviewer as cash. This idea, of paying reviewers for reviewing, comes up now and again on Twitter and it always sounds wrong to me. My gut response is that we should not be paying individuals money to review articles.
So I did the normal thing and posted a Twitter poll, with the following results:
Turns out I am in the minority! The various discussions I've had around this tweet have been kind of interesting too.

"Thesis: getting paid for peer review is a bad idea because it's part of our collective duty to improve science" (Andrew & Sabrina, @PsychScientists, April 13, 2016)
Labels:
good science,
peer review,
science
Saturday, 9 April 2016
The Art of the New Collaboration
I have two new collaborations on the go right now and they are finally working, so I thought I'd write down what I think I did right that got these moving. I'd love to hear your thoughts and ideas about how to start new collaborations; I suspect there are many scientists out there, young and old, who would be very interested in what we think works.
My main thought is this. A new collaboration is, by definition, a new project that brings together the skills and resources of at least two groups. These two groups will know their stuff, but not necessarily each other's, and that means there's work to be done bringing people up to speed. In addition, each group will have existing interests and projects, and that means finding a way to implement some new, probably unfunded work using what you have to hand without interfering with the existing work.
My current solution that seems to be working: choose ONE simple, easy to implement project and work it through to completion.
Labels:
collaborations,
good science,
how to science
Wednesday, 16 March 2016
The High Price of Open Access
We've been chatting about open access journals, and how surprisingly expensive it is to publish in them. Obviously there are costs involved in publishing, but given it's all digital and a lot of the labour remains free, why are the fees so high?
Because we are clearly just floating along in the zeitgeist and completely, if subconsciously, attuned to the academic universe, it turns out this question arose the other day on Twitter. Andrew Kern started with this tweet and then continued to dig (full Storify of the tweets here).
The answer, it turns out, is that the very large amount of money that PLOS makes goes into all kinds of surprising things: huge (but unfortunately normal) CEO salaries, investments in stocks, building up reserves, and investment in the company and its infrastructure (in particular a new submission system). Michael Eisen then came back with some useful context (full Storify here) which addresses some of these issues; much of the investment in the company is around open access advocacy, etc.

"So I was staring at an invoice for page charges at PLoS Genetics ($2250) and wondered what could they be doing with all that money" (Andrew Kern, @pastramimachine, March 15, 2016)
PLOS created open access, and proved it could be done and make money. This is a remarkable achievement, and one managed in a pretty short amount of time. They are strong advocates for open access, and this advocacy requires time, people and money to advertise, attend conferences, lobby and more. All of this is important work and I really do appreciate it - open access simply must be the future of scientific publishing.
I am, however, still not convinced that I should be paying for that activity via my article processing fees.
Labels:
open access,
PLOS,
science,
scientific publishing
Monday, 7 March 2016
Affordance-based control (Fajen 2005, 2007)
The most commonly studied tasks in the ecological approach involve the perceptual control of actions such as interception and steering. These models all involve perceiving some variable and moving so as to null the discrepancy between a current value and an ideal value. However, none of these approaches involve the perception of affordances; specifically, none of them address how people work to keep the required corrections possible, given their action capabilities. Fajen (2005, 2007) proposes affordance-based control, an ecological research framework that brings these questions to the fore and leads to the discovery of new, affordance-based control strategies that account well for the data and solve the problems of simple information-based control models.
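The generic null-the-discrepancy control law the models above share can be sketched in a few lines (a schematic illustration only, not Fajen's model; the parameter values are mine):

```python
def null_discrepancy(current, ideal, k=2.0, dt=0.01, steps=500):
    """Continuously move so as to reduce the gap between a perceived
    variable's current value and its ideal value:
        d(current)/dt = -k * (current - ideal)
    The discrepancy decays exponentially toward zero."""
    for _ in range(steps):
        current += -k * (current - ideal) * dt
    return current
```

Fajen's point is that a law like this says nothing about whether the corrections it demands remain within the actor's action capabilities; affordance-based control is about adding exactly that constraint.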
My current sense is that Fajen is absolutely correct in his assessment of the problems and has done sterling work developing an ecological solution. What follows is a brief description of the problems and his solutions; in the future I will blog some thoughts as I work to align my throwing affordance work with this framework.
Sunday, 21 February 2016
How Worried Are You By The Symbol Grounding Problem?
Imagine you're a mental representation. You are a computational symbol system, and your job is to contain knowledge that is about the world and that can help your organism interact with that world (Newell, 1980). The 'aboutness' thing is the most important part of you - you are an intentional system, which means you have content that is meaningful.
So where did your content come from? (I'd like to know your thoughts, so please help by answering the questions at the end!)
Friday, 5 February 2016
On "The poverty of embodied cognition" (Goldinger et al, in press)
A new paper in Psychonomic Bulletin and Review (Goldinger, Papesh, Barnhart, Hansen & Hout, 2015) has taken a swing at the field of embodied cognition, claiming that it is vague, trivial and unable to add anything scientific to the investigation of cognition.
...our goal is to zoom out from specific empirical debates, asking instead what EC offers to cognitive science in general. To preview, we argue that EC is theoretically vacuous with respect to nearly all cognitive phenomena. EC proponents selectively focus on a subset of domains that work, while ignoring nearly all the bedrock findings that define cognitive science. We also argue that the principles of EC are often (1) co-opted from other sources, such as evolution; (2) vague, such that model building is not feasible; (3) trivially true, offering little new insight; and, occasionally, (4) nonsensical.

My basic take is a) I actually agree with a lot of the criticisms in the context of the kinds of 'embodied' cognition we critique for similar reasons, but b) there is nothing new to any of these critiques, none of them are compulsory failings of the field and nothing about them makes embodiment an intrinsically empty notion.
Tuesday, 26 January 2016
Information is typically dense and continuous (A Purple Peril)
Optic flow is everywhere, all the time (same with other energy arrays, like the acoustic array). We depend on this fact deeply. When we are cast adrift from information, our behaviour quickly accumulates errors and strays in often disastrous ways. One example is the case of friction, which doesn't exist until two surfaces are in contact and therefore does not create information about itself that is available ahead of that contact. In another example, when cut off from landmarks, people walk in huge circles, getting seriously lost and confused (Souman, Frissen, Sreenivasa & Ernst, 2009). A simple version of this is the game of walking with your eyes closed; you quickly lose all confidence about where you are and what's happening and it's actually very difficult to make yourself walk at normal speed.
The Perilous proposal is that behaviour emerges in real time, as a function of the current flow of information, and that this flow is typically dense and continuous, not intermittent. I will illustrate this with an example of two designed sets of instructions for navigating through a building, where the dense information set leads to better, more stable behaviour.
Labels:
design,
information,
Purple Peril
Thursday, 21 January 2016
A Quick Review and Analysis of Perceptual Control Theory
Perceptual Control Theory (PCT; Powers, 1973) is a theory that proposes behaviour is about the control of perception. We act so as to keep some perceived part of the world at some state, and it's by doing this to sensible variables that we come to exhibit functional behaviour. People have noted the seeming overlap between PCT and the ecological approach, and its advocates (mainly Richard Marken and Warren Mansell) all talk about it in revolutionary terms that should also feel a bit familiar.
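The core PCT unit can be sketched as a negative feedback loop (a minimal toy after Powers' scheme; the gain, disturbance and time step are my illustrative choices):

```python
def pct_unit(reference=10.0, gain=5.0, disturbance=3.0, dt=0.01, steps=2000):
    """Minimal perceptual control loop: a comparator computes the error
    between a reference signal and the perceptual signal, an integrating
    output function acts on the world, and the perceptual signal reflects
    both that output and an environmental disturbance."""
    output = 0.0
    perception = 0.0
    for _ in range(steps):
        error = reference - perception      # comparator
        output += gain * error * dt         # integrating output function
        perception = output + disturbance   # environmental feedback path
    return perception, output
```

The loop holds perception at the reference (10) by producing whatever output cancels the disturbance (here, 7): the output varies, the perception stays put, which is the sense in which PCT says behaviour is the control of perception.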
I first encountered it in the context of an interview with Richard Marken on a now defunct blog (pdf of the archived page; link to page and scroll down to "Interview with Richard Marken"). Marken and I got into it a bit in the comments, as you will see! I was not impressed. However, Mansell & Marken (2015) have just published what they pitch as a clear exposition of what PCT actually is and how it works. I took the opportunity to read this and evaluate PCT as a 'grand theory of behaviour'.
My basic opinion has not changed. PCT is not wrong in most of its basic claims, but it has no theory of information, or of how that information comes to be made or relates to the dynamics of the world. It's an unconstrained model-fitting exercise, and its central ideas simply don't serve as the kind of guide to discovery that a good theory should. Ecological psychology does a much more effective job of solving the relevant problems.
Labels:
control,
coordination,
Mansell,
Marken,
models,
outfielder problem,
PCT,
Perceptual Control Theory
Wednesday, 6 January 2016
Tasks from the First Person Perspective (A Purple Peril)
The great snare of the psychologist is the confusion of his own standpoint with that of the mental fact about which he is making his report. I shall hereafter call this the 'psychologist's fallacy' par excellence.
(William James, The Principles of Psychology)
This is a video of a baby trying bacon for the first time. The baby gets really really excited, and everyone around him goes 'Ha! Babies love bacon as much as the rest of us, this is great!'. And everyone laughs and cheers.
When people come into our labs to take part in experiments, we present them with a situation that we have designed to elicit a specific behaviour from them, and that we manipulate in various ways in order to probe the makeup of that behaviour. We therefore think we know what the person is doing: they are doing the thing we asked them to do. However, this isn't necessarily true, and in order to figure out what our participants did and why, we need to consider how they experienced the experiment. In effect, doing our science right means taking the first person perspective of our participants when we formulate our explanations.
I take this idea primarily from Louise Barrett's excellent book, Beyond the Brain: How Body & Environment Shape Animal and Human Minds (which I reviewed here). The first couple of chapters spend a lot of time talking about anthropomorphism, and why it's a problem. To be honest, when I read the book I didn't quite know why Louise started with this. But over time, I've realised what an extraordinarily powerful point it is, and we now talk about it all the time.
Peril Proposal: The psychologist's fallacy is real, but the ecological approach to understanding task dynamics and the information they create offers a useful framework for avoiding it while we science.
Labels:
coordination,
dynamics,
information,
mechanisms,
Purple Peril,
sports,
transfer of learning