Add psychology to the list
Sabrina and I have been working on this for, well, the entire blog. It has been a place for our "brave attempt to think out loud about theories of psychology until we get some" since day one; we've been identifying problems but, just as importantly, solutions the whole time. The theory post identified the big picture problem we see in psychology; time to lay out some solutions.
Step one is to present a map of the blog, organised thematically to guide new readers to work we've already done here. This should also help map out the gaps in the approach, so we can focus on things to do next; feel free to point us to problems we can't yet address! (And yes, we know about episodic memory and language - we're working on it.) This post is not a comprehensive summary of past work - it's a map for you to use to find what we've done so far.
To summarise: in essence, and some minor details aside, we are advocating for Chemero's (2009) radical embodied cognitive science, with the addition of some elements he was missing (network science & task specific devices). Cognition is embodied, extended and held together by the direct perception of affordances and events; the result is a complex, nonlinear dynamical system that must be analysed as such. The brain is not the sole source of our behaviour, nor is it representing the world; it clearly plays a critical role in this system, though, and we propose that we'll need the tools of network science to describe what it's actually up to (Sporns, 2010). Methodologically, we must carefully characterise the task, the resources available to solve the task (which include brain, body and environment) and the information these resources create which can sustain the formation and control of an embodied solution. This method is Bingham's (1988) task specific device approach (the main piece Chemero was missing, I think). This approach applies to any and all behaviour you want to explain, including the hard stuff like episodic memory and language.
Critically, this approach, while new (and uncommon in insisting on a role for Gibson's ecological approach), isn't just something we invented: all these elements are active parts of modern cognitive science. The only new part is bringing it all under one roof, with the goal of getting on and getting some decent normal science under our belts.
Here's what we've covered so far. If you want more details on any point, click on the links!
Cognition is embodied
The first claim we want to defend is that cognition is embodied. Embodied cognition is not the hypothesis that the contents of cognition can be affected a bit by our bodies (as implied in this study). Embodied cognition is actually the fairly radical hypothesis that the brain is not the sole resource we have available to us to solve problems. We perceive and act in very particular ways so as to generate information and solve problems non-computationally (for example, fielders catch fly balls by moving so as to cancel out either the optical curvature or the optical acceleration of the ball's motion, which happens to bring them to the right place at the right time). The bodies we move are built in very specific ways; our hands, for example, are built as if they are implementing certain computations that are required to control them. This 'morphological computation' isn't actually computation, though; it's more like the Watt governor (van Gelder, 1995). A great example of this idea in action is Big Dog, one of the many awesome robots built by Boston Dynamics.
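If it helps to make the fly ball example concrete, here's a toy simulation (our construction, not taken from any of the cited papers; the launch numbers are arbitrary). The informational basis of the strategy is simple geometry: for a fielder standing exactly where the ball will land, the tangent of the ball's elevation angle rises linearly over time, i.e. with zero optical acceleration. Stand anywhere else and it accelerates. Moving so as to null that acceleration therefore takes you to the catch, no trajectory prediction required:

```python
def tan_elevation(d, t, vx=20.0, vz=20.0, g=9.8):
    """tan of the ball's elevation angle, seen from distance d down-range."""
    height = vz * t - 0.5 * g * t * t
    ground = d - vx * t          # horizontal gap between fielder and ball
    return height / ground

def max_optical_acceleration(d, dt=0.1, steps=30):
    """Largest second difference of tan(elevation): the optical acceleration."""
    ts = [dt * i for i in range(1, steps + 1)]
    f = [tan_elevation(d, t) for t in ts]
    return max(abs(f[i - 1] - 2 * f[i] + f[i + 1]) for i in range(1, len(f) - 1))

landing = 20.0 * (2 * 20.0 / 9.8)   # where this projectile lands (~81.6 m)
print(max_optical_acceleration(landing))        # ~0: optical motion is linear
print(max_optical_acceleration(landing - 10))   # clearly non-zero off the spot
```

The point of the sketch is that the quantity the fielder needs is available in the optics while the ball is still in flight; no internal simulation of projectile motion appears anywhere.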
Embodiment changes what 'cognition' will end up looking like. By changing the job description (e.g. what resources we have available to solve problems) we end up proposing entirely different solutions to tasks. An excellent recent book on this topic is Barrett (2011), Beyond the Brain: How body and environment shape animal and human minds. If you allow yourself bodies, behaviour and perception, then you typically don't end up needing complex computational solutions implemented in the brain.
Cognition is extended
A logical extension to embodied cognition is the claim that cognition is extended (Clark & Chalmers, 1998). This is the claim that things in the environment literally form part of the cognitive process. This can be summarised in Clark & Chalmers' 'parity principle':
"If, as we confront some task, a part of the world functions as a process which, were it done in the head, we would have no hesitation in recognizing as part of the cognitive process, then that part of the world is (so we claim) part of the cognitive process."

Clark & Chalmers, 1998, pg. 2

There is still debate about how well this idea works, mostly coming from Adams & Aizawa (2010). They believe the hypothesis is grounded in a confusion between coupling and constitution: while we are indeed coupled to things in the world, those things need not thereby constitute part of our cognition. We've had various arguments with Ken Aizawa about this (summarised here); I think the main problem with their argument is that there is no need for all the parts of a cognitive system to have 'the mark of the cognitive' if we're happy that the system as a whole is cognitive. This works, I think, because of the nature of the coupling that goes on when we interact with the world: objects literally become part of us when we interact with them, and the kind of ongoing perception-action loops that support this run deep.
In order to solve a given task, then, we use a wide variety of resources; some of these are neural, but not all. Some of the resources are our bodies (our visual system is composed of mobile eyes in a mobile head on a mobile torso equipped with legs, for example), while some are objects and other people in our environments. A theory of psychology must therefore include all these resources.
The role of perception
Extended, embodied cognition requires impressive perception. Typically, perception is seen as the end point of a complex process, taking impoverished input and enriching it until it is good enough to be useful. Cognition then becomes a computational process of adding knowledge and structure to our experience. If, however, cognition is to be the solving of problems using resources distributed beyond the brain (as above) then this account isn't good enough.
We already have a theory of perception that is up to the task of providing the kind of access to the world that we need: James J. Gibson's ecological approach to perception (Gibson, 1979; see the reading group posts on this book). Gibson's book begins with the environment: what is available to the perceiving organism that it might be interested in using. Starting there, rather than with the anatomy of the eye, led Gibson to his two key ideas: affordances and information.
Affordances
Affordances are the opportunities for behaviour the world offers to a given organism; a handle affords grasping to an organism with a hand, for example. Technically, they are dispositions of the environment. Salt is disposed to dissolve in water, for example, but doesn't dissolve until placed in water. Affordances are dispositions supporting behaviour, but that behaviour doesn't show up until a matching organism comes by. This way of thinking of affordances makes them real properties of the world which persist in the absence of organisms. (Chemero (2009) advocates treating affordances as relations (see here and here); I talked about this debate here and here and summarised it here. Long story short, I think Chemero is confusing affordances and information; the latter is relational and does every relational thing Chemero wants affordances to do, without the problems.)
Some terminology (based on Turvey, 1992 & Turvey, Shaw, Reed & Mace, 1981): affordances are complex dispositional properties, composed of combinations of anchoring properties. These anchoring properties are things like the composition of surfaces - their size, shape, etc. Some organisms have complementary anchoring properties (e.g. a hand of the right size and shape) and can effect an affordance. Which properties of organisms matter is still a matter of debate: some people have proposed body scale as a key property (e.g. Warren, 1984) while other researchers feel we need something more like ability (Chemero, 2009) or effort (Proffitt, 2008). The latter is likely the right path, but is, as yet, poorly defined.
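The body-scale proposal is easy to make concrete. Warren's (1984) stair-climbing study found that stairs stop affording ordinary bipedal climbing at a riser-height to leg-length ratio of roughly 0.88. A minimal sketch of that result (the 0.88 cut-off is Warren's empirical estimate; the leg lengths below are made-up illustrative values):

```python
# Warren (1984): the climbability of a riser is predicted by the
# body-scaled ratio riser_height / leg_length, with a critical value
# of roughly 0.88 across climbers of different sizes.
CRITICAL_RATIO = 0.88

def climbable(riser_height_m, leg_length_m):
    """Does this riser afford climbing to this climber (body-scaled units)?"""
    return riser_height_m / leg_length_m < CRITICAL_RATIO

# The same 0.5 m riser affords climbing to a long-legged climber but not
# to a short-legged one: the affordance is organism-relative even though
# it is anchored in real properties of the stair and the body.
print(climbable(0.5, leg_length_m=0.9))   # True  (ratio ~0.56)
print(climbable(0.5, leg_length_m=0.5))   # False (ratio 1.0)
```

Note how the extrinsic measure (0.5 m) predicts nothing on its own; only the ratio, which combines an environmental anchoring property with a complementary organismal one, tracks the behaviour.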
Information
Affordances are easy to define; the real question is whether there is perceptual information available to an organism for those affordances. The most detailed explanation of how affordances give rise to information is Turvey et al (1981), who lay out the concept of ecological laws to expand on Gibson's (1979) account (see this post, this post and this post on Chapter 5, and this post and this post on Chapter 6 for Gibson's description). These laws govern how the anchoring properties of affordances interact with energy such as light to create structure in that light; this structure, by virtue of the law, is specific to the affordance. The laws are ecological in the sense that they have a limited scope: a given law does not apply universally, but only in the kinds of niches we find ourselves in. Within the scope of the law, however, ecological optics explains how affordances structure light to create information.
Because this information specifies the affordance (i.e. there is a 1:1 mapping between the optics and the world), detecting the optical information is equivalent to perceiving the property of the world. Perception is therefore direct: unmediated by any internal states. As things currently stand, direct perception requires this law-based specification relationship. Whether specification is actually required is a topic of debate, and we'll be getting into that soon.
The one thing that information needs to have done to it in order to be useful for the control of action is calibration. Information variables are unitless: optical information is all angular, for example, and in order to use information to act in space you must apply a metric to the measurement. This does not require internal states or processing; calibration arises from making your perceptual measurement with a ruler marked off in action-relevant units (these could be body scale, or effort, etc; in other words, the units are the organism's complementary anchoring properties). The intuition here is simple: you can measure the same amount of space with a ruler marked up in, say, inches or centimetres - it's the same amount of space, but the numbers that come out are quite different. If you instead measured that space using, say, the length of your arm as a unit, you would have a number whose units tell you something directly about your ability to cross that space with your arm; this is useful for, say, reaching to grasp an object. You can directly measure all kinds of things if you have a measuring device marked up (i.e. calibrated) in the appropriate way: one example is the polar planimeter, which directly measures area (Runeson, 1977).
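The calibration point is just arithmetic, so here it is as arithmetic (our toy example; the distances are invented). The same gap is measured twice; only the body-scaled number wears its action-relevance on its sleeve, because reachability is simply "is it less than one arm-length?":

```python
def in_units(distance_m, unit_m):
    """Re-express a distance using a different measurement unit."""
    return distance_m / unit_m

gap = 0.6   # metres to the coffee cup (illustrative value)
arm = 0.7   # this perceiver's arm length in metres (illustrative value)

print(in_units(gap, 0.01))   # ~60 centimetres: an action-neutral number
print(in_units(gap, arm))    # ~0.857 arm-lengths: < 1, so reachable
```

Nothing was computed over and above the measurement itself; the "knowledge" about reachability lives in the choice of ruler, which is exactly the polar planimeter trick.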
Dynamical systems
The hypothesis that we have embodied, extended minds which rely on perception to establish the required couplings means that cognition is a complex, nonlinear dynamical system. Dynamics is the mathematical language of change over time, and provides just the right formal tools to model the kinds of systems we are. An excellent example of using dynamics to model a perception-action system is Bingham's model of coordinated rhythmic movement; this simple task is an excellent model and test bed for the ideas laid out so far.
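To give a flavour of what a dynamical model of this task looks like, here is a sketch (ours) of the older Haken-Kelso-Bunz relative-phase equation, the description Bingham's perception-grounded model is designed to improve on; the parameter values are arbitrary illustrative choices. Relative phase phi between the two limbs evolves as dphi/dt = -a*sin(phi) - 2b*sin(2*phi), which has attractors only at 0 degrees (in-phase) and 180 degrees (anti-phase), with the latter losing stability as the coupling term b shrinks:

```python
import math

def settle(phi0, a, b, dt=0.01, steps=20000):
    """Euler-integrate relative phase until it settles; return the end state."""
    phi = phi0
    for _ in range(steps):
        phi += dt * (-a * math.sin(phi) - 2 * b * math.sin(2 * phi))
    return phi

# Start slightly off anti-phase (180 degrees). With b large relative to a,
# anti-phase is stable; with b small it collapses to in-phase (0 degrees) --
# the classic phase transition seen as movement frequency rises.
print(round(settle(math.pi - 0.1, a=1.0, b=1.0), 2))   # stays near pi (~3.14)
print(round(settle(math.pi - 0.1, a=1.0, b=0.1), 2))   # snaps to 0 (in-phase)
```

The model describes the observed stabilities and the transition; what it doesn't do is say where those stabilities come from, which is exactly the gap Bingham's perception-action model is meant to fill.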
Dynamical systems theory is not, in itself, a theory of behaviour; treating it as one is an error a lot of researchers make, and the data do not support them. It is, however, the right analytical tool for the job.
No mental representations
The big 'negative' thing we're going to insist on is that we're going to rule out computational mental representations as entities you can invoke in your explanations. The reason is fairly simple: there is no limit to what representations can do. Whenever you come across a problem in your explanation, say a potential lack of perceptual access to some required information, you can simply claim that the gap in your explanation is filled by a representation that has just the right size and shape. Because they can be anything you need them to be, they cease to have any explanatory power.
We've been critiquing representations since day one; Sabrina has summarised a lot of the issues here. Because representations aren't good explanations, and because when you embrace embodiment they tend to become unnecessary, we are strong advocates of the 'radical' hypothesis that we do not trade in mental representations.
Cognition does not have to be representational: the standard cry of 'what else could it be?' has been answered. van Gelder (1995) described a device, the Watt steam governor, that does a job which today would typically be solved using an algorithm, and does it efficiently, reliably and stably in the face of noise and perturbations. The moral is simple: while there is an algorithm that describes the solution, it's not how the device actually works, nor would the device work well if it did. Another useful device for metaphors is the polar planimeter (Runeson, 1977); this device directly measures the 'higher order' variable area, without implementing the algorithm of 'two measurements of length combined via multiplication'. The most recent defence of a non-representational approach is Chemero's book, Radical Embodied Cognitive Science. So while you may not agree with the idea, a non-representational cognitive science is, at least, a viable option, and we believe one justified by taking embodiment seriously.
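Since the governor does so much argumentative work here, a deliberately minimal caricature of it as two coupled differential equations (our construction; van Gelder's own treatment is richer, and all the constants below are arbitrary). The arm angle rises with engine speed, the throttle closes as the arm rises, and at no point does any variable re-present "current speed" to a comparator:

```python
def governor(load, dt=0.01, steps=5000, power=1.0, gain=1.0, tau=0.5):
    """Euler-integrate engine speed w coupled to governor arm angle theta."""
    w, theta = 0.0, 0.0
    for _ in range(steps):
        dw = power * (1.0 - theta) - load   # torque falls as the arm rises
        dtheta = (gain * w - theta) / tau   # arm angle tracks engine speed
        w += dt * dw
        theta += dt * dtheta
    return w

# The speed the device settles at under different loads: it regulates
# continuously, never executing a 'measure speed, compare to target,
# adjust valve' algorithm, even though one would describe its behaviour.
print(round(governor(load=0.5), 3))   # settles near 0.5
print(round(governor(load=0.2), 3))   # settles near 0.8
```

The algorithmic description is a true description of what the system achieves; it is just not a description of any mechanism inside it, which is van Gelder's point.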
(I would love to see an inventory of all the representations that have ever been invoked in cognitive science; I would guess that it is a tangled and incoherent mess of things designed to just fix that one problem.)
What about the brain?
The brain is clearly important; it's just not representing anything. Rather, the brain is in a constant state of change in response to its environment. I'm inclined right now to treat it as the fast-responding resource which coordinates the assembly of task specific devices, as well as a system that can implement embodied solutions to computational problems.
The most promising approach to neuroscience I've seen recently is that described in Olaf Sporns' book Networks of the Brain (Sporns, 2010). In this book, Sporns describes how the mathematics of networks is being applied to neuroscience datasets to uncover structure, extended over both space and time, within the endless modelling and remodelling of neural connections. This seems to me to be the right toolset for neuroscience; combined with our radical, embodied cognitive science it could be a powerful approach, and we're waiting to hear about funding for a project to set this idea in motion.
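A toy of the kind of measure this toolkit provides (our illustration, not Sporns' data): one signature reported for cortical networks is "small-world" structure, where a few long-range connections added to a locally clustered network dramatically shorten average path length. That's easy to see on a ring lattice:

```python
from collections import deque

def ring_lattice(n, k):
    """Undirected ring: each node linked to its k nearest neighbours per side."""
    return {i: {(i + d) % n for d in range(-k, k + 1) if d != 0}
            for i in range(n)}

def add_edge(g, a, b):
    g[a].add(b)
    g[b].add(a)

def avg_path_length(g):
    """Mean shortest-path length over all node pairs (BFS from each node)."""
    total, pairs = 0, 0
    for src in g:
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in g[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

lattice = ring_lattice(30, 2)
small_world = ring_lattice(30, 2)
for a, b in [(0, 15), (5, 20), (10, 25)]:   # three long-range shortcuts
    add_edge(small_world, a, b)

print(round(avg_path_length(lattice), 2))      # ~4.1: long paths round the ring
print(round(avg_path_length(small_world), 2))  # markedly shorter
```

Applied to connectivity data rather than toy rings, metrics like this are how network neuroscience characterises what the brain's wiring is organised to do, without ever asking what any node 'represents'.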
Summary
There's a lot of work to do. But these are our core theoretical commitments: when we try to explain our data, we must characterise the task resources (which can include the brain, body and environment) as well as the information supporting the coupling of these resources into task specific devices which solve the current task. The tools exist, and there are plenty of problems to study. Psychologists will need to get a little better at physics, biology and maths, and we'll need help from experts in these fields. But I truly believe that taking this strong theoretical stance will allow psychology to apply itself to a coordinated programme of research that, right or wrong, will produce a wealth of data and drive our understanding forward. After all, that's what a theory is for.
References
Adams, F., & Aizawa, K. (2010). The Bounds of Cognition. Malden, MA: Wiley-Blackwell.
Barrett, L. (2011). Beyond the Brain: How body and environment shape animal and human minds. Princeton, NJ: Princeton University Press.
Bingham, G. P. (1988). Task-specific devices and the perceptual bottleneck. Human Movement Science, 7(2-4), 225-264.
Chemero, A. (2009). Radical Embodied Cognitive Science. Cambridge, MA: MIT Press.
Clark, A., & Chalmers, D. (1998). The extended mind. Analysis, 58(1), 7-19. DOI: 10.1111/1467-8284.00096
Gibson, J. J. (1979). The Ecological Approach to Visual Perception. Boston: Houghton Mifflin.
Proffitt, D. R. (2008). An action-specific approach to spatial perception. In R. L. Klatzky, M. Behrmann, & B. MacWhinney (Eds.), Embodiment, Ego-Space, and Action (pp. 179-202). Mahwah, NJ: Erlbaum.
Runeson, S. (1977). On the possibility of "smart" perceptual mechanisms. Scandinavian Journal of Psychology, 18(1), 172-179. DOI: 10.1111/j.1467-9450.1977.tb00274.x
Sporns, O. (2010). Networks of the Brain. Cambridge, MA: MIT Press.
Turvey, M. T. (1992). Affordances and prospective control: An outline of the ontology. Ecological Psychology, 4(3), 173-187. DOI: 10.1207/s15326969eco0403_3
Turvey, M. T., Shaw, R. E., Reed, E. S., & Mace, W. M. (1981). Ecological laws of perceiving and acting: In reply to Fodor and Pylyshyn (1981). Cognition, 9(3), 237-304. DOI: 10.1016/0010-0277(81)90002-0
van Gelder, T. (1995). What might cognition be, if not computation? The Journal of Philosophy, 92(7), 345-381.
Warren, W. H. (1984). Perceiving affordances: Visual guidance of stair climbing. Journal of Experimental Psychology: Human Perception and Performance, 10(5), 683-703. DOI: 10.1037/0096-1523.10.5.683
Isn't the whole problem with the field of psychology that everyone takes a strong theoretical stance in advance of performing experiments? Exactly as the cartoon suggests...
The summary is a little bit too jargony for me to feel I have understood it. If you are saying "we think this is the best explanation for the observed data to date" and "we have made some major testable predictions which we are now in the process of testing" then great stuff! I look forward to seeing some results. If not then what are you saying? And why?
I'm saying that we have been talking about each of these various elements for some time on the blog now, and that if you are interested in knowing more about these elements, you can read the things we've already written. This post isn't meant to be read as a comprehensive summary of those posts, but as a set of pointers, a map. Having a map for people to what we've done frees us up to keep moving forward on some of the details.
Isn't the whole problem with the field of psychology that everyone takes a strong theoretical stance in advance of performing experiments? Exactly as the cartoon suggests...
The cartoon just made me laugh because it's how psychology has ended up with no central theory but 157 unrelated 'theories' from 152 different people*. And it's a reminder to us not to reinvent the wheel, but to integrate what exists.
I've only recently come across the blog, so this is particularly timely for me, thanks. I agree with most of what you're saying (just a few caveats here and there), and really laud the endeavour.
One question though, not about what you've included, but what you haven't. It's not terribly popular to raise the issue(s) of experience and meaning, but I wonder whether any genuine theory of psychology could hope to stand up without addressing them.
There are some resources within ecological psychology to address these issues to a degree (it might be possible to apply thinking about affordances and information, for instance), but because these more subjective aspects of mind are not a core part of the framework, is there a danger of them being left out?
The typical move in cognitive psychology has been to treat them as epiphenomenal, leave them as some output at the end of the assembly-line of perceptual processing that only arrives on the scene after all the real work is done. There are a number of counter-arguments to that view, but my favourite and I think the most striking is the placebo effect, where we see the meaning of the situation and a person's experience organising a whole slew of factors, affecting behaviour, biology and cognition.
As I mentioned, I think there are some resources available both within ecological psychology and also related approaches (I'm partial to the "enactive" view that Chemero deliberately leaves the door open to), but they don't make an appearance in your outline here.
Could you be a little more specific about what you mean by meaning and experience?
Meaning is built right into Gibson's theory. The whole point of direct perception is to have access to the meaning of the proximal stimuli (what's happening on the retina) without computation or representation. In the Turvey-Shaw-Mace (Turvey et al., 1981) approach, affordances are the meaningful world entities you want to know about; they lawfully interact with energy to produce information that is specific to them; you detect that information and thus perceive the affordance, and the access to meaning is underwritten by the fact that this was a law-based process. I spent some time on this here.
I'm only a little familiar with the enactive stuff. I worry it's just some wheel redescription, but I need to look into it more.
"Could you be a little more specific about what you mean by meaning and experience?"
I wish! I'm orbiting the hoary issues of subjectivity and consciousness, without wanting to use either term, given their strait-jacketing in recent Cognitive Science.
I haven't had a chance to read or digest the various links above properly, but your discussion with Eric Charles about what I think is basically the meaningfulness of two scuffed baseballs is close enough to work as an example. It raises the issue that two similar (if not identical) objects can be very differently meaningful, carry different interaction potentialities or likelihoods. I don't see this as something that would necessarily undermine your framework, but something rather important to human psychology that is missing from it. We might explain all of the lawfulness that underlies and makes cognition possible, but be left with almost no predictive or explanatory power with regards to behaviour. I think the specific issue of identifying individuals, which is what the discussion on the other post is about, is an instance of this larger issue. There is a deeply important aspect of individuality and history which isn't quite explicated in your framework so far (that I can see, as I've said, still digesting...).
"The whole point of direct perception is to have access to the meaning of the proximal stimuli..."
It is parsing sentences like this that I'm concerned about, as I'm not at all clear what is meant by "meaning" in this kind of case, and why it would inhere somehow in the proximal stimulus. It would seem to me that meaning is something relational. The meaningfulness of a situation changes as I change (in many different ways - as I learn, as I get hungry or tired, as I move) but can also change as the situation changes. As such it's not in the proximal stimulus, nor in the distal one, but in the relationship between me, the skilled agent, and the environment/situation. The proximal stimulus is just one of many mediating factors here, with no real reason (save old intuitions about where the environment ends and where "I" begin) to privilege it over others. (Harry Heft has a nice paper in press discussing this in the context of Holt's "recession of the stimulus" idea).
Coming to terms with describing and explaining the dynamics of meaningful behaviour would seem to me to require a perspective on things that puts this relational character of action centre-stage (or at least acknowledge it!). It's one of the things that makes the idea of affordances as relations appealing to me. I took Chemero's work as offering some hope on that front, but given that you want to rule out affordances as relations I think you'll need something else that will do the same work. (Here again, I must hold up my hands and ask for patience and teaching, as I'm only getting into this debate...)
Marek,
Historically there is much interest in both meaning and experience in the Ecological approach. Gibson was certainly on good terms with the prominent phenomenologists of his day (at least those in psychology), and was quite concerned with the issues you raise. Alas, they are hard issues to deal with. Probably the best major works dealing with these issues that have come out of Eco Psych are Ed Reed's books.
Does Heft have something in press I don't know about, or would you be referring to his chapter that recently came out here? I wouldn't want people to wait if they didn't have to ;- )
P.S. Talk of "proximate stimuli" will likely just confuse any of these issues. There is no proper "meaning of the proximate stimuli" to talk about, that's part of the problem with most traditional approaches, though I know what Andrew is trying to get at.
"It would seem to me that meaning is something relational... As such it's not in the proximal stimulus, nor in the distal one, but in the relationship between me, the skilled agent, and the environment/situation."
I was a little imprecise, sorry.
The reason I mentioned the proximal stimulus is that all the organism has access to is information. You don't get to 'peer behind the curtain' and see where that information came from. So meaning has to be in that stimulus, in the information, otherwise the organism would have no access to it.
But of course meaning doesn't come from the information; it comes from the world. Information contains meaning because it is (lawfully) related to the meaningful property of the world, the affordance. The meaning here is about ability to act. Meaning is able to be 'transmitted' up this line (from world to information to organism) because the process is underwritten by a law (if you're Turvey et al) or because the process is embedded in a situation (if you're Chemero citing Barwise & Perry). I'm as yet unconvinced by the latter, but only 'as yet'. Chemero Chapter 6 is where this all comes up in more detail, which I think you've been reading.
I think a couple of points are important to make. First, you have to keep world (affordances) and information separate in your theorising; they are related but not identical and they do different work in the theory. Second, information is where you look for any difference in behaviour of an organism. If two 'nearly identical' objects produce very different behaviour, then they are not, as far as the organism is concerned, nearly identical. The first person perspective is critical.
A related note: Affordances are not relations; we enter into a relation to them when we perceive them. Every time you need a relation, it's information and perception, not affordances, that provide it.
Andrew -
Surprisingly (to me anyway), your use of "meaning" coincides with my view of linguistic "meaning"; viz, that a speaker's intent in issuing an utterance is to elicit a reaction by a hearer, and the meaning of the utterance is the intended reaction. (My impression is that this is not one of the conventional views of linguistic meaning, but I have recently encountered a couple of people implicitly expressing it.)
My parsing of your last comment is that an organism has access to the information content of a stimulus caused by an affordance, and that information conveys "meaning", a property of an affordance. It wasn't clear to me, but I'm guessing that the property - the "meaning" contained in the information - is the action required to actualize the affordance (or more accurately, information which individuates the action). Therefore, an affordance can be viewed as a "command to actualize" - a virtual speech event the meaning of which is the "intended" actualizing action. (So, an affordance exhibits intentionality! Who would have guessed?)
Conversely, presumably eco-psych folks view a speech event as an affordance - although as usual, the requirement of immediate actualization seems problematic because of the discrete vs continuous issue re speech that Sabrina and I addressed in the comments to her recent post. I can buy interpretation of speech as to some extent continuous, but meaning is a property only of discrete chunks - arguably sentences - which would seem to make this argument inapplicable.
Hi Charles,
I'm jumping in here because you are touching on an issue I've been thinking about quite a bit lately: what is the process of meaning acquisition both for typical perceptual information and for language? Although in typical eco psych cases, the meaning of perceptual information is lawfully related to the world, organisms still have to spend some time learning this meaning (the affordance). One way that this might occur is via other things that the organism does or has happen to it while perceiving a particular information variable. E.g., a person who uses linear optical trajectory in the context of a fly ball will receive information from other sources like the position of his or her body and ultimately whether he or she succeeds in catching the ball. This learning process has to occur before linear optical trajectory means anything at all for that individual.
I see much the same kind of process happening for language. Speech events are a type of linguistic perceptual information. And, the process of learning about what certain speech events mean will depend on other perceptual information experienced at the same time (e.g., establishing joint reference on an object).
These two cases differ in that the meaning of LOT is lawfully related to the world while the meaning of speech events is not. There are lots of things you might predict on the basis of this difference in terms of learning rate and meaning stability. But, it seems to me that the same basic process would apply to both cases.
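The fly-ball case can be made concrete. Linear optical trajectory is a two-dimensional strategy, but its one-dimensional relative, Chapman's (1968) optical acceleration, is the easiest version to write down: for a fielder standing still, the tangent of the ball's optical elevation angle rises at a constant rate only when the ball will land exactly on the fielder; it accelerates when the ball will drop behind them and decelerates when it will fall short. The sketch below (all numbers and the function name are illustrative, not from the post) checks that sign numerically for a simple projectile:

```python
def optical_acceleration(lands_beyond, x_f=40.0, v_y=20.0, dt=0.01, g=9.81):
    """Mean optical acceleration d^2 tan(alpha)/dt^2 seen by a stationary
    fielder at x_f, watching a fly ball launched from the origin that will
    land 10 m beyond (lands_beyond=True) or 10 m short of the fielder."""
    t_land = 2.0 * v_y / g                        # total flight time of the ball
    landing_x = x_f + (10.0 if lands_beyond else -10.0)
    v_x = landing_x / t_land                      # horizontal speed to hit that spot
    samples = []
    t = dt
    while t < 0.6 * t_land:                       # early flight: ball still in front
        h = v_y * t - 0.5 * g * t * t             # ball height
        gap = x_f - v_x * t                       # horizontal distance to fielder
        samples.append(h / gap)                   # tan of the optical elevation angle
        t += dt
    # second central differences estimate d^2 tan(alpha)/dt^2
    acc = [(samples[i + 1] - 2.0 * samples[i] + samples[i - 1]) / dt ** 2
           for i in range(1, len(samples) - 1)]
    return sum(acc) / len(acc)
```

A ball that will drop behind the fielder yields a positive value and one that will fall short yields a negative value, so the sign of this single optical variable is information whose meaning ("run back" vs. "run in") the individual still has to learn, which is the point being made above.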
Sabrina -
Since in my conceptual model the input to the system is "downstream" - presumably somewhere in the brain - I don't distinguish sensory modes. So, I envision the procedure as being precisely the same, at least at the high level at which I have to think about it. Ie, just as visual network(s) learn by trial and error to respond to sensory input from the eyes (a compressed evolutionary process that results in new or modified neural "circuits"), so also aural network(s) learn by trial and error to respond to sensory input from the ears. And as you suggest, specific responses in either case depend on other inputs, ie, on context. From which it follows that the "meaning" of the input (ie, the response intended by the input's source) also depends on context.
For example, the meaning of the query "Does this make me look fat?" coming from someone stuffing towels into the front of a Santa suit and the meaning of the same query coming from a spouse trying on a pair of slacks are clearly quite different. Similarly, the "meaning" of the affordance offered by an approaching fly ball and that offered by an approaching bean ball are different. And in each case, it is critical that one learn to distinguish the context-dependent meanings and to respond accordingly.
I'm not sure what to think about the "lawful" issue raised in your last paragraph. On the one hand, it "feels" right, but on the other hand it's inconsistent with an attempt to unify all sensory response. I'm reminded of Davidson's anomalous monism, of which I'm skeptical (not the monism, of course, but that it's anomalous). I assume everything is "lawful", even if we have little hope of ever fully grasping some of the laws.
Hi Charles,
About the "lawful" bit - Andrew and I went over this very issue at length last night and we made enough headway that this will be the subject of a post very soon.
I totally agree that the brain does not represent or refer. I believe I see how it works, but it does not relate to any prior science that I am aware of, so how do I present my idea if not as a free-standing theory?
It is that, as some believe, the sole purpose of the brain is to support movement, and that this is all that is needed for humans to think. The unconscious, trained movements of the vocal-system muscles produce very precise audio feedback, allowing more highly trained vocalizations. Word choice is unconscious, determined by the peer pressure of one's culture, and not by a word's conscious meaning.
Additionally, since words do not represent in the brain, they also do not represent entities in the "real world", and thus do not possess volitional power, or intentionality. This invalidates the verb as performing actions. Instead, the subject, verb, and object are selected to culturally fit, with no regard to real world fit.
I would like to know how to present these ideas other than as a free-standing theory, and would appreciate any suggestions!
Thanks, donwilhelm3@yahoo.com
While I realize that a whole new theory can be perceived as though just an addition to the hundreds of other theories, the precise problem you have in coming up with one theory is that it too will, of course, be just another of many. In reading the posts here I see that the particular integration you are working at is primarily a cognitive and biological/ecological model. The theory that I posited in 2000 was an integration of psychoanalytic and systems theories that took humanistic psychology and cognitive theory into account. At this time in my life I would be unable to fully recognize all the theories from cognitive science (which is what I think you're trying to do), but what makes my theory truly unique is that it explains how intrapsychic dynamics and intrapsychic conflicts influence, and are balanced by, interpersonal relationships and the environment. No other psychological theory has done that. My theory does, however, assume some of the things you all are discussing in your posts (sort of like assuming that a falling tree does, indeed, make a noise, even if no one is there to hear it). That's a matter of levels of thought, I suppose. In fact, when I wrote an article about how my model could lend itself to a computer simulation of emotions, my program would have required many sensing devices to provide input as well as many modulating devices to adequately express the emotion formulated by the program. It seems most of the posts here include sensing and expressing as part of the theory of psychology, whereas my theory only covered the ongoing formulation of emotions based on need states and balance within a perceived environment. Since no one had clearly posited a theory that accounted for the way in which intrapsychic phenomena influenced interpersonal phenomena, my theory is especially useful. 
In fact, in the original 2000 text, I also showed how the theory specifically explained all the personality disorders in clear diagrammatic form that was specifically related to the schematic model of the theory. I have reviewed some of the theories offered above in the original post and the thing that appears to be clearly missing is the dynamic way in which a personality works. Each of the theories seems to suggest a number of packets that are addressed, but that don't really work together. A flow chart does not make an adequate theory because an adequate theory must be able to account for an almost infinite number of influences at once. Only a dynamic theory can so account, and does so by symbolically representing need states that cause tensions that must be resolved or maintained by some kind of balance, or break due to too much pressure. Anyway, I do believe that my theory is the one theory. Arrogant, I know, but if you knew me you'd know arrogance is the opposite of my style. I just managed to come up with a theory that integrated psychoanalytic and systems thinking because I knew the two must be compatible even though the two theoretical camps had always done as much as they possibly could to avoid each other. As I mentioned in response to your original post, there is a pared down version of my theory on my website. Of course, the original publication of the theory, one that does in fact cite all influences, can be found in my book, "The Therapist's Use of Self in Family Therapy" (Aronson, 2000). The new and scaled down version can be found at the link below. I hope some of you will check it out and try to see if, as I am asserting, it really does unify psychology in a way that is truly helpful (and maybe even if, by extrapolation, it can account for the cognitive science aspects to psychology that have been outlined in most of the posts above).
The Relational Systems Model by Dr. Dan Bochner
http://www.drbochner.com/articles_for_families/from_id_to_family_system_or_the_id_is_the_engine_in_the_great_life_machine
So a couple of you have suggested that you have theories ready to go for psychology. I've been thinking about these, and my problem is this: your theories don't connect in meaningful ways to previous work and other authors. Each suggested system stands alone, and it aims to explain things like consciousness, emotion, etc.
I think theories like this miss the point. They grant the reality of several problematic theoretical entities, they don't relate to other literature or disciplines such as biology, and frankly they're often filled with fringe concepts anyway.
I think a theory of psychology has to be rooted in evolution and biology. It has to talk about physics and the environment we inhabit. I think it has to begin with a careful reanalysis of what it is we're trying to explain, and I think if it can't talk to other work then it's part of the problem, and not a solution.
Gibson started over but didn't start isolated from all other work. I guess, come back in 30 years; if you've convinced anyone else to do empirical work on your theory and it's anywhere near as successful as Gibson's, then we'll talk.
I guess, perhaps, the new schism in psychology is between the cognitive processing and understanding of proximal and distal stimuli on the one hand and the processing and understanding of relational dynamics on the other. My theory was an integration of psychoanalytic, systems and humanistic works - not a bunch of "fringe concepts" as you put it. But it is true that the relational concepts used by us psychotherapists have never very usefully lent themselves to empirical proof. I'm afraid that the most useful concepts for psychotherapists continue to be less empirically proven. Even the cognitive models of psychotherapy work on how one thinks about relationships and one's place among others. Forgive me if I am underestimating the strength of the relational parts of the models you have been discussing. To me, meaning is in relation to others and the environment. Maybe because I make that assumption, I am destined to a limited understanding of cognitive/ecological/biological models. I do think psychodynamic and systems models must be fully considered if any unified theory will be useful (psychoanalytic models go back almost 100 years and systems models more than 50 years, right?). Cognitive science is the new stuff. Maybe it has something to learn from the old.
Maybe, if psychologists were to start to have a real education in hard science (physics, mathematics, electronics, chemistry and thermodynamics), they might approach psychology more from an experimental point of view and try to form a good working model for psychotherapy from observations.
A model able to explain why people feel bad, depressed and how to help them feel much better in a short time.
Hypnosis and EMDR combined can be a really effective help for depressed people... And why?
Because an accumulation of negative feelings is what causes unwell-being in people...
And anything done frequently and with enough feeling or sensation can become a personality trait, even a symptom.