
Friday, 4 December 2015

Oh crap. Re-thinking van Gelder (A purple peril)

I have this problem where I like pretty much everything William Bechtel writes except when it pertains to cognitive science. It's annoying because, even when I disagree with him, I think he's worth taking seriously. This was on my mind when I started reading his 1998 paper, "Representations and cognitive explanations: Assessing the dynamicist's challenge in cognitive science."


In this paper, Bechtel critiques van Gelder's construal of the Watt governor as an exemplar of how cognition could function without representations. I have always liked van Gelder's argument and wrote about it years ago in very favorable terms. I wouldn't say that I don't like it anymore, but Bechtel may have convinced me that one of van Gelder's central claims - that representations don't feature in explanations of the Watt governor - is incorrect. Does this mean that I now think that cognition requires internal representations? No! But, the language Bechtel uses to assess van Gelder's arguments sounded eerily familiar and I realized that the criteria used to label something as a representation are met by...ecological information.

Wait, what? Did I just say that? Yes, let me explain why.

Bechtel summarizes the basic function of representations as follows:
"First, just as in the case of the Watt governor, we need to appeal to such representations to understand how basic cognitive systems, such as the visual system, coordinate their behaviors with their environments" (p. 306).

Andrew and I have spent the last 5 years on this blog arguing that ecological information performs this function for perceiving-acting organisms. This means that ecological information solves the major problem that representations were meant to solve for cognition. But this doesn't mean that ecological information does this by being a type of representation (and even if it is a representation, it certainly isn't an internal cognitive representation because ecological information is a real thing out in the world). 

But, listen to this. This is how Bechtel, referencing Newell, defines a representation:
"Designation: An entity X designates an entity Y relative to a process P, if, when P takes X as input, its behavior depends on Y.
There are two keys to this definition: First, the concept is grounded in the behavior of a process. Thus, the implications of designation will depend on the nature of this process. Second, there is action at a distance . . . This is the symbolic aspect, that having X (the symbol) is tantamount to having Y (the thing designated) for the purposes of process P (Newell, 1980, p. 156)."
Furthermore, Bechtel goes on to cite van Gelder's own minimal construal of a representation:
"Any “reasonable characterization” of representation, he says, will be “based around a core idea of some state of a system which, by virtue of some general representational scheme, stands in for some further state of affairs, thereby enabling the system to behave appropriately with respect to that state of affairs” (van Gelder, 1995, p. 35 1)." 
Ecological information specifies dynamic events in the world without being identical to those events (kinematic specification of dynamics). This satisfies Newell's definition. 

This specification enables organisms to coordinate their behavior with events in the world, as if they had access to the events themselves. Functional behavior wouldn't follow from using ecological information without the "representational scheme" linking information to the world. This satisfies van Gelder.
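To make this concrete, here is a minimal toy sketch (in Python; the variables, numbers, and function names are invented for illustration and are not a model of any real perceptual system). A dynamic property of the world Y (the mass of a lifted box) lawfully structures a kinematic pattern X (a made-up "peak lift velocity"), and a process P only ever receives X, yet its behavior ends up depending on Y. That is Newell's designation criterion, with X sitting entirely outside the perceiver.

```python
# Toy illustration of Newell's designation criterion:
# X designates Y relative to a process P if, when P takes X as input,
# P's behavior depends on Y. All names and numbers here are invented.

def kinematic_pattern(box_mass_kg: float) -> float:
    """Lawful (toy) projection from dynamics (mass) to kinematics.

    Heavier boxes are lifted more slowly; the pattern X is specific to,
    but not identical with, the dynamic property Y (the mass).
    """
    return 1.0 / (1.0 + box_mass_kg)  # pretend "peak lift velocity"

def observer_grip_force(peak_lift_velocity: float) -> float:
    """Process P: prepares a grip force using only the kinematic pattern X."""
    return 20.0 / peak_lift_velocity  # toy control law, arbitrary units

if __name__ == "__main__":
    for mass in [1.0, 5.0, 10.0]:       # the distal property Y
        x = kinematic_pattern(mass)     # the information X (external to P)
        force = observer_grip_force(x)  # P's behavior, driven only by X
        # P never touches the mass directly, yet its output covaries with it
        print(f"mass={mass:4.1f} kg -> X={x:.3f} -> prepared force={force:5.1f}")
```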

This is wonderfully subversive. If this argument holds up (a big 'if' as this idea just occurred to me today), it would suggest that representations are central to cognition, but that these representations are completely external to perceiving-acting organisms. 

What's fun about this idea is that it slots in nicely with some very old school arguments for why cognition must be representational. 

For example, here is the classical breakdown of the three levels of a computational cognitive system (taken from Pylyshyn, 1989):
  1. Semantics: the level at which we explain why people do things by appealing to their goals and intentions
  2. Symbols: semantic knowledge is encoded in symbolic expressions. The expressions have parts which also have semantic content. This is the level at which symbolic expressions are manipulated
  3. Biology: For people, all of this must be implemented in a biological system
If we assume that ecological information represents the world, then we have the following three levels, which (mostly) parallel the ones above:
  1. World: the level at which we characterize whether behavior is functional by appealing to goals and intention defined with respect to objects and events in the world
  2. Information: objects and events in the world create patterns in energy distributions that stand in for these objects and events. Information is specific to, but not identical to, the objects and events. Multiple properties in a single event (a person lifting a light box and pretending it's heavy) can create independent kinematic patterns, uniquely specifying both properties (that the person is lifting a light box and that the person is trying to deceive you; Runeson & Frykholm, 1983); a toy sketch of this point follows the list
  3. Biology: patterns in energy distributions cause changes to nervous system activity that has functional consequences on behavior in virtue of the lawful connection between information and the world
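Here is that toy sketch of level 2 (again in Python, with invented variables and equations rather than Runeson and Frykholm's actual measures). The point is only that one lifting event can project into more than one kinematic pattern, so that both the true mass and the attempt to fake a heavier load are, in principle, recoverable from the kinematics alone.

```python
# Toy sketch of kinematic specification of dynamics with two properties
# carried by one event (after Runeson & Frykholm, 1983). The specific
# variables and equations are invented for illustration only.

def lift_kinematics(true_mass: float, pretended_mass: float) -> dict:
    """One lifting event projects into two (toy) kinematic patterns."""
    return {
        # pattern 1: tracks the dynamics of the actual load
        "lift_velocity": 1.0 / (1.0 + true_mass),
        # pattern 2: tracks the exaggerated preparatory effort of faking
        "preparatory_lean": 0.1 * pretended_mass,
    }

def perceived_mass(patterns: dict) -> float:
    """Recover the actual mass from the velocity pattern alone."""
    return 1.0 / patterns["lift_velocity"] - 1.0

def perceived_deception(patterns: dict) -> bool:
    """Deception is specified by a mismatch between the two patterns."""
    mass_implied_by_effort = patterns["preparatory_lean"] / 0.1
    return mass_implied_by_effort > perceived_mass(patterns) + 1.0

if __name__ == "__main__":
    honest = lift_kinematics(true_mass=2.0, pretended_mass=2.0)
    faking = lift_kinematics(true_mass=2.0, pretended_mass=15.0)
    print(perceived_mass(honest), perceived_deception(honest))  # ~2.0 False
    print(perceived_mass(faking), perceived_deception(faking))  # ~2.0 True
```

The only thing doing representational work in this sketch is the lawful projection from dynamics to kinematics; the "perceiving" functions just read patterns that already exist outside the observer.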
Pylyshyn thinks we need cognition to be a variety of computation in order to connect psychological activity with the physical world:
“If the knowledge-level description is correct, then we have to explain how it is possible for a physical system, like a human being, to behave in ways that correspond to the knowledge-level principles while at the same time being governed by physical laws…At present there is only one candidate explanation for how knowledge-level principles can be causally realized, and that is the one that builds on the set of ideas going back to the insights of Boole, Hilbert, Turing, Frege, and other logicians. It says that knowledge is encoded by a system of symbolic codes, which themselves are physically realized, and that it is the physical properties of the codes that cause the behaviors in question” (p. 61)
The idea of ecological information acting as a type of external representation also slots nicely into this idea. Working through the quote above, ecological information can explain how a physical system can behave functionally (i.e., at the knowledge level) and the way it does so is consistent with physical laws. Properties of the world (including exciting social properties) create patterns in energy distributions - this serves the function of encoding knowledge in symbolic codes that are physically realized. The physical realization of information (structure in energy) causes changes in nervous system activity, which consequently affects behavior.

To be honest, I have no idea what to make of this argument at the moment. I'm not jumping off the wagon and saying that cognition is computation. But, I do think that ecological information meets some minimal definition of a representation. And, this opens up the possibility that ecological information might fit into discussions about what general problems representations are meant to solve. For some of the early work, a key problem representations were addressing was how physical systems like people could behave functionally with respect to the world. 

Where the classical account really collapsed was in trying to explain how the symbols (which it's now more fashionable to refer to as representations) get their content. I'm not going to get into this massive debate at the moment, but suffice it to say, this hasn't been an easy problem, and recent attempts to ground the symbols in perception haven't fully addressed it. This problem disappears if representations are external and based on lawful interactions between energy distributions and physical objects and events. Ecological information is meaningful because it is specific to particular physical activities in the world. 

Another major challenge for the classical account is mapping representations onto neural activity. Everyone assumes that representations are instantiated in the brain, but there is no clear way to identify representations with particular neural activity (see this post for more discussion of this idea). Finding the neural correlates of something isn't the same as knowing the physical makeup and extent of a particular representation. Construing ecological information as a form of external representation solves this problem. Information variables can be precisely defined and identified with specific structures in energy. We can independently verify whether an information variable is present. And, we can independently verify how that information variable corresponds to the world. 

All of this said, proponents of the classical account wouldn't be happy with the idea that external representations in the form of ecological information could support our behavior. The main reason is that all the criticisms of ecological information from Fodor and Pylyshyn (1981) still apply. I think those arguments can be refuted, but this is for another day!

Fair warning, I might be back tomorrow to denounce all of this...


Bechtel, W. (1998). Representations and cognitive explanations: Assessing the dynamicist's challenge in cognitive science. Cognitive Science, 22(3), 295-318.

Fodor, J. A., & Pylyshyn, Z. W. (1981). How direct is visual perception?: Some reflections on Gibson's “ecological approach”. Cognition9(2), 139-196.

Runeson, S., & Frykholm, G. (1983). Kinematic specification of dynamics as an informational basis for person-and-action perception: expectation, gender recognition, and deceptive intention. Journal of Experimental Psychology: General112(4), 585.

Van Gelder, T. (1995). What might cognition be, if not computation? The Journal of Philosophy, 92(7), 345-381.

10 comments:

  1. Sabrina, I can't begin to explain how pleased I was to read this post. Together with the previous one, it looks like it could be a very significant step forward.
    At the very least, it managed to pull me out of lurking mode: believe it or not, you and Andrew have been my window into RE for quite some time now. I find your work refreshing, especially because you consistently refuse to stop thinking and start following an already established route instead. More than that, I appreciate the efforts you make in trying to make your reasoning known and understood. That's true even if, or maybe because, I do frequently disagree with you, especially about representations.

    This comment is divided into two parts because of comment-length limits; I have to apologise for my verbosity!
    It seems to me that you may be getting very close to how I understand representations, which would eliminate my main worries about your position, so I think it's worth trying to make sure I understand what you are thinking (I may be badly wrong).
    Before symbolically(!) uncorking the bubbly, I'll have to ask a question, followed with a long explanation.

    PART 1:
    What exactly do you mean with "representations are central to cognition, but [...] these representations are completely external to perceiving-acting organisms"? Specifically, what do you mean with "completely external"?

    To understand why I'm asking, I will need to spell out how I understand the minimal definition of representation; an understanding that I think follows Bechtel's and van Gelder's closely. If my mapping doesn't make your skin crawl, I will be very happy, but please don't refrain from expressing any disagreement!

    So, minimal definition of representation. I'll follow a Shannon-like story, please hold your breath and bear with me. Say we have an old-fashioned analogue intercom: a mic on one side, a cable, a speaker on the other side. You talk in the mic, this generates voltage modulations that travel on the wire and reproduce sound waves via the speaker. These (output) sound waves are not the same as the ones you produced, but they are similar enough for me to understand what you say. [It may be important to note that what you say is potentially important to me (ecologically significant?).] One way to describe the voltage modulations in the wire is that they carry the (Shannon) information needed to reproduce the sound waves with "good enough" fidelity - very few would deny that a signal is transmitted across the wire, after all.

    This information only becomes semantic/meaningful when I hear it. The crucial step is that because of the physical structure of mic and wire, there is a lawful correspondence between the sound waves you've produced and many (not all) characteristics of the voltage modulations in the wire. Thus, following Bechtel, process P is the functionality of the intercom (reproducing your voice), entity X corresponds to the voltage changes in the wire, and entity Y is the sound you've produced. In van Gelder's terms, the "general representational scheme" is provided by the lawful (strictly physical/mechanical) ways in which sound waves are transformed into electrical signals. Consequently, relative to P (intercom functionality), the voltage modulations are a representation of your voice.

    This looks straightforward to me. Are you still with me?

    1. Yes, I'm still with you and I agree with how you break down this analysis. It also nicely parallels the typical cognitive construal of a representation because the representational bit (the voltage modulations) is internal to the system and is the proximate cause of some observable output. I'm putting this comment here before reading what you say next to preserve my intellectual honesty!

  2. PART 2 (continues previous message):
    If you are still with me, and we are considering the intercom as our subject of study: how can we say that the representation is "completely external" to the intercom? This is where I lose you...
    One way to understand the "completely external" side is to consider the speaker our subject, and the process "just" the production of new sound waves. However, this would make the representation just *mostly* external to the speaker so I don't think it explains your position.

    You may ask why we would want to consider the intercom as our subject of study. We may, because what happens in it is entirely analogous to what happens between ears and brain: thus, we (well OK, "I") have to conclude that we (humans) contain representations of the sounds we "hear".

    I've tried to continue by producing a longer disquisition of the role of me, the listener, in the scenario above, but I don't think I can do it effectively here, especially considering the length limits, but also because the idea is still solidifying in my mind. The discussion would be about the role of an interpreter, and how even interpreters can function mechanically. The following step would be to see that what we perceive as our inner life only concerns what happens after a first (mechanical) interpretation, and that this is the reason why the idea that cognition relies on representations seems so intuitive.
    My aim would be to expose how misleading our intuitive grasp of representations is, along with why the intuition is so compelling. But before I can do this, I would like to better understand where you stand.

    The scenario I've described may help, because it allows us to answer via intermediate steps: where is the ecological information? Does the voltage modulation count as a representation? If it does, does it reside in the wire? Overall, I'd settle for agreeing that, in the case of the auditory system, the mechanism that produces the representations (and effectively specifies the "general representational scheme") is outside the brain (the cochlea), or am I completely off the mark (quite possible!)?

    Now, I agree with you both when you say that representations intended in the classical way (a system to fill in the gaps in perception) are not the correct way to start thinking about cognition; I'm asking these questions because I agree with this! Where I think we may disagree is on what counts as a representation.

    Sorry for the length!

  3. I see where you're going here. Yes, my move was simply to move the subject, in a sense. I did this purely because it struck me that ecological information met the basic definition of a representation so I wanted to follow this idea to see where it would lead.
    Going back to your example, the bit I was calling a representation was the unacknowledged pattern in sound waves that links the movements of the speaker's vocal tract to the microphone. If these patterns are specific to, but not identical to, those movements, then they represent those movements. Now, I think this is true of ecological information whether or not it helps us understand or explain anything.

  4. You asked about the quote: "it would suggest that representations are central to cognition, but that these representations are completely external to perceiving-acting organisms"
    For the first bit (representations being essential to cognition) I meant that we need some way to make "psychological contact" with distal objects and events in the environment. The basic definition of representation, I think, does a nice job of capturing how this contact is possible. The classical account is that these representations are in the head and are built up from inadequate sensory stimulation (e.g., Gregory, 1968). This is the primary thing that I object to in the classical view because I think Gibson does a good job of demonstrating that ecological information provides rich, accurate, and real-time information about the environment.
    In the next part of the quote, I claim that if ecological information is a type of representation, then it's "completely external" to the system. I mean this in the sense that the patterns in energy distributions that define ecological information are external to the organism. This is not to say that these external representations don't then cause changes internal to the system. They clearly do. This is where my idea meets up with your intercom example. The question is, do I want to call the stuff that happens in our brains as a result of contact with ecological information a representation?

  5. This is tricky, and not just because I've become accustomed to avoiding the word representation! Let's say for now that I would call the resulting neural activity a representation (I'm not saying I'll stick with this, but let's see where it goes...). First of all, as you mention, these representations are already different from the classical view, because they would not need to be built up from inferences based on inadequate sensory input. They would come into the system already specifying meaningful aspects of the environment. So, this would already dramatically change our job description for the brain - it is not so much an inference or pattern-completion machine as a coordinating machine.
    Ecological information works because there is higher order structure in energy distributions that is invariant over many types of transformation. This means that there is something stable for perceiving/acting systems to work with. Without this stability, we also wouldn't want to call ecological information a representation, because it would fail to reliably connect distal systems.
    Have to pause now...I'll pick this up later!

  6. (interim comment, it's still me but I'm not sure the system recognises my wordpress identity)
    Yes! So far I'm 100% onboard, but mind you, this does surprise me. ;-)
    This is also already useful for me, as it will help me write down the more ambitious discussion I was unable to complete the first time round.

  7. Sabrina,
    thanks again. The reason this has been useful is that you've clarified what I suspected but wasn't entirely sure about (I'm still not sure, TBH): one thing is being anti-representationalist in the sense of denying that the main business of brains is "filling in the gaps" (as in "[?perception is?] built up from inferences based on inadequate sensory input"); quite another is denying that incoming sensory signals that travel towards the brain (and within it) can be called "representations" of the original stimulus. If I understand you correctly, it's the "inadequate input" assumption that should be rejected, right? If so, we agree in full (so far).

    For me, this is a paramount distinction because it makes the following all-important steps possible:
    1. Brains don't build representations, they receive them!
    2. If there is some truth in the predictive/Bayesian brain idea, then it follows that there is also a "creative" side to perception, where some kind of filling in the gaps does occur, but it happens concurrently with the supposedly passive business of perceiving.
    Thus, there is no incompatibility between the various ideas. The reason the classic view failed isn't that it is fundamentally wrong: inference has to be part of the picture. Still, what the classic view doesn't expose is the fact that perception is inevitably tightly attuned to capturing information which is ecologically significant and already available out there, in the real world. This is why so many tasks which seem cognitively very expensive can, on the contrary, be solved by simple strategies: in looking for this kind of strategy, nothing helps more than the embodied perspective.

    If you are still with me, you may note that the outlook I'm sketching allows us to bring together a number of seemingly alternative approaches: classic cognitive science (after rejecting the "inadequate input" assumption), Radical Embodiment, the Bayesian brain and predictive coding, not to mention the Interface Theory of Perception. You will excuse me for feeling embarrassingly excited: the lack of unifying perspectives in the neuro/psycho sciences is the one thing that has always troubled me, and I think I'm catching a glimpse of how to break the deadlock...

    There is much more to be said (assuming you are still with me), but I guess it won't be easy to do it here given the space limitations. Anyway, I'm very much looking forward to hearing your reactions.

  8. Okay, I haven't forgotten about this debate. In fact, I found it so interesting, I wrote an 8000 word theoretical paper about it. I hope to be able to talk about it on the blog in the coming months!!

  9. Sabrina,
    I haven't caught your paper; is it too early to ask for a pointer?
    As for discussing it here, I was convinced I'd scared you off. Apart from my (not unreasonable, I think) fears, life is catching up with me, so in the meantime I've had precious little time to explore these topics myself. If we do want to discuss, I am very happy to do it slowly and without any pressure.
