
Sunday, 21 February 2016

How Worried Are You By The Symbol Grounding Problem?

Imagine you're a mental representation. You are a computational symbol system, and your job is to contain knowledge that is about the world and that can help your organism interact with that world (Newell, 1980). The 'aboutness' thing is the most important part of you - you are an intentional system, which means you have content that is meaningful.

So where did your content come from? (I'd like to know your thoughts, so please help by answering the questions at the end!)

This question is the issue of symbol grounding, posed by Searle (1980) and made famous by his Chinese Room thought experiment, then taken up in earnest by Harnad (1990).

The problem is that you can have a system that deals in nothing but syntax (the form and structure of a communication) yet still passes the Turing Test, i.e. looks as if it trades in semantics (meaning), even though that syntax is definitely not grounded in any real semantics.
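
To make the worry concrete, here is a minimal sketch (my own toy illustration, not Searle's or Harnad's) of a system that trades purely in syntax: it matches input strings against a rule table and emits canned output strings, with nothing anywhere that is about anything. The phrases and rules are invented for illustration.

```python
# A purely syntactic 'room': inputs are matched by shape and rewritten into
# outputs by shape. Nothing in here knows what any of the strings mean.

RULEBOOK = {
    "how are you?": "fine, thanks - and you?",
    "what colour is the sky?": "blue, usually.",
    "do you understand me?": "of course i do.",
}

def room(message: str) -> str:
    """Reply by pure pattern matching; no semantics anywhere in the system."""
    return RULEBOOK.get(message.strip().lower(), "could you rephrase that?")

print(room("Do you understand me?"))  # of course i do.
```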

There is currently no solution to the problem of endowing a mental representation symbol system with content/meaning/intentionality that doesn't involve that meaning having come from somewhere else. If the meaning is not intrinsic to the system's form (Bickhard, 2009, calls this being 'internally related'), then the meaning has to come from something else; but then how did that something get its meaning, and so on... it quickly becomes turtles all the way down. This means that mental representations cannot do what they need to do to play their role in our cognitive economy and make us functional, intentional beings rather than philosophical zombies.
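
The regress can be pictured with a toy dictionary (an illustration of mine, with an invented mini-vocabulary): every symbol is 'defined' only in terms of other symbols, so chasing a definition never bottoms out in anything that isn't itself a symbol.

```python
# Toy 'turtles all the way down': every symbol's definition is just more
# symbols, so lookups never bottom out in anything non-symbolic.

LEXICON = {
    "dog": ["canine", "animal"],
    "canine": ["dog", "wolf"],
    "wolf": ["wild", "canine"],
    "animal": ["living", "thing"],
    "living": ["thing", "animal"],
    "wild": ["living", "thing"],
    "thing": ["thing"],
}

def chase(symbol: str, depth: int = 5) -> None:
    """Follow definitions a few levels down; all we ever reach are more symbols."""
    if depth == 0:
        print("...and so on: still nothing but symbols.")
        return
    definition = LEXICON[symbol]
    print(f"{symbol} -> {definition}")
    chase(definition[0], depth - 1)

chase("dog")
```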

This has always struck me as an absolute disaster for the standard cognitive approach. But my question here is, do other people worry about this?

I would love it if people would comment below and answer the following questions:
  1. What flavour of cognitive scientist are you? (psychologist, philosopher, enactivist, representationalist, Jerry Fodor in the actual flesh, etc)
  2. Do you know about the symbol grounding problem?
Then, if you do,
  1. Are you concerned by the implications of the symbol grounding problem for mental representations?
  2. Do you think the problem has already been solved? If so, how?
Obviously I have opinions, but this time I am very much interested in yours!

References
Bickhard, M. H. (2009). The interactivist model. Synthese, 166(3), 547-591.

Harnad, S. (1990). The symbol grounding problem. Physica D: Nonlinear Phenomena, 42(1), 335-346.

Newell, A. (1980). Physical symbol systems. Cognitive Science, 4(2), 135-183.

Searle, J. R. (1980). Minds, brains, and programs. Behavioral and Brain Sciences, 3(3), 417-424.

14 comments:

  1. 1) Part philosopher, part cognitive scientist.
    2) I've taught the symbol grounding problem in both philosophical and cognitive science contexts.
    3) I'm very concerned about the problem as a philosophical one, and have sort of published on it.
    4) As a cognitive scientist I've yet to see the paradigm shift that van Gelder and Brooks promised us. And we can't do cognitive science aparadigmatically - that way lie replication crises :). So for now it would seem that the mind is a computer, symbol grounding aside.

  2. 1. Social psychologist, swarm intelligence guy

    2. The question assumes there is something to be known, as far as I'm concerned it is not a problem.

    3. I am concerned that cognitive science has gained prestige while assuming solipsism and a belief that brains equal minds and create worlds -- oh, but not really: the world is also real, coincidentally but not provably, as if it were a great leap of faith to believe in visible reality.

    4. I think it is a problem that is created by fundamentally misunderstanding the nature of thought and language, in the absurd belief that we are hermetically sealed, isolated information processors. Language does two things: (a) it allows manipulation of symbols, and (b) it allows people to communicate with one another. The "solution" is in understanding that these are really one thing. Thinking is a social act. Minds exist and function in linkage with other minds, and "meaning" is just agreement between minds.

  3. (1) I'm a mathematician and computer scientist (retired). I am not a representationalist, but I am also not anti-representationalist.

    I see Gibson as roughly on the right track, but I don't completely agree with him.

    (2) I am very familiar with the symbol grounding problem.

    (3) No, I am not at all concerned.

    The symbol grounding problem shows that philosophers have got it wrong. But that should be already obvious without the symbol grounding problem.

    (4) Yes, the problem is "solved", but it seems that the solution is invisible to most folk.

    Of course, as a mathematician, I can manage quite well with ungrounded symbols, because mathematics is abstract.

    There is no symbol grounding problem. We do not ground symbols. Rather, we symbolize the ground. Philosophy has it backwards. The philosophers want to start with logic (which is ungrounded) and somehow ground its symbols. It is hard to see how that could ever work.

    The scientists know better. They start with the ground, and find useful ways to symbolize it. That gives them grounded symbols to work with.

    Replies
    1. Interesting take, but doesn't this depend on what you think your models should be doing? If all you want is a set of abstract descriptions that account for behaviour, then fair enough, but I don't think that's Searle or Harnad's target. They were going after people with the grander ambition of reproducing mind in silico, using logic and Turing indistinguishability, not just modeling its functions. I agree that symbol grounding is not really a problem for developing explanatory models.

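One toy way to picture 'symbolizing the ground' rather than grounding symbols (my gloss, with invented thresholds and labels): the symbols are defined from raw measurements in the first place, so each one is grounded by construction.

```python
# Toy 'symbolizing the ground': symbols are defined from raw readings,
# rather than starting with symbols and hunting for their meaning.

def symbolize_temperature(reading_celsius: float) -> str:
    """Map a raw measurement onto a discrete symbol; the symbol's 'meaning'
    just is the range of readings it was defined over."""
    if reading_celsius < 5.0:
        return "COLD"
    if reading_celsius < 20.0:
        return "MILD"
    return "WARM"

readings = [2.3, 11.7, 24.9]
print([symbolize_temperature(r) for r in readings])  # ['COLD', 'MILD', 'WARM']
```
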
  4. 1. I'm a radical, whatever that means.
    2. Yup
    3. Nope
    4. It's a non-problem. Can't we say it is inherently dualist? After all, if mind is meaning, and world is not, then we have the old subject-object distinction and Kant's noumenal and phenomenal worlds. So a monist view would rather say either that "all is meaning" or that "none is meaning". Either way, there's no more need to talk about where the meaning comes from. It either is or is not. I guess if one were to look at Gibson's theory of information, it would fall into the "all is meaning" camp. I'm just thinking out loud here. It's funny to put it like that without implying that Gibson = Berkeley, but I guess it's because they differ in that Gibson took meaning to be in the world, not just in ideas. I guess, also, that this is how the enactivists can provide a stronger sense of what "meaning" is in the Gibsonian sense when they talk of "sense-making".

    Replies
    1. Hear, hear. There is no problem for the Enactivist / Embodied Cognition(er).

  5. I'm involved in AI; I'm some kind of "structural representationalist".

    Yes

    I think so.

    In my opinion the problem has already been solved by Devis Pantano with his theory of cognition (sorry, but his monograph is still a draft in Italian).
    The proposed solution is: the brain uses many perceptual symbols to construct abstract representations of the external world. The function of these symbols is to make explicit the abstract information that allows regularities in the sensory input to be found.


    Replies
    1. Is this different from Barsalou's perceptual symbol systems?

  6. 1. I'm a retired software developer with a life-long interest in artificial intelligence and language.

    2. This is the first time I've seen it posed in these terms; however, it's related to one of the standard problems in philosophy, that is, whether (some amount of) knowledge is innate or whether we gain all knowledge from interaction with the environment (blank slate).

    3. Clearly the turtles have to stop somewhere.

    4. I think there's a partial solution in the Natural Semantic Metalanguage (see https://www.griffith.edu.au/humanities-languages/school-languages-linguistics/research/natural-semantic-metalanguage-homepage ).

    This posits a list of 65 semantic primes which cannot be decomposed into simpler terms (note the word "simpler") and which appear to exist with exactly the same meanings in all languages studied.

    The reason I say partial is that I suspect there may be primitive (that is, built-in) bases in other systems: visual, auditory, etc.

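A toy sketch of how a primes-based lexicon is structured (my illustration, with an invented mini-lexicon rather than the actual NSM prime list): every defined word unpacks, step by step, into primes that are not defined in terms of anything else.

```python
# Toy decomposition: defined words unpack into primes that are not themselves
# defined. The entries below are invented for illustration only.

PRIMES = {"SOMEONE", "SOMETHING", "DO", "HAPPEN", "GOOD", "BAD", "NOT", "LIVE"}

LEXICON = {
    "HARM": ["DO", "SOMETHING", "BAD"],
    "DIE": ["NOT", "LIVE"],
    "MURDER": ["SOMEONE", "DO", "SOMETHING", "BAD", "DIE"],
}

def decompose(word):
    """Expand a word until only primes remain."""
    if word in PRIMES:
        return [word]
    expansion = []
    for part in LEXICON[word]:
        expansion.extend(decompose(part))
    return expansion

print(decompose("MURDER"))  # bottoms out in primes only
```
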
  7. (1) I'm an accountant.

    (2) I know the general problem of grounding semantics, and some traditional (unworkable) solutions.

    (3) I'm not concerned about it for mental representations, but I think it is a real problem for the philosophy of language and for epistemology.

    (4) I don't think it has been solved, partly because it can't be for mental representations (because of problems with the way we conceive of a mental representation); and because the route to a solution is not very comforting.

    Our ability to talk doesn't need grounding, in one sense, since "I can't talk to you" must be false or not a statement - it's a kind of Moorean paradox.

    We ground what we say in successful linguistic experiments. When we find that certain 'moves' are unintelligible, we give up on them.

    We can't have a general theory of what would count as an intelligible move, however, as this would generate an open question paradox (ask whether the theory itself is intelligible ...).

    In the catastrophic circumstances of a complete collapse of intelligibility, we no longer have a language in which we can articulate our predicament - so we can't say, for instance, whether the incoherence is logical, empirical, or cognitive (to pick from a long list of possibilities ...).

    Semantic content is 'grounded' in our successful experiments and the incoherence of questioning whether we can talk.

    Mental representations, for a theorist, must be articulated mental representations...

  8. 1. psychology

    2. I do now.

    3. Not really. I agree with JimK. I think all meaning and thought is part of the function of communication, and all knowledge is a social activity.

    My take on Gibson so far is that we can model subjective experiences as actions rather than as passive representations. That means that something like meaning and understanding are actions and we need to think, what is this feeling of meaning for? My view is that it's a function related to social engagement rather than anything real.

    So with the man in the Chinese Room, we require the action of subjective meaning, which is an action in the same way as producing language is an action, except you can't see it. The action of subjective meaning has particular consequences. So, for example, if the woman outside passed in a message that her mother had died, the entire system inside the room - including the man and the book - would not only have to respond in an okay way to that message, but it would have to be affected in its subsequent communications. There would have to be sensitivity in the responses to later questions, even if they were on unrelated topics. There would have to be a change of tone or mood. A book couldn't handle those actions, and that is why the system is inadequate.

    4. n/a

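A minimal sketch of the point about the room being affected by earlier messages (my illustration, with invented messages and canned replies): a stateless rulebook answers each message in isolation, whereas the behaviour described needs internal state that colours every later reply.

```python
# Stateless book vs. a responder whose state (here, a crude 'sombre' flag)
# changes after bad news and colours subsequent replies.

class StatefulResponder:
    def __init__(self):
        self.sombre = False

    def reply(self, message: str) -> str:
        if "died" in message.lower():
            self.sombre = True
            return "I'm so sorry to hear that."
        answer = "The meeting is at three."  # a stand-in canned answer
        return ("(quietly) " + answer) if self.sombre else answer

r = StatefulResponder()
print(r.reply("When is the meeting?"))       # The meeting is at three.
print(r.reply("My mother died yesterday."))  # I'm so sorry to hear that.
print(r.reply("When is the meeting?"))       # (quietly) The meeting is at three.
```
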
  9. 1. I'm a cognitive psychologist/scientist in my day job - embodied cog, non-representationalist.
    2. Yes I know about the SGP and tried to write about it in my PhD.
    3. My gut feeling is that it is a problem that arises when you begin by assuming that sub-personal mental representations exist and that they explain how mental content is 'stored' in our heads - so perhaps you only need worry about it if you think those assumptions are important for understanding cognition. Perhaps.
    4. Barsalou's perceptual symbol system approach (and perhaps Glenberg's indexical hypothesis too) tries to solve the SGP by arguing that the 'conceptual' (offline cognition) is grounded in perception/action systems (online cognition) - the latter, somehow, don't suffer from ungrounded meaning, so if the conceptual just is the use of perception/action systems then the problem goes away. You see a similar kind of idea in some 'new AI' work (e.g., in some of Luc Steels' work). I'm really not sure about this solution though - it still assumes that high-level, offline cognition (thinking etc.) involves the manipulation and transformation of sub-personal, in-the-head representations. It seems to me that this is just too simple/simplistic a solution to the problem of advanced cognition, which is a hard-won, cultural-linguistic achievement.

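A toy contrast of the two options mentioned in point 4 (my illustration; the 'features' are invented): an amodal token versus a 'perceptual symbol' whose content is nothing over and above re-enacted sensory episodes.

```python
# Toy contrast: an amodal token vs. a 'perceptual symbol' whose content is
# just a re-enactment (here, an average) of stored sensory episodes.

AMODAL_DOG = "DOG"  # a bare token; nothing ties it to any experience

class PerceptualSymbol:
    def __init__(self):
        self.episodes = []  # stored sensory snapshots (toy feature vectors)

    def store(self, features):
        self.episodes.append(features)

    def simulate(self):
        """'Meaning' as partial re-enactment: average the stored episodes."""
        n = len(self.episodes)
        return [sum(values) / n for values in zip(*self.episodes)]

dog = PerceptualSymbol()
dog.store([0.9, 0.1, 0.8])  # invented features: furry, scaly, four-legged
dog.store([0.8, 0.0, 0.9])
print(AMODAL_DOG, "means...?")  # the token alone points at nothing
print(dog.simulate())           # this symbol's content came from experience
```
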
  10. I think I'm closest to Neil Rickert's position (if I've understood it).

    1. Psychologist with a background in cognitive science.
    2. Yes - used to teach it (in appropriate courses).
    3. No - I think it is probably a sign that philosophers haven't got it right - along with the related mind/body problem. I think that the frame problem is probably a more interesting problem in the sense that it has potentially more immediate practical implications.
    4. I think there are some good steps towards answers (but I'm not up to speed). I think any solution that completely rejects personal in-the-head mental representations is in trouble - as I'm more or less persuaded that dualism and relativism aren't viable.


  11. 1. I'm a computer scientist with a lifelong interest in AGI and a long-time disagreement with the Chinese Room analogy.
    2. I think I know the symbol grounding problem but it is mostly an issue for brain models and AI programs that were never going to work anyway.
    3 and 4. If our brain model tries to represent outside reality explicitly, then it is doomed to failure. It commits the homunculus fallacy, but at the level of individual facts rather than at the man-in-the-machine level. Early symbolic AI programs attempted to explicitly represent knowledge but discovered very quickly that such programs can't exhibit the complex function of human brains.

    The right starting point, IMHO, is to see the brain, senses, and muscles as a real-time program that produces output as a function of its inputs and its memory or state. In other words, we start with a black box. We know that this black box must contain some kind of representation of the outside world. Otherwise it couldn't perform the input/output function that it does. What is unreasonable is an expectation that we can peer into it and easily identify components whose input/output function we can easily understand. We have seen this problem when looking at nervous systems of simple creatures. We don't find their internal organization anything close to what an engineer or programmer might create. Of course, unlike engineers and programmers, evolution did not have that as a goal when it gradually designed these creatures. I'm sure the representation is present but just not explicit. It's just going to take a huge amount of work to figure it out. If we ever have the ability to read the brain's content, the function that does it will take brain state as input, understandable representations as output, and will be very complex, rivalling the complexity of the brain itself.

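A sketch of the black-box picture (my illustration, with an invented update rule): output is a function of input and internal state, the state clearly tracks the input history, yet nothing in it is stored as an explicit, labelled fact that you could simply read off.

```python
# Black-box view: output = f(input, state). The state tracks the inputs
# implicitly; nothing inside is stored as an explicit, labelled fact.

import random

class BlackBox:
    def __init__(self, size: int = 8):
        self.state = [0.0] * size
        self._mix = [[random.uniform(-1, 1) for _ in range(size)]
                     for _ in range(size)]

    def step(self, signal: float) -> float:
        """Update the internal state from one input and emit one output."""
        new_state = []
        for row in self._mix:
            total = signal + sum(w * s for w, s in zip(row, self.state))
            new_state.append(max(0.0, min(1.0, total)))  # squash into [0, 1]
        self.state = new_state
        return sum(self.state) / len(self.state)

box = BlackBox()
outputs = [round(box.step(x), 3) for x in (0.1, 0.9, 0.5)]
# The state now reflects the input history, but no single number in it
# 'stands for' anything you could point to in the world.
print(outputs)
```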