Saturday, 19 May 2012

Language isn't magical (but it is special)

One of the most common comments about ecological psychology is that it's hard to imagine how it could apply to things like language. The sense is that language is a completely different kind of beast than perception-action and that it requires a completely different theoretical account (cognitive psychology). Andrew and I disagree. In this post I outline the similarities and differences between language and other types of perceptual information. The main idea is that language is indeed the same type of thing as perception-action, but there are key differences between them in the relationship between the information and what it means. These differences permit language to be flexible according to context, culture, and goals; to be expandable according to changing needs; and to be portable, allowing us to access information about things that are not currently in the environment. These properties make language special, but not magical.

Event Perception

Events in the world are defined in terms of their underlying dynamics. For example, two instances of a bouncing ball are instances of the same type of event - a bouncing-ball-event - because the dynamical equations of motion are the same in both cases. The two instances might be different in their parameters (e.g. the initial height of the ball) but they are still examples of the same event in the world.
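
Here is a toy sketch of what I mean; the deliberately crude integrator and all of the numbers are invented purely for illustration. The function stands in for the event's equations of motion, and the two calls are two instances of the same event type that differ only in a parameter.

    def simulate_bounce(h0, g=9.81, e=0.8, dt=0.01, t_max=3.0):
        """One set of equations of motion defines the event type; different
        parameter values give different instances of the same event."""
        h, v, t, trajectory = h0, 0.0, 0.0, []
        while t < t_max:
            v -= g * dt              # constant downward acceleration
            h += v * dt
            if h <= 0.0:             # ground contact: rebound with some energy loss
                h, v = 0.0, -v * e
            trajectory.append((t, h))
            t += dt
        return trajectory

    event_a = simulate_bounce(h0=1.0)   # same event type...
    event_b = simulate_bounce(h0=2.5)   # ...different initial height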

Events in the world create information. The light reflecting off a bouncing ball is structured according to the laws of ecological optics by the specific motion of that ball. This structure (optic flow) is specific to the event in question and any organism that detects this information can therefore directly perceive the event in the world. The meaning of the information in this case is the dynamics of the event in the world and this is the meaning that the organism must learn. If the organism can use this information to successfully control its behaviour, we take this as evidence that the organism has access to this meaning.

Speech is a type of event (well, probably a series of nested events, but I'll get to this later). The act of speaking structures the acoustic array according to the laws of ecological acoustics. This structure is specific to the speech event, and any organism that detects this structure can therefore directly perceive the speech event. But in this case, the meaning of the information is not the dynamics of the articulation of the word. The meaning the organism must learn is the conventional meaning of the word that was spoken and, if the organism acts in a manner consistent with that conventional meaning, this again is evidence for access to that meaning.

From a first person perspective, both cases require learning the meaning of information. I argue that the mechanism of learning this meaning is identical for both types of event.

If, for analysis purposes, we adopt a third person perspective, it is possible to see an important difference between speech events and events such as bouncing balls. The difference is in the relationship between the information and what that information means. For the bouncing ball, the fact that the optic flow pattern means 'bouncing ball' is underwritten by the lawful process by which the ball's motion was projected into the optic array, and the form of the information therefore relates to the underlying event. For the speech event, the fact that the acoustic array pattern means, for example, 'Hello' is not underwritten by a lawful process, and
...there is no intrinsic similarity between the sounds of most words and their referents: the form of the word dog gives us no hints about the kind of thing to which it refers. And nothing in the similarity of the forms of dig and dog conveys a similarity in meaning.
Smith & Gasser, 2005, p. 22
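
To make the contrast concrete, here is a toy sketch; everything in it is invented for illustration. The first function stands in for a lawful projection, so the structure it returns is fixed by the event's dynamics. The second is a bare conventional lookup, so nothing about the form of the word constrains what it maps onto.

    # Perceptual information: structure lawfully generated by the event.
    def looming_rate(ball_speed, viewing_distance):
        # crude stand-in for a law of ecological optics: the rate of optical
        # expansion is fixed by the ball's motion relative to the observer
        return ball_speed / viewing_distance

    # Linguistic information: the form-meaning mapping is a convention.
    CONVENTION = {
        "dog": "a familiar four-legged animal",
        "dig": "to break up and move earth",
    }

    def conventional_meaning(word_form, convention=CONVENTION):
        # similar forms ("dig"/"dog") need not have similar meanings,
        # and nothing about the sound of "dog" points at the animal
        return convention.get(word_form)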

Language as an information medium

For the sake of clarity, I will reserve the term perception to refer to the apprehension of structure in an energy array when the meaning of this information is underwritten by a specification relationship between the information and the world. Using this definition, hearing the word "dog" is not an act of perception. This is because hearing a spoken word involves the apprehension of structure in an energy array when the meaning of this information is underwritten by a conventional relationship between the information and the world. When I use the term perceptual information, I mean information whose meaning is underwritten by a specification relationship. Auditory information is about sounds. Visual information is about visual properties of the environment. In contrast, linguistic information (in whatever modality it is conveyed) is about the conventional meanings of linguistic events, which might refer to sounds, sights, ideas, etc.

Because the auditory events of spoken language are about the conventional meanings of linguistic events rather than the physical act of speech articulation, it may be helpful to think of language as its own medium. The medium of language permits the flow of linguistic information through the modalities of sound (speech), vision (writing, sign language), and touch (Braille). 

This difference is obviously important and the consequences of it have been noted by many linguists and psychologists. For one thing, humans are uniquely adept at using linguistic information. Learning the meaning of a linguistic event is more difficult than learning the meaning of perceptual information. I can say the word "dog" whether or not there is an actual dog nearby, but the sound of a dog barking is usually going to mean that there is a dog within earshot. Humans have some adaptations that help them with this problem. We are good at establishing joint attention to something in the environment. We are also incredibly motivated to communicate with one another, which means that we're willing to persevere with this difficult learning problem. The process of learning the meaning of linguistic information will be an important part of a research programme on ecological approaches to language. That said, the task analysis that I will undertake in the next post will focus on proficient language users rather than on beginners.

To summarise, then: the route from language to its meaning is not underwritten by laws, the way the route from perceptual information to its meaning is. But, critically, from the first person perspective of the organism, there is no difference in what it is interacting with, and I argue that the organism will therefore apply the same tools to both problems. The different outcomes reflect the differences in the routes to meaning, and not fundamental differences within the organism.

In the next post I will consider the types of tasks for which linguistic information might be useful. An actual research programme would select one specific task from one of these types, but I want to begin by thinking broadly about when and how linguistic information guides behaviour. I will also introduce the idea of perceptual-linguistic systems, which will be central to understanding how the meaning of a linguistic event is understood. Finally, I will discuss why this approach to language is explicitly non-representational.

References

Smith, L. & Gasser, M. (2005). The development of embodied cognition: Six lessons from babies. Artificial Life, 11 (1), 13-30.

40 comments:

  1. Hi Amy

    You asked:

    I see how your theory could account for learning a word like "dog," but how would it explain learning words like "promise?"

    https://twitter.com/#!/amy_tabor/status/204217380015054848

    The gist of your question seems to be this: There is an animal that we refer to with the word "dog." This animal is a thing in the environment, so in terms of word learning it is easy to imagine having enough stable perceptual experience with this animal that we can begin to link it to the word "dog." In contrast, a promise can't be seen - it's not a concrete thing - so it's harder to imagine having stable enough perceptual experience to learn the meaning of the speech event "promise."

    I can think of a few potential responses to this at the moment, and I don't know which is the most fruitful, so I'm just going to rattle them off and we'll see what you think.

    1) Although "dog" refers to a familiar animal, the word is used in a variety of ways. "Dog days of summer", "Dog-ear a page", etc. Wictionary provides 12 meaning for the noun form and 6 for the verb form. If we consider how flexibly we can actually use the word "dog" (the accounting of which would far exceed the number of discrete definitions it is assigned) it becomes much less obvious why this word should be somehow more amenable to this ecological account than the word "promise" (and at this point you might think my ideas are even less plausible than before you read this comment!). This is especially the case since "promise" is fairly well-behaved, having 2 meanings as a noun and 1 meaning as a verb. In any case, it is not clear that "dog" as a word with a variety of potential meanings is obviously more straightforward than a word like "promise", which actually has a fairly well-defined meaning.

    (cont'd)

    Replies
    1. Random question that occurred to me reading this; are words like 'dog' more likely to have multiple meanings and uses than words like 'promise'?

    2. Yes. The typical explanation is that concrete words like "dog" have more perceptual properties linked to them that can be used to ground figurative uses. This is a fairly cognitive take on the issue. I don't know yet how I'd account for the difference in the proposed framework.

  2. 2) Although in the previous point I appealed to the number of definitions of a word to illustrate flexibility in word-meaning, I argue against thinking of words as having stable core meanings. For example, the Wiktionary definition of "promise" discusses making vows and oaths as acts of promising. By this definition, simply declaring "I promise you that ..." constitutes a promise. In reality, our judgement of whether or not a promise has occurred is much more complex and dependent on perceptual and other factors (what I call perceptual-linguistic systems). Tone of voice (sarcasm, sincerity), situation (marriage ceremony, movie containing a marriage ceremony), and prior knowledge about the speaker all participate fully in the creation of meaning during the speech act. This is equally true for the meaning of the speech act "dog." Even if we restrict this act to the noun form that refers to the animal, the actual referent of "dog" in a conversation could be a huge variety of things, from a photograph of a dog, to a line drawing of a dog, to a cat dressed in a dog suit (in a line-up of cats dressed like other animals you can easily imagine someone referring to this as a "dog"), to a cloud shaped like a dog, etc. This is the type of thing that Smith & Jones (1993) refer to when they say that the thing that makes cognition smart isn't stability, it's flexibility. If we let go of the idea that words like "dog" are especially easy, then their contrast with words like "promise" is less apparent. Granted, children (Western, English-speaking children) are more likely to learn the word "dog" before the word "promise", and this is related to the fact that there is at least one stable perceptual referent for "dog" that we can label in kids' books. But this early word learning is far from the eventual complexity that characterises adult usage of the word.

    3) In this approach to language, evidence of understanding word meaning comes from appropriate behaviour (either using a word in a successful communication or acting sensibly in response to linguistic information in the environment). Again, the intuition is that "dog" is somehow easy because it is a concrete noun and concrete nouns are "content rich". Abstract words like "promise" and "democracy" are often assumed to require representations because it is unclear to people how we can use these words appropriately if we can't learn their meanings from perceptual experience (I think event perception will be an important answer to this, but I'll get to that in a later post). However, it is clear that we are able to learn the meaning of many "content poor" words (in that we can use the words correctly) even though they are entirely without physical referents. For instance, words like "than" and "of" only serve to string other linguistic content together. They have functions, but not meanings, in the sense that nouns and verbs have meanings. Yet we are able to use the words perfectly well: we miss them when they are absent, and we notice when they are used inappropriately. And yet there is no temptation to invoke representations to explain this ability (I doubt you feel strongly that you have a representation of the concept "of").

    Replies
    1. I don't see why we wouldn't have representations of "of", for exactly the reasons you listed. It is a real word, with a syntax and a semantics and conditions of use. It's a projection of a functional head in a syntax tree. Why would anyone who believes in representations suddenly just drop the ball on small words?

      Also, you should look up JL Austin and the concept of "speech acts" in linguistics, because it relates directly to this post and why the word "dog" is different from "promise". The pragmatics associated with speech acts like promising are quite different from those of simple referring words.

      Lastly, I want to say that you don't need representations for words like "promise" because they are abstract nouns; you need them because you have to be able to compute the truth conditions and sincerity conditions of the sentence, with respect to a particular context and a particular world of evaluation.

    2. Hi,

      I'm happy for people who believe in representations to also believe that function words have representations. You're correct that there is no necessary reason they would not be represented (Dietrich & Markman would probably put them on the function end rather than the content end of the spectrum). The point wasn't about representations, really. It was about our ability to use words correctly on the basis of their function rather than their content. Evidence that we can do this quite well means that the fact that "dog" is more "content rich" than "promise" shouldn't influence how amenable these types of words are to an ecological explanation.

      I am familiar with Austin and I mention him in the next post. I like his idea of speech acts.

      In terms of whether representations are necessary, I encourage you to browse these posts for Andrew's and my response to the idea (see below). But, I wonder what evidence you are drawing on that 1) humans actually do anything like computing truth and sincerity conditions (just these dimensions?) and that 2) these acts explicitly require representations (having ruled out other possibilities, like dynamical systems, empirically).

      http://psychsciencenotes.blogspot.co.uk/2011/11/robots-representation-dynamical-systems.html

      http://psychsciencenotes.blogspot.co.uk/2011/07/selection-of-problems-with.html

      http://psychsciencenotes.blogspot.co.uk/2011/03/chemero-2009-chapter-3-theories-of.html

    3. "But, I wonder what evidence you are drawing on that 1) humans actually do anything like computing truth and sincerity conditions (just these dimensions?)"


      I think that truth conditions are central to explaining how meaning is derived in language. Knowing what a sentence means is knowing that sentence's truth conditions, in other words knowing what the world would have to be like in order for that sentence to be true. We know a sentence is true if its truth conditions hold in the actual world (and likewise we know a sentence is false when we know its truth conditions do not or cannot hold in the actual world).

      Simple example: you know what the completely novel sentence "Vampires usually get their coffee at 7-11" means, and you know that it's false, because you know what the world would have to look like for it to be true (vampires would need to be real, they would have to drink coffee, they would have to buy it more often at 7-11 than elsewhere) and you know that the world is not that way.

      Since natural languages are compositional (mostly), the meaning of a sentence is a function of the meaning of its parts and how they are put together. The semantics of a word is its contribution toward the truth conditions of the sentence. These truth conditions are computed from the semantics of each word in the sentence using simple operations like function-argument application and lambda abstraction. (This is just basic linguistic semantics; for people unfamiliar, here are some class notes I found through Google: http://www.sfu.ca/~jeffpell/Ling324/fjpSlides5.pdf)

      You asked if I could "empirically" rule out any other approach. Before I can provide any evidence against another approach, I need to know how it handles truth conditions. Does your model of natural language meaning involve truth conditions, or not?

      If not, then what does it mean to understand the meaning of a sentence? How do we decide if a sentence is true or not?

      If so, then how are the truth conditions for a sentence arrived at? I'm claiming computation over abstract semantic units. I'm guessing you won't. ;)

      By the way, I found your site through the somethingawful thread that adwkiwi has posted in (don't know if that's you). I'm user FoiledAgain. I have some questions for you about other topics in language too, especially your views of Poverty of the Stimulus. I'd be really happy to carry on the conversation over there too!
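
      (A toy sketch of the kind of compositional computation described in the comment above, with an invented three-entity world and made-up denotations; a serious fragment would be far more careful. Denotations are treated as functions of a context c and a world w, and the sentence's truth conditions are built by applying them to one another.)

          def vampire(c, w):
              # [[vampire]]^{c,w}: true of x if x counts as a vampire in context c, world w
              return lambda x: x in w["vampires"](c)

          def buys_coffee_at_711(c, w):
              return lambda x: x in w["seven_eleven_customers"]

          def usually(restrictor, scope):
              # crude quantifier: most restrictor-things are scope-things
              def truth_conditions(c, w):
                  domain = [x for x in w["entities"] if restrictor(c, w)(x)]
                  return bool(domain) and sum(scope(c, w)(x) for x in domain) > len(domain) / 2
              return truth_conditions

          world = {"entities": ["al", "bea", "cal"],
                   "vampires": lambda c: {"al", "bea"} if c == "talking about the gang" else set(),
                   "seven_eleven_customers": {"al"}}

          sentence = usually(vampire, buys_coffee_at_711)   # "Vampires usually get their coffee at 7-11"
          print(sentence("talking about the gang", world))  # False in this world; the context fixes who counts as a vampire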

    4. The idea that a language user must compute truth conditions is an assumption. For this to be an essential part of a theory of language use, supporters of this idea must establish that information exists for people to make these evaluations. If information does not exist to support this task, then no matter how intuitively appealing the idea is to some linguists, it cannot be part of human language use. The example of the vampire and his coffee is interesting with respect to the availability of this information. When presented out of a use context - that is, as an example in your argument - it seems obvious that it is false. However, I can easily imagine several normal scenarios where this utterance would be true. I could be referring to the Puerto Rican gang, the Vampires; I could be referring to people dressed as vampires (in the context of some type of convention where costumes are the norm); or I could be using vampires metaphorically (either as bloodsuckers, e.g., lawyers, or creatures of the night, e.g., teenagers). These are perfectly average examples of language use and they highlight a fundamental limitation of the truth condition idea, namely that meaning resides in words rather than in contexts.

      Interestingly, it actually seems that language users first ASSUME the truth of utterances and then try to figure out how they make sense (it is one of Grice's conversational maxims that we assume things that are said to us to be true). This is useful or else people would have real difficulty with things that *could* be true or false but that we don't *know to be* true or false.

      Andrew is always eager to talk about the idea of Poverty of Stimulus and how that assumption has led cognitive psychology astray - I'm sure you can get him going on this!

    5. You didn't directly answer my question, but I think the answer is "no". Your model doesn't need/include truth conditions. Do I read you properly?

      My original follow-up still stands: without them, how do you understand what a sentence means? My claim was that understanding what a sentence means is knowing that sentence's truth conditions. Fill in the blank for your model now: A speaker understands the meaning of a sentence when ________________. (I don't mean to actually confine you to one sentence, I'm just trying to give you an idea of the answer I'm looking for.)


      "The example of the vampire and his coffee is interesting with respect to the availability of this information. When presented out of a use context - that is, as an example in your argument - it seems obvious that it is false. However, I can easily imagine several normal scenarios where this utterance would be true. ... These are perfectly average examples of language use and they highlight a fundamental limitation of the truth condition idea, namely that meaning resides in words rather than in contexts."

      What you are highlighting here is hardly a challenge to truth conditional semantics, it's a well known issue to anyone who works on natural language meaning, and the general solution is taught in undergraduate courses. Do you seriously think that linguists would somehow miss this very obvious fact about their main object of study?

      Contextual variation is "hard-coded" into the formal denotations of words. The formal denotation of "vampire" is:

      [[vampire]]^{c,w} = λx. x is a vampire in c in w.

      There are supposed to be some superscripts and subscripts in there, but I don't know how to do that on this blog. Read more informally, it says "the denotation of the word 'vampire' with respect to a context c and a world w is a function from entities to truth values which returns true for any entity x such that x is considered a vampire in c in w." So if the context is such that we're talking about that gang, then the above evaluates to true. (Well, not so simply. It doesn't evaluate "in place", it gets carried up the tree and becomes part of the truth conditions of the sentence. This is done through a few elementary operations explained in the link in my last post.)

      "they highlight a fundamental limitation of the truth condition idea, namely that meaning resides in words rather than in contexts.""

      I hope my sketch of the word "vampire" explains this a little more. Linguistic semantics allows for meaning to be derived both lexically and contextually.



      "Interestingly, it actually seems that language users first ASSUME the truth of utterances and then try to figure out how they make sense"

      "figuring out what it means" and "assuming it's true" are, for me, the very same thing. To know what something means is to know its truth conditions. If you are assuming it is true, then you already know what it means. I don't see how you can split this into two different processes.

    6. "Interestingly, it actually seems that language users first ASSUME the truth of utterances and then try to figure out how they make sense"

      Upon further inspection, I appear to have misread you. I apologize. You actually said "how they make sense", and I replied as if you said "what they mean". Given your more broad wording, I agree with you. How a sentence "makes sense" could include implicatures, presuppositions, expressive content, etc. in addition to the "bare" truth conditions.

      Though I agree with you there, it leads to another "conflict" of theories, as to how that extra information is derived. A representational approach to this would be something like Discourse Representation Theory (http://plato.stanford.edu/entries/discourse-representation-theory/). Indeed, the discourse phenomena that are meant to be explained by DRT, like anaphora and presupposition, seem like very difficult things to explain without recourse to representation. Have you done/would you consider writing a post about discourse?

  3. Hi, I would like to ask if you (Andrew and Sabrina) could tell me the main differences between Ecological Psych and Radical Behaviorism.
    Thanks!

  4. Anonymous,
    There are many differences between ecological psychology as conceived by Gibson and Radical Behaviorism as conceived by Skinner. This is a long discussion. One problem is that there are a few versions of each of these systems. Though Radical Behaviorism is now often considered synonymous with Skinnerian Behaviorism, there were several other candidates, and, of course, all scientific systems are evolving.

    There are core similarities though. Both descend (historically/intellectually) from the lineage of American Philosophy, i.e., Pragmatism and Radical Empiricism. You can pick up a bit of that here. Or, if you just want a quick intro to eco psych, look here.

  5. Amy (assuming you are reading this),
    In a system like this, no word 'means' anything, so the question is only how you come to use the word and how others come to respond. So, even though we might ask for an explanation for people's use of the word "promise", that must be a shorthand for asking why the word is said by specific types of people in specific situations. (You know, because the word is used in so many different ways, and we could presumably have a different explanation for each use.) So... to get us started, I'll provide the specifics: Why would a person say "I promise I will take out the trash?"

    Now that we are more specific, I don't really have an answer :- )

    Skinner (1957) created a category called "autoclitic" to handle words like "promise" in situations like this. One large category of autoclitics are words/phrases that modify the strength of another part of what was said. In that case, "promise" is simply a stronger response than "I will take out the trash", which is itself a stronger response than "I think I will take out the trash". The analogy is to a forceful vs. normal vs. weak pushing of the lever (or any other operant behavior).

    This is brilliantly clever, but I don't know if it stands up, and I don't know what the current field of verbal behavior analysis thinks about that aspect of Skinner's work.

  6. Anonymous,

    That's a good question, and if it had occurred to me I would have put something about this in the introduction post. There are many differences between radical behaviourism and ecological psychology, but two of these are particularly salient for a discussion of language.

    First, ecological psych includes a theory for how information enters the system via perception, while radical behaviourism does not. Ecological psych's idea of event perception is critical to understanding language because it opens up the possibility of many different layers of structure that can convey meaning in a speech event. For example, there are event structures for words, but there are also probably (and this is open to rigorous testing) structures for sentence types, moods, conversation types, etc. Ecological psych also tells us how to look for these (instances of the same event will have the same dynamics). Without a theory of perception, radical behaviourism isn't quite sure what information is available to support language. The obvious candidates are words, but words are only a part of the story and no account that focuses on words alone will be able to explain the complexity, flexibility, and unreasonable success of language.

    The second difference between radical behaviourism and ecological psych that is relevant to language is the level of behavioural analysis. Radical behaviourism tells us that the response to a stimulus depends on our history with that stimulus. Using these principles, we can train people and other animals to do an astonishing variety of things. For instance, we could probably train a rat to ride a tiny bicycle. But, radical behaviourism doesn't explain HOW the rat comes to learn to ride the bicycle. Ecological psych fills this gap with the theory of information and perception-action systems. With ecological psych we can figure out what perceptual variables the rat uses in the continuous control of her behaviour while riding a tiny bicycle.

    These two differences between radical behaviourism and ecological psych make the latter a much better candidate for studying language. It gives us a theory of information, which we can use to identify the event structures (beyond just words) that convey meaning during speech events. It gives the proper priority to perception, so that we can appropriately identify the types of things language can be used for (and the types of things it cannot be used for). And, it explains how information can be used in the continuous control of action (with the support of the dynamical systems literature).

  7. Eric said:

    "Skinner (1957) created a category called "autoclitic" to handle words like "promise" in situations like this. One large category of autoclitics are words/phrases that modify the strength of another part of what was said. In that case, "promise" is simply a stronger response that "I will take out the trash", which is itself a stronger response than "I think I will take out the trash"."

    I am not a fan of this idea. I've worked a little on some computational linguistics programmes that used this notion to develop a full vocabulary using only a set of universal semantic primitives (a la Wierzbicka). The hope of the computer programmers on the project was that we could describe all verbs in terms of vectors where each cell corresponded to a value on a particular dimension, like strength, with respect to a primitive verb. The idea is intuitively appealing. One problem is that there is no principled reason for particular words to have particular dimensions. The relationship between physical forces and words like "promise" versus "intend" is obviously only metaphorical, and the fact that we use one metaphor rather than another is related to culture/convention rather than some natural fit between them. This means that autoclitics can only be used to describe existing phenomena. We can't look at a set of unfamiliar but semantically related words and generate any sensible hypotheses about how these might relate to one another in the language of physical forces.

    Another problem with autoclitics is that the idea strips out the nuance in meaning. The difference between "say" and "shout" is more than just one of valence, even if that is one way to describe the difference. "Shout" isn't just "say" turned up. It has its own emotional connotations that are not rooted in a magnitude difference with "say." This is why the phrase "to shout" is not equivalent to the phrase "to say extremely loudly."
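
    (A toy sketch of the vector idea discussed above, with invented dimensions and values, just to make the proposal concrete; the objection is that nothing principled fixes which dimensions a verb gets, and that "shout" is not just "say" with one value turned up.)

        import math

        # each verb described relative to invented dimensions (loudness, force, commitment)
        VERBS = {
            "say":     (0.5, 0.5, 0.5),
            "shout":   (0.9, 0.8, 0.5),
            "whisper": (0.1, 0.2, 0.5),
            "promise": (0.5, 0.7, 0.9),
            "intend":  (0.4, 0.5, 0.6),
        }

        def distance(a, b):
            # Euclidean distance in the invented semantic space
            return math.dist(VERBS[a], VERBS[b])

        print(distance("say", "shout"), distance("promise", "intend"))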

  8. I wonder to what extent this is 'translating' other areas of psych (e.g. sociocultural psych, discursive psych) into the language of ecological psych?

    Also I know Andrew read one of Andy Clark's books, forget which, but he has a nice analogy on p.81 of 'Natural Born Cyborgs' - the Mangrove Swamp - in which meaning is emergent from language (rather than labelled by it). He's also written some articles (e.g. I think one called "Magic Words") which may relate to some of these issues.

    Replies
    1. I haven't read Natural Born Cyborgs, but I have read other stuff by Clark and I know of the mangrove example you mention. I like the idea that using language opens up new possibilities for thought - the act of saying something literally changes the landscape of possible subsequent thoughts and utterances. I like the spirit of Clark's take on language, but we differ in a number of ways.

      First, Clark is happy to talk about cognitive computation and I reject the idea that cognition is computational and the information-processing baggage that goes with it.

      Second, and relatedly, Clark allows for representations in certain circumstances, which I think is a mistake.

      A third difference is that Clark is tackling language from further up-stream than I am. By applying an ecological framework to language I am trying to account for the basic perception of speech events and how these events come to have meaning for the perceiving-acting language-user. Clark assumes these steps and then focuses on the function of public language for cognitive agents.

      I did check out that Magic Words paper, and his breakdown of the 6 things that language does is something I'll certainly return to - I might even revise the next post in this series with these ideas in mind, so thanks for directing my attention here!

  9. "The meaning the organism must learn is the conventional meaning of the word that was spoken ... linguistic information (in whatever modality it is conveyed) is about the conventional meanings of linguistic events"

    This seems sensible, but haven't you just admitted that words have "meanings" (i.e. they point to something in the world, without being it) and we learn those meanings... in other words, that our minds contain representations of the world?

    In which case - haven't you thrown out radical embodiment?

    To put it another way, many people would say that as soon as you admit that words have meanings, you have become a representationalist.

    And (I'm no expert) but didn't the later Wittgenstein specifically deny that words had meanings (in the conventional sense anyway) - for exactly that reason...?

    Replies
    1. Words having meaning doesn't necessarily imply we have representations in our mind (although I, unlike the authors, believe we do).
      Don't forget words are to a certain extent external to our minds (language doesn't need me to exist). That's what Wittgenstein pointed to when he related meaning to intersubjective situations where language games occur, the key word to me being intersubjective.

    2. Stay tuned for the next post where this is clarified! In a nutshell, the idea that a speech event conveys a conventional meaning does not imply representations or stable core meanings similar to dictionary definitions. It only tells us that whatever meaning is conveyed is related to the speech event through convention rather than specification. Because we are going to take appropriate use as evidence for access to meaning (similar to appropriate action as evidence for access to the meaning of perceptual information), there is no need to assume that we are using stored meanings to understand speech events - we just need to be able to use them / respond to them properly. In the next post I explicitly discuss the apparent contradiction between a speech event conveying meaning and taking a non-representational stance. I think what I've come up with holds together, but I will be very interested to read what you think about it.

    3. Great - looking forward to it!

    4. Just to emphasise: when I study the perceptual basis of skilled action, I generally want to measure whether someone has perceived a variable using action. For example, while the rhythmic movement stuff that led to the model does include judgment studies with no action component, they were always intended to be replaced by experiments where we have people move and measure how that changes in response to manipulations of information. The measure of whether someone has perceived what I think they have perceived is therefore whether or not they acted as if they had perceived that information.

      Sabrina's move is to apply exactly this logic to language; not by analogy, but on the premise that the same logic holds for language.

    5. OK. What I think you're getting at (correct me if I'm wrong?) is something like this:

      "If someone can use the word 'dog' to usefully talk about dogs, then they can be said to know the word 'dog'. 'Dog' has no meaning that sits in the brain, a phantom picture of a dog. That would be a representation of a dog, and I claim it is nothing to do with being able to use the word 'dog'".

      ?

    6. Neuroskeptic,
      One way or another this is all growing out of pragmatism. In pragmatism (roughly speaking) any 'thing' is 'the consequences of the thing.' So, if we know the consequences of 'knowing the word dog', and we observe those consequences in a person, then that person 'knows the word dog'... and that's all there is to it. The remaining question is 'what are the consequences?'

      It is a difficult question to see through to the end, but we can get a start pretty easily.

      Surely when teaching someone the word 'dog' our main criterion for judging whether or not they 'know the word' is observing their use of the word and their response to our use of the word. We never actually assess whether they have an image in their head, so it would be totally disingenuous to claim that was the criterion.

      If this ever erupts into a full debate about linguistics, one problem will be people talking past each other for lack of understanding the very different underlying philosophies that are in play before we even get to the linguistic issue. This can be seen in debates about eco psych from the '70s and '80s, and people are only now beginning to understand what the real confusion was about.

  10. sjgknight has a point imho. One is baffled by the continuous reinventing of things... Btw Skinner's autoclitics are for instance remarkably similar to the medieval modus vs dictum distinction. I'd like to come back to this thought at the end.

    But I have a more precise remark concerning your approach. To explain my point, I'd like to introduce a classical dichotomy in the discussion: (1) language as a subjective activity of coding/decoding/understanding (Saussure's 'langage') versus (2) language as a social construct (as in the expression 'the English language'), where one may assume that many subjective processes resulted in a functional entity (which corresponds to Saussure's 'la langue').

    It seems to me your approach has a huge blind spot for everything related to the latter. It's quite apparent when you dismiss the computational research on 'vectorizing' Wierzbicka's primitives as 'cultural'... You seem to think that cultural phenomena are irrelevant to psychology because they are not universal. But one might argue that even if no semantic primitive is really universal, all languages have some kind of primitives and therefore a certain cultural 'tendency to have primitives' is universal... that would make it very relevant to studies of the mind.

    To me, it's as if you claimed that the study of chemistry renders biology irrelevant because all biological phenomena are in the end chemical. Yet obviously the higher-level constructs studied by biology have a reality of their own, with laws and properties that can/should be studied as such. It doesn't always involve going all the way down to the chemical properties.

    In other words, I claim that cultural/conventional phenomena in language have rules of their own. You could see language as a shared (social) construct created by the speakers. The goal would be to objectivize their subjective understanding into an intersubjective construct or artifact. One of the properties of this social artifact is to classify the reality into shared categories. In that view, trying to find major emerging dimensions within the categories, which your computational linguists were trying to do, seems quite an interesting approach.

    So I don't say your view is 'bad', I'm just saying it tackles low-level routines that may assemble into higher-level phenomena when you move from the 1st person view to the cultural constructs.

    An interesting question then is: What are the interactions between the lower level perceptual mechanisms and the higher level semantic constructs? To tackle that, I'd like to go back to the modalisation/autocliticity discussion. Some linguists argue that the binary opposition you noted between 'autoclitics' (aka modalisers and grammatical words) and lexemes (content words) is just the simplification of a continuous gradient between two poles. A theory called grammaticalization explains that content words can (under certain circumstances) have their meaning become less and less concrete and more and more grammatical. An example would be Low Latin 'casa' (=house) becoming French 'chez' (=at). This process, among many others, indicates there are degrees of universality in the semantic constructs. In the context of your theory, it could perhaps be interpreted as the tendency to re-create important general patterns from the 'small change' of available content words.

    Replies
    1. 'Representation' doesn't mean the same thing for linguists and for psych theories...

      Also I think what you're just saying is that language is another affordance :)

    2. I'll take this comment a bit at a time.

      First, I am not dismissing cultural influences on language. I am a big fan of Wierzbicka and I did a degree in anthropology before I came to psychology. In fact, a background in cross-cultural linguistics partially informed the perspective on language that I describe here. Particularly, our Western, modern, English-speaking experience of language entails things like a familiarity with dictionaries that pushes psychologists with this background to think about language as consisting of elements with stable core meanings (an idea I dismiss in the next post). Culture is enormously important and is routinely ignored in psychology.

      My problem with the approach I mentioned wasn't that it involved culture. One of my issues was that it is impossible to use the idea of autoclitics to make predictions about what dimensions a word might have - you can only identify possible dimensions based on your own or someone else's experience with a language. The other problem is that there is no theoretical basis to think this is how we understand word meanings. Being able to describe a verb with a vector might be useful to psychologists or linguists, but there is no theory to suggest that this description has any resemblance to the actual process of language comprehension.

    3. I agree entirely that there are subjective, social components of language use that can't be dismissed by focusing on "lower levels" alone. In the next post I talk about how many of the tasks for which language is useful can only work because of the types of animals we are (e.g., highly motivated to interact with one another, sensitive to hierarchies and rule-breaking, eager to please). I think that this level of analysis will be absolutely essential to understanding how language succeeds given that meaning is only related to speech events via convention rather than specification.

      My goal with this analysis is to start from the ground up. Too many attempts to understand language (e.g., Chomsky) have neglected to consider what information is available in the environment to support language comprehension. A successful theory of language must grow out of a successful theory of perception.

    4. Thanks for your answers and my apologies for over-simplifying your view about the role of the cultural.

      I admire that you stressed the importance of clear theoretical foundations. I'd feel totally unable to do that... But this endeavour is also important for your neighbors in linguistics. However a theory of perception is certainly not the only foundation linguistics can rely on.

      A lot can be done using corpora and corpus distributions, and considering language as a disembodied structure. And I don't mean Chomskyism, I mean the advances in computational linguistics... You said there's no theory to relate computational text mining approaches to comprehension processes? Granted, but there's an intuitive path through several theories. Here is a part of it, as I see it in my non-scientific way:
      1) corpus of texts is a form of externalized viewpoints, statements about the world through the prism of a shared vision [Foucault 1969]
      2) NLP allows us to do observations on (1) pertaining to discourse and syntactic structure [text mining]
      3) this in turn provides us with probabilistic word-to-word relations (unstable, I grant it) [semantic graphs]
      4) these structures are akin to the global semantic structure of word relations
      5) data mining gives us clusters emerging from (4), these clusters being related to semantic primitives (as per prototype theory, [Geeraerts 1987])
      6) from clustering to dimensionality there's only a small step in terms of mathematical formalism

      You may say this relates vector semantic studies to language use, but not language comprehension... True! I'm just trying to supply you with ideas from other fields...

      Still, it's an interesting fact that the emerging dimensions in that kind of distributional semantics approach revolve around human actions and the body (as do the sets of primitives à la Wierzbicka or central metaphors à la Lakoff)

      So in the end, my intuition is that the psychological stance (ego's embodied viewpoint, like an insect perceiving only heat and acting on it) goes through a disembodied phase when it is ex-pressed (language as an artifact external to me, with representations I inherit from the community). But over the ages this structure must take the shape of its original embodiment (language as a configuration of available lexical relations revolving around human-centered primitives) and comes back to embodiment (language use/comprehension as if words were another perception)... but it's difficult to formalize this connection/cut/reconnection process, sorry!

      In a nutshell, not only has environment been neglected, but language has a dual (internal/environment) nature due to ways humans interact, and it has its own structure because it's an artifact, and this massive scale structure itself is modeled after the embodied needs of its users. Regularities in the externalized phase, THAT's what I call 'representations'. It's not contradictory with your view at all, it's just an important addendum.
      That said, it's great to have dedicated people like you work with such rigour on foundational aspects and I'm very happy that you do it in an internet kind of way. And happy to have found your excellent blog... I'll stay tuned to read your next posts.
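
      (A drastically simplified toy version of steps 2-5 above, with an invented four-sentence corpus; real text-mining pipelines are of course far richer. Words that occur in similar contexts end up with similar co-occurrence vectors, which is the raw material for the clustering in step 5.)

          from collections import Counter
          from math import sqrt

          corpus = ["the dog chased the cat", "the dog bit the postman",
                    "the cat chased the mouse", "the mouse ate the cheese"]

          def cooccurrence_vector(word, sentences, window=2):
              # count the words that appear within a small window of the target word
              counts = Counter()
              for sentence in sentences:
                  tokens = sentence.split()
                  for i, tok in enumerate(tokens):
                      if tok != word:
                          continue
                      lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
                      for j in range(lo, hi):
                          if j != i:
                              counts[tokens[j]] += 1
              return counts

          def cosine(a, b):
              dot = sum(a[k] * b[k] for k in set(a) | set(b))
              norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
              return dot / norm if norm else 0.0

          dog = cooccurrence_vector("dog", corpus)
          cat = cooccurrence_vector("cat", corpus)
          print(cosine(dog, cat))   # "dog" and "cat" share contexts, so their vectors are similar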

    5. I'm glad you're enjoying the blog and I definitely appreciate different perspectives on this content. I have a lot of respect for NLP and the amazing amount of information available in the structure of language. I spent a few months last year working on a system that would tag POS, identify multiple word meanings, and create individual lexical neighbourhoods for each meaning. It is astonishing how far you can go with this approach (of course this is what Landauer and Dumais thought when they claimed that LSA was an analogue to language learning). I certainly keep this in mind when I think about what structures are available in language to be perceived and, therefore, to convey meaning. I also agree that language is an artefact and a tool (a la Daniel Everett) and I think this notion works well with the functionalist perspective I've adopted.

      I like the way you summed this up: "...[language] has its own structure because it's an artifact, and this massive scale structure itself is modeled after the embodied needs of its users". I do see where you're coming from in terms of representations in the external phase. Since "representation" is such a baggage-laden word in cognitive science I choose not to use the word at all.

  11. Thanks for your answers... Sorry if I underestimated the importance of the cultural for you. I'll be happy to read your next post (yet I'm still convinced there's a dual embodied/disembodied nature of language...)

  12. Well, one big difference between what you are trying to do and what Skinner was trying to do is the organism of interest. Skinner called his book 'Verbal Behavior' because it was about the behavior of saying things (writing things, signing things, etc.). It was not a theory of language, but an extension of operant theory to explain why certain people say certain things in certain situations. Your theory, it seems, will be focused on perception, i.e., on why listeners do certain things, including responding with more words.

    This should lead to a difference in perspective, but whether it leads to incompatibility is a separate issue. Also, even if it is incompatible with Skinner's approach to verbal behavior, that doesn't mean it is incompatible with radical behaviorism. So far as I can tell, the first use of 'radical behaviorism' was by Mary Calkins in 1916, long before Skinner came on the scene.

    Eric

    Replies
    1. As always, Eric, I appreciate your thorough knowledge of history!

  13. the question is only how you come to use the word and how others come to respond

    I'm curious how far Eric, Sabrina, or anyone else is willing to go down this path. My inclination is to go pretty far (perhaps too far?). Hence, although I'm on board with the claim that "the mechanism of learning this meaning is identical for both types of event" (those that unfold in accordance with natural laws and those that unfold in accordance - more or less - with social conventions, eg, language), I'd go even further.

    I take the meaning of a linguistic event to be the response (possibly latent) intended by the linguistic agent. So, the agent's objective is not to convey "information" to someone but to effect action by someone, immediately or subsequently. If the target of the event responds as intended, the meaning was "understood". That view seems to extend easily to the concept of the "meaning" of any stimulus caused by an agent, eg, perceiving a bouncing ball hit by a tennis opponent. And it can be extended to allow attribution of "meaning" to an agentless event by assuming perfect "understanding" on the part of the perceiver so that the virtual meaning of the event can be interpreted as being whatever response results.

    In developing tennis skills, one has to learn to produce a multitude of instances of a bouncing ball in an attempt to cause desired responses from an opponent - ie, to master the "meanings" (in the above sense) of produced bouncing tennis balls: those hit to/from the forehand/backhand, with/without top/back/side spin, long/short or high/low on fast/slow surfaces, etc, all in a multitude of complex and rapidly time-varying contexts. Is the process of learning how to play tennis at a given skill level really dramatically different from learning to play a similarly challenging language game with comparable skill, ie, to skillfully produce linguistic "bouncing balls"? In both cases the production clearly needs to be "flexible according to context [] and goals [,and] to be expandable according to changing needs". I'm less clear whether in either case they need to be "portable" in the sense of allowing a player to "access information about things that are not currently in the environment". The intentional idiom is convenient for those who insist on including mentalese in their vocabularies, and background knowledge of a tennis opponent's behavioral patterns can be an advantage. But is either necessary? Finally, I can't quite parse "flexible according to [] culture" and therefore have nothing to say about that proposed distinguishing feature of linguistic events. (All of this also applies, of course, to responding to stimuli.)

    I fail to appreciate the significance of the Smith and Gasser quote. While an utterance considered only as an abstract sound obviously conveys no information about its intended referent (if any), that seems quite irrelevant to meaning. At the basic level appropriate to consideration of "dog", if we insist on ascribing meaning to the stand-alone word, it is initially merely an association between simultaneous experiences of neural activity due to visual stimulation consequent to light reflected from a present dog-object and aural sensory stimulation due to utterance of the word. Only later does meaning in the more complex sense of tool use emerge. Why in either case would anyone expect a sound per se to be similar (using any measure) to an object? In any event, this evolution of skilled word usage seems to me to parallel closely the development of a skill at handling bouncing tennis balls.

    Replies
    1. Hi Charles,
      I agree that we might want to go pretty far down the road of construing linguistic meaning as action-based - we want something in the world to change, whether in the environment, in someone else, or in ourselves. The thing I'm trying to work out right now is how this construal accounts for our subjective experience of being able to reflect on words and consider their meanings (e.g., I can think of the word dog and then say to myself that this is an animal with four legs and fur). How can we use functionalist language to talk about these experiences? I don't doubt that it's possible, but I haven't come up with anything yet. I really like the tennis example you give. I think it does a good job of illustrating the trade-off between flexibility and stability that characterises word use, but does it in the context of action.

      I also agree that "information" is a bit problematic, but, like Andrew, I'm fine with it as long as we're disciplined about how we use the word.

      The "flexible according to culture" phrase reflects the fact that language isn't just giving voice to some universal human way of thinking about the world. One example can be seen in the post I wrote awhile back about the (non)universality of the mind (http://psychsciencenotes.blogspot.co.uk/2011/11/how-universal-is-mind.html). So, I stuck that phrase in their to remind myself that I need to think about languages besides English when developing these ideas.

      The Smith and Gasser quote is just to illustrate the distinction between perceptual information (specification relationship) and linguistic information (conventional relationship). The idea that a word becomes associated with a stimulus is quite a familiar one, but I don't think it holds up well. For one thing, I don't like the implication that you have two things - a word and a meaning - and that these two things have to become associated. The whole ecological idea is that with learning you get direct access to meaning via information, not access to a separate associated meaning. For another thing, the idea works best for concrete nouns that we have lots of direct experience with. But, we can easily talk about things that we've never seen (I've never been to New York, but I know how to use it in a sentence), about impossible things (unicorns), and about things that don't have clear perceptual referents (hope). This is well-worn terrain and much of cognitive linguistics has devoted itself to coming up with explanations for how we do these things given the assumption that basic word learning is grounded in associations with environmental stimuli. I think a function-based perspective that assumes that a trained language user has direct access to meaning can bypass these problems. In the next post I introduce the idea of perceptual-linguistic systems. I will argue that, for language, meaning resides at this system level rather than chiefly with individual words. Of course, I still need to figure out how to talk about our subjective experience that individual words have meaning...

    2. I don't see why the fact that you can "think of the word dog and then say to myself that this is an animal with four legs and fur" is a problem. You have learned to respond to a large variety of external, context-dependent stimuli by uttering sentences that use the word "dog". Ie, those responses are in essence latent (see note below) and awaiting an appropriate stimulus to cause execution. I think of thoughts as including covert execution of latent utterances (ie, saying them to oneself), possibly in response to stimuli that are partially or entirely internal. Your quoted statement began with the implicit assumption that the thought "dog" that initiated the internal description of a dog just popped up. But I think of the process as continuous, so that the thought "dog" also must have been a response to some stimulus.

      And you may have misinterpreted part of my comment. Just to be clear, I'm completely in the Wittgenstein-Davidson-et al camp in considering that only sentences have meaning. I'm assuming that at the early stages of development a child merely learns to associate aural and visual stimuli. Later, the child learns to mime the aural stimulus by uttering a replica of it. Even later, the child learns that certain utterances can cause entities in the environment (AKA, care givers) to behave so as to satisfy the child's desires. Only at that point do those utterances begin to acquire meaning (in the sense of intended responses by a hearer), and then only if interpreted as one word sentences, ie, commands.

      In this utterer-response (or more generally, actor-change) approach to meaning, "truth" has no role; even lies, or statements about nonexistent entities like vampires, have meaning. In fact, I wonder if "meaning" isn't another word that's more trouble than it's worth, at least in formal conversation. Couldn't we just directly address the change that an actor intends to effect?

      Note:

      Here "latent" is in the sense of "resonant neuron networks" as we discussed in an exchange last year. I didn't quite follow your exchange with afauno, but the numbered items in afauno's 5/21 20:19 comment seemed somewhat reminiscent of the network idea. The "corpus" in item 1 might correspond to the learned verbal stimulus-response pairs that comprise the network. The "semantic primitives" of item 5 might correspond to the "stored" stimulus-response pairs (analogs to filters matched to specific verbal stimuli and causing specific responses). I'm guessing (based mostly on a knee-jerk response to the word "vector") that one benefit of a "vector semantic analysis" is to provide a concept of distance. This is provided in the network paradigm by the curve of responses to stimuli that deviate from the stimulus to which the filter is matched. However, the two ideas diverge in that network paradigm is intended to suggest how language comprehension might work.

  14. Nitpicking:

    1. While I understand that the concept of stimuli having "information content" is central to eco-psych (or at least the version to which you and Andrew subscribe), "information" still strikes me as being one of those overloaded terms that may cause more trouble than it's worth. Eg, isn't it likely to lead the unwary to think in terms of representation and computation - both perhaps required for third-person analysis and simulation but not necessarily for first-person implementation? Why not something more neutral like your own "[dynamic] structure in an energy array"?

    2. Trying to distinguish law-driven stimuli and language by considering the latter to be a "medium" seems potentially confusing. In the post, transmission of language is said to be via "modalities", thereby leaving "medium" available for describing language itself. But I think most comm theory people would consider such a physical "modality" to be the communication "medium". In terms of layered comm protocols (which - intended or not - is the flavor of that part of the post), the physical layer is at the bottom of the protocol stack, and language arguably isn't even in the stack. Consistent with the protocol stack metaphor, one could indeed call language the "medium for meaning", but I see no benefit to doing so that would offset the possible consequent confusion.

    Replies
    1. isn't it likely to lead the unwary to think in terms of representation and computation - both perhaps required for third-person analysis and simulation but not necessarily for first-person implementation?
      This is always a risk, and part of our job is to maintain discipline. Language isn't my area and I fall into essentially representational turns of phrase really easily as Sabrina and I have talked about these posts, so it's easy to do!

      Why not something more neutral like your own "[dynamic] structure in an energy array"?
      I'm working on a series of posts on specification and some recent work on why you might not need it for direct perception (Withagen, Chemero etc). One of the moves (Withagen & van der Kamp) is to actually separate the term 'information' from the structure in an optic array, and only let structure become information when it is involved in a particular perception-action context by someone trying to do something in particular. It's entirely possible this might work, although I'm still reading and preparing on this one. Anyway, I just wanted to highlight that this is certainly an option floating around.

    2. Despite what I said earlier, I kind of like the idea of referring to information with a more neutral word or phrase. I think "structure in an energy array" is nice and straightforward. The thing that "information" brings is the assumption that the structure in the array is relevant to the task at hand (information for..) and potentially perceivable. There is lots of structure that doesn't meet these requirements.

  15. In other words, I claim that cultural/conventional phenomena in language have rules of their own. ... One of the properties of this social artifact is to classify the reality into shared categories.

    afauno -

    In case you don't already know, Donald Davidson has addressed this idea at length, most notably here.
