Sabrina has been working on a series of posts on an ecological analysis of language (here, here and here, plus more on the way). Her focus has been on the nature of the information for language, and the similarities and differences this information has with the information for perception. We're working some of this analysis into a paper, and writing that got me thinking about this in a little more detail.
Our main move on language is to reject the assumption that language is a qualitatively different kind of task than perception & action. The goal is to find ways to talk about these behaviours using the same basic analysis tools. Part of that is to draw the analogy to how perceptual information gets its meaning and use that to describe how linguistic information gets its meaning.
What I want to do here is just map this analogy out a little, because I ended up in an interesting place and I want feedback from people who know more than us on this about whether this is just plain crazy. In particular, if you know anything about the relationship between neural dynamics and the dynamics of speech, we think this is going to be relevant!
How perceptual information gets its meaning
When we talk about meaning, we're asking how an organism can come to learn what an information variable is information about. Perceptual information is about the underlying dynamics of the event that created the information.
What I mean by this is that events in the world can be distinguished and identified only in terms of their dynamics. A dynamical description is one that describes how something changes over time, and which includes reference to the underlying forces that caused that change. A fly ball in baseball looks and acts the way it does because it is an example of the projectile motion dynamic. The dynamical equation describing projectile motion events includes terms for the size and mass of the object, the initial speed and angle, gravity, and drag (air resistance). You can use this description to plot out exactly how the position of the ball changes over time.
A dynamical event such as a fly ball creates information by interacting with, say, light; but this information is only kinematic, not dynamic. A kinematic description of an event is one that describes how something changes over time, but without reference to the underlying forces. In practical terms, this means that you can use variables like time, position, velocity and the other temporal derivatives of position but you can't use variables that include mass or force. The visual perceptual information for a dynamical event is therefore a pattern in the optic array that can be described in terms of things changing over time.
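To make the distinction concrete, here's a minimal sketch (in Python, with invented parameter values) of the two levels of description. The simulation step is dynamical - it needs mass and forces - but the record it produces, position over time, is purely kinematic, and that kinematic pattern is all that ends up structuring the optic array:

```python
import numpy as np

# Dynamics: the underlying description includes mass and forces.
# Illustrative values only - a fly ball with quadratic air drag.
m = 0.145   # ball mass (kg)
g = 9.81    # gravity (m/s^2)
k = 0.005   # drag coefficient (kg/m), made up for illustration
dt = 0.01   # integration time step (s)

# Initial conditions: launch speed and angle.
v = np.array([30.0 * np.cos(0.8), 30.0 * np.sin(0.8)])  # velocity (m/s)
p = np.array([0.0, 1.0])                                # position (m)

trajectory = [p.copy()]  # the kinematic record: positions over time
while p[1] > 0:
    drag = -k * np.linalg.norm(v) * v   # drag force, proportional to v^2
    a = np.array([0.0, -g]) + drag / m  # F = ma: mass appears only here
    v = v + a * dt
    p = p + v * dt
    trajectory.append(p.copy())

# 'trajectory' contains only positions and (implicitly) their time
# derivatives - no mass, no force. This is the kinematic pattern an
# observer has access to; the dynamics above are what produced it.
```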
It turns out that it is possible for a kinematic pattern to specify a dynamic property. What this means is that an aspect of the dynamical event creates one and only one kinematic pattern as it unfolds over time. If this is the case, detecting the kinematic pattern is equivalent to perceiving that aspect of the dynamic event, and this is the mechanism for direct perception of the world.
Information is all we have access to, and you never get to peek behind the curtain to check what the dynamics are up to. So in order to learn what a given kinematic pattern means, you have to use that pattern to control some action. If that pattern lets you, say, intercept a fly ball, then that pattern comes to mean the catchable-ness of the ball (the affordance). In other words, perceptual information comes to mean the dynamics of the event that created the kinematic pattern.
How linguistic information gets its meaning (the analogy)
Linguistic information is also created by a dynamic event, but a much more complicated one. Take speech (though the idea works just as well for writing and gesture). The information that is created consists of kinematic patterns in the acoustic array. These patterns are caused by the underlying dynamics of articulation (how the lips, tongue and vocal cords change over time). However, and this is a big however, linguistic information does not come to mean the dynamics of articulation. When you detect a pattern in the acoustic array, you don't perceive what your conversation partner's throat is up to - you perceive the meaning of the word that was produced.
Remember, the goal is to apply the analysis of how perceptual information gets its meaning to how linguistic information gets its meaning; but we've run into a mismatch. My solution is to remember that the dynamical system producing speech is actually much more than just the articulators. A critical player in speech is the brain, and one of the main reasons the articulators move the way they do is that this is what happens when you couple the neural dynamics of language to an articulation system.
The crazy notion that emerges from this analysis is that linguistic information comes to mean the dynamics of the broader system, the dynamical system formed by the coupling of language related neural dynamics to an articulation system. This means the analogy holds (the kinematic information is about an underlying dynamical event in the world).
Initial problems
For perception, events such as projectile motion have the dynamics they do because of physics (see Turvey, Shaw, Reed and Mace, 1981 for the details of this analysis). The dynamics of projectile motion is simply a description of how an object changes its state over time when it has been fired off with an initial speed and angle and then left to do its thing.
This is not true for language. Why do the neural dynamics have the form they do? One crude answer from applying the analogy is that they are like this because that's how an extensively trained nervous system changes its state over time when it's producing that sentence rather than another. Obviously this isn't all that satisfactory, but it's all I have just now.
It's also even more complicated than this, because the dynamics from which linguistic information arises also include the conversational and social context, and so on. It's possibly an intractable mess, although people are applying dynamical systems to all kinds of tasks these days.
So this analogy only gets us so far; but it does push the ecological analysis quite a long way into the problem, which I like.
The coupling between neural and articulation dynamics
There is apparently a bit of a literature on this (thanks Tom Hartley and Jon Brock for the links). The debate in the literature right now seems to be about whether syllables can be described as oscillators. If they can, then you can start to talk about things like coupling and entrainment between syllable production and the underlying neural oscillations you can measure during speech production. A recent paper in Frontiers in Psychology by Fred Cummins is skeptical, but only because he thinks the syllable is the wrong place to look, I think; importantly, it has links to all the key papers on the topic. We'll get into that literature eventually, but right now I'm still trying to come to grips with this analysis and with whether this literature fits it and might help us.
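To give a flavour of what 'coupling and entrainment' mean formally, here's a minimal sketch of two coupled phase oscillators (a generic textbook setup, not taken from Cummins or any specific speech model; all values invented). With no coupling their phase difference drifts because their natural frequencies differ; with strong enough coupling they entrain and the difference settles down - the basic phenomenon the syllables-as-oscillators debate turns on:

```python
import numpy as np

def simulate(coupling, steps=20000, dt=0.001):
    """Two phase oscillators with Kuramoto-style coupling; illustrative values."""
    w1, w2 = 2 * np.pi * 4.0, 2 * np.pi * 4.5  # natural frequencies (roughly syllable-rate, Hz)
    th1, th2 = 0.0, 0.0
    drift = []
    for _ in range(steps):
        th1 += dt * (w1 + coupling * np.sin(th2 - th1))
        th2 += dt * (w2 + coupling * np.sin(th1 - th2))
        drift.append(th2 - th1)
    # If entrained, the phase difference stops growing; if not, it keeps drifting.
    return drift[-1] - drift[len(drift) // 2]

print(simulate(coupling=0.0))  # large: the oscillators drift apart
print(simulate(coupling=5.0))  # near zero: the oscillators have entrained
```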
If anyone has any bright ideas, questions, comments, papers that might help/hinder this analysis, anything at all, let us know in the comments. This is all a work in progress!
References
Cummins, F. (2012). Oscillators and syllables: A cautionary note. Frontiers in Psychology, 3. DOI: 10.3389/fpsyg.2012.00364
Hartley, T. (2002). Syllabic phase: A bottom-up representation of the temporal structure of speech. Progress in Neural Processing, 14, 277-288. Singapore: World Scientific.
Turvey, M. T., Shaw, R. E., Reed, E. S., & Mace, W. M. (1981). Ecological laws of perceiving and acting: In reply to Fodor and Pylyshyn (1981). Cognition, 9(3), 237-304. DOI: 10.1016/0010-0277(81)90002-0
The Justice Algorithm
Your work on language, I think, would probably greatly benefit from reading some of Mark Bickhard's work on language. It was a problem he spent many years working on once he developed his interactive model of representation (I know you don't like that word, but it is a notion of representation radically different from the usual correspondence or "encodingist" models and is entirely consistent with radical embodied cognition). His theorizing overlaps quite a bit with what you have been discussing on this blog with respect to language.

As you say, we have access to information, but we can't know what it is information about (that is, we don't know what caused it). When we engage in interactions with information that successfully satisfy the conditions for engaging in other interactions, then the interactions themselves become meaningful as indicators for further interaction*. Taking the outfielder as an example, a particular visual scan indicates that catching is afforded by moving backwards and to the left and holding up his glove. In turn, catching the ball indicates that throwing is afforded, etc. The outfielder's actions are directed toward transforming the situation, and the relative success or failure of his actions changes how it is that he characterizes the situation (a characterization that is constituted as a web of conditional interactions). If he catches the ball, then the batter is out. If he drops it, then he has to pick it up and throw it to prevent the batter from getting on base. Etc. So, his actions don't just recharacterize the situation for him, but for everyone else on the field through their own webs of conditional interactions. For instance, for the first baseman, standing in position to catch the ball is conditional on seeing the outfielder drop the ball, but not on seeing the outfielder catch the ball.

Speaking is no different. Assuming that the people you are speaking to know how to use the language you are using (via situation conventions), you engage in speech acts intended to change how others characterize the situation (that is, you attempt to manipulate their perceptions of the affordances available and thus what actions are appropriate for the situation). For instance, the types of successful responses available when I say "I love you" and "I hate you" are entirely different (although there could be overlap - for instance, both might indicate that running away is the right thing to do!).
Anyway, you might find the references below useful. The article is the shortest and probably easiest to read, but the 1980 book is where the model was most thoroughly explicated.
*This is, basically, the model for representation. Interactions are what constitute representation when they function as indicators for potential further interaction. It's consistent with radical embodied cognition because the kind of representation available to an organism depends on the kinds of actions it can engage in, which depends on the organism's body (which includes the organism's nervous system). This kind of representation is "about" the environment in the sense that it (that is, the interaction) is presupposed to be appropriate for the environment. Environments (and situations) are implicitly defined, rather than explicitly defined, in terms of potential interactions (and the webs of conditional interactions implied by them).
Bickhard, M. H. (2004). The social ontology of persons. In J. I. M. Carpendale & U. Muller (Eds.), Social interaction and the development of knowledge (pp. 111-132). Mahwah, NJ: Lawrence Erlbaum Associates. (PDF: http://www.lehigh.edu/~mhb0/SocOntPersons.pdf)
Bickhard, M. H. (1980). Cognition, communication, and convention. Praeger.
Campbell, R. J. (2011). The concept of truth. Palgrave Macmillan.
Adrian,
Thanks for the interesting comment! I suspect Andrew and Sabrina will be even more hardline than you think. While the representations you describe do sound like they could be compatible with some forms of embodied cognition, the ecological psychologists are about as anti-representation as you can get. I, for one, am not sure there is any value in talking about mental representations, though I am fine with the notion of physical representations - as when a picture of Obama re-presents what Obama looks like. Some eco-psych people even get twitchy about that!
Eric
Well, an argument can be made (and is in Bickhard and Richie's book on Gibson - here's a pdf if you are interested: http://www.lehigh.edu/~mhb0/BickhardRichieRepresentations.pdf) that the problem eco-psych has with representation is with correspondence (encodingist) models of representation, not with representation per se. That is, the argument Gibson and others have launched against representation is an argument against correspondence (encodingist) models of representation. The arguments are not, however, arguments against an action-based model of representation, such as Bickhard's interactivist model. To the contrary, Gibson's model of perception is an interactive model of perception, and interactivism can be thought of as an extension of it to other areas of cognition.

As I said before, it is actions (really, interactions) that are representational in that, if successful, they indicate the possibility of other interactions. For a frog, seeing a blurb cross its visual field indicates the opportunity to flick its tongue and eat, but it does so functionally, not by creating some kind of structurally isomorphic stand-in for the blurb, as a correspondence model would suggest. The visual interaction itself is indicative that tongue flicking is appropriate (which, unlike with correspondence models, does not require that the indication is correct - the blurb could be a stone flicked by a scientist and not something nutritious like a fly). What is being represented (that is, re-presented) is an action (or an interaction), not a set of correspondences to something in the environment. The aboutness, of course, is something in the environment, but the aboutness arises as a function of the action's appropriateness for the environment rather than being some kind of picture of or correspondence with or stand-in for it. As Richard Campbell argued, you can call that something other than representation, but why? Why not just argue that correspondence models are wrong?
Anyway, the point was not to argue in favor of interactive representation, but to suggest that Bickhard's model of language is massively congruous with Andrew and Sabrina's project, and it would be highly useful for them to check it out. I only mentioned representation because I know they are allergic to the word "representation" and might discount his work on the grounds that he talks about it. However, his work is entirely consistent with radical embodied cognition and ecological psychology, and he covers much of the same ground (conventions, for instance), so it would be unfortunate to dismiss it on such unwarranted grounds. That's all.
Adrian:
First, thanks for the info - pointers like this are exactly what I was fishing for here.
Second, I like the sound of Bickhard's approach. We are inclined to say 'why bother calling that representation'; you correctly identify that our beef is with correspondence models but as far as we're concerned those models are just what representation means in psychology. The word therefore has baggage we don't want, so in order to "engage in speech acts intended to change how others characterize the situation" we think using that word is more trouble than it's worth :)
All that said, thanks for highlighting the meat of this work - it sounds like we should hold our nose on this use of the word and get into the content!
Hi, I was just looking through my files and found another couple of references that might be more useful than what I mentioned above, as they are specifically about language. (Note that both refs have download URLs.)
Bickhard, M. H. (2007). Language as an Interaction System. New Ideas in Psychology, 25(2), 171-187. (URL: http://www.lehigh.edu/~mhb0/BickLangInterSysNIP.pdf)
Bickhard, M. H. (1995). Intrinsic Constraints on Language: Grammar and Hermeneutics. Journal of Pragmatics, 23, 541-554. (URL: http://www.lehigh.edu/~mhb0/IntrinCon.pdf)
I have two broad, very open questions:
1) How would you compare gesture to spoken language in this account? That is, presumably the dynamics of a gestural event reflect the dynamics of an underlying neural event, and yet it feels to me as though the dynamics of gesture would lend themselves more easily to analysis from an ecological perception perspective. Both are communicative. Both, it has been argued, have grammatical form. Perhaps gesture may provide a ladder to get from action to language. Likewise, sign language and playing a musical instrument...
2) The second question is related to the above: where does the 'social' come into this account? Not to get all Wittgensteinian, but it doesn't seem as though language (or gesture or music) can 'mean' very much at all without conspecific agents and a network of these to pressurise the development and use of some particular neural-articulatory dynamic forms over others. Linguistic events, to me, seem more constrained by broadly 'social' laws (very loosely speaking) than by the physical laws that constrain events like balls flying through the air.
I am very interested to hear your thoughts.
Matthew,
Regarding your first point, there IS a literature in animal behavior trying to tackle communication from an ecological perspective, most notably in the work of Don Owings and of Nicholas Thompson. The question is where the boundary is (if there is one!) between "mere communication" and "language". Then we would need to know which category those gestures fell into.
Hi Eric,
Thanks for the pointer on the animal behaviour literature - I am not up on this so I will look it out. My feeling is that there is probably not an ontological gap between 'mere communication' and 'language', and that the task of explaining language from an ecological perspective may benefit from 'scaling up' from potentially more tractable cases of communication. I accept, though, that this may only take one so far. A deictic gesture may function like the utterance "look at that!"; the difficult cases, such as "The past is a foreign country", I'm not so sure about.
1) How would you compare gesture to spoken language in this account?
I would treat it the same way, as you describe, just replacing 'articulator dynamics' with 'gestural dynamics'. It would produce visual rather than acoustic information, but the goal is always to keep the analysis in the same form.
Now, you rightly point out that I don't have the social aspect in here. This is true, and this is needed to ground the meaning. The meaning of perceptual information is grounded in the physics of the relevant event; the meaning of linguistic information will eventually be grounded in social convention. This analysis only pushes the search for meaning back as far as a trained nervous system, so there's clearly work to do. I was just curious to hear people's reaction to this basic analysis at this point.
The problem is always this: there's nothing in the word 'dog' that makes it get attached to the object dog. The relationship is arbitrary. However, Sabrina has been arguing (and I agree) that while the initial link may be arbitrary, that link then gets actively maintained by the language community in a manner analogous to the way physics actively maintains the link from event to information. That linguistic maintenance is less stable, but then so is language, so that's OK. Also, once one link exists it structures related links; words share roots, etc.
So your point stands and is entirely correct, but we think we can just keep pushing the analogy and keep talking about the whole thing in the same terms. That's the goal - keeping language and perception-action as the same kinds of things.
Andrew and Sabrina -
I'm all for thinking of processing speech as just a special case of the general task of processing perceptual input. The question then becomes where in the spectrum between psychology and physics one should focus attention. Injecting a concept of "meaning" seems clearly too far toward psychology, while looking to the physiological "dynamics" of speech production seems to me too far into the physical details. So, I'd like to list the features of speech processing on which I think we agree, then identify features on which we seem to disagree and reasons why.
1. We seem to agree on a Quinean stimulus-response paradigm for thinking about perception in general and speech processing in particular.
2. I think of stimuli in terms of the resulting patterns in neural activity. Your paragraph contrasting kinematic and dynamic patterns seems a bit confusing (I suspect there's a typo or two), but I gather your "kinematic patterns" roughly correspond to my neural activity patterns. Here, I'll go with your term.
3. We seem to agree that the objective of perceptual processing is to produce responses - which I refer to as "behavioral dispositions" to emphasize that they may be latent rather than immediate.
4. I assume we agree that some responses to stimuli must be learned, either via direct training by family/community or indirectly by trial and error - not that every response must be preprogrammed, only that a priori learning of a (large) set of stimulus-response pairs is required.
5. I assume that extraction of the "information" in a kinematic pattern is successful (its "meaning" is understood) if the pattern is sufficient to support determination of a response. Eg, in the case of a simple pattern, it might be sufficiently close (in some sense) to a learned pattern that the behavioral disposition paired with the stored pattern suffices.
It is at this point (probably before) that we diverge with respect to speech processing. I can understand why "projectile dynamics" (as manifest in kinematic patterns) might be useful in analyzing perceptual tasks like the fly ball. If one assumes the "tracking" (as opposed to the "predicting") approach to reaching a point of intersection, then the processing must continue tracking those dynamics/kinematics up to the point where the fielder no longer needs to relocate (ie, to the point at which Adrian's succession of interactive tasks moves on to the task of actually catching the ball). One might call reaching that point "achieving perceptual sufficiency". In those terms, the frog achieves perceptual sufficiency when its perceptual processing has reached the point where its tongue flicking is likely to intersect the path of the "blurb". But I contend that speech perception isn't dynamic like tracking but instead is more like predicting or pattern recognition. Perceptual sufficiency can be achieved long before a speech event has been completed (see note below), which seems to make the detailed dynamics of the production process largely irrelevant.
Note: If you doubt this, google "who said [small fragment of your favorite movie line in quotes, ie, entered as a phrase]"; eg, 'who said "what we"', 'who said "make my"', 'who said "frankly"'. Some context-dependence seems necessary, so google must assume a movie context for such queries; for us, context is often explicit - eg, "Do you remember which actor said ... ?" We presumably can do at least as well as google's computers.
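A toy version of that google test (invented table, obviously nothing like the actual mechanism) makes the point that a small fragment often suffices to select the whole stored pattern:

```python
# Notional store of learned patterns (movie lines -> source).
lines = {
    "what we've got here is failure to communicate": "Cool Hand Luke",
    "go ahead, make my day": "Sudden Impact",
    "frankly, my dear, i don't give a damn": "Gone with the Wind",
}

def identify(fragment):
    # 'Perceptual sufficiency': the fragment is enough once it matches uniquely.
    return [source for line, source in lines.items() if fragment in line]

print(identify("make my"))  # ['Sudden Impact'] - the event needn't finish unfolding
```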
----- cont'd ---
1. We seem to agree on a Quinean stimulus-response paradigm for thinking about perception in general and speech processing in particular.
I believe you, but I don't know the details of this approach off the top of my head. Can you summarise?
2. I think of stimuli in terms of the resulting patterns in neural activity. Your paragraph contrasting kinematic and dynamic patterns seems a bit confusing (I suspect there's a typo or two), but I gather your "kinematic patterns" roughly correspond to my neural activity patterns. Here, I'll go with your term.
Yes there were typos, sorry - now fixed.
By kinematics I'm being very literal. A kinematic variable is one which changes over time but cannot include mass in the units. A dynamical system is the thing in the world that produces this pattern, and its description can have mass in the units (although it doesn't have to). At this point, I'm suggesting that neural dynamics, coupled to articulator dynamics, produce linguistic kinematics which another person can detect in order to perceive the speech event.
Points 3, 4, 5: Yes and yes and yes (Sabrina is getting into point 4 in way more detail for an upcoming post).
But I contend that speech perception isn't dynamic like tracking but instead is more like predicting or pattern recognition. Perceptual sufficiency can be achieved long before a speech event has been completed (see note below) which seems to make the detailed dynamics of the production process largely irrelevant.
I agree with this fact of the matter. Of course, a skilled outfielder doesn't actually have to watch the whole event to know whether the ball is catchable; when I played 1st base in softball I got to the point where I could see very quickly whether a fly was heading to the outfield or not. Event structure (both the underlying dynamics and the related kinematics) extends over space and time, and the kinematic structure at the beginning can come to be informative about how the rest of the event will unfold, especially when the underlying dynamics are stable enough in the way they unfold (as in projectile motion or, in many cases, language). See this post on EB Holt.
The dynamics of how the event actually unfolds do matter; perceiving what's coming up on the basis of what's happening now is a little risky (less so in perception where the dynamics are more compulsory, but think about garden path sentences in language).
Can you summarise [a Quinean stimulus-response paradigm]?
At a high level, it's just a matter of focusing on the sensory input itself (eg, kinematics) as opposed to focusing on the source of the input, and taking the objective to be determining responsive actions instead of, for example, constructing representations. Dynamics are, I take it, a feature of the source, but when the dynamics and the kinematics are isomorphic, one can focus on the latter, thereby fitting the stimulus-response paradigm.
At a lower level, the answer is a simple "no". I'm just finishing up Hylton's "Quine", which has been a tough slog, and I won't really be able to put it all together until I do my mandatory reread (if even then - really poor reading retention).
OK; that sounds like we're in the same ballpark. Just checking :)
--- cont'd ---
To repeat, I enthusiastically support your general direction - thinking of speech processing as just a special case of perceptual processing - but have doubts about your specific path. Here are some problems I see:
The information that is created [by speech] is kinematic patterns in the acoustic array. These patterns are caused by the underlying dynamics of articulation (how the lips, tongue and vocal cords change over time).
We understand artificial speech just fine despite there being no physiological dynamics involved, just computer generated sounds. A problem with the analogy may be that the relevant "dynamics" of successful speech are largely interpersonal, hence psychological. Anomalous monism (rightly or wrongly) suggests that they are therefore not subject to strict physical laws.
the dynamical system producing speech is actually much more than just the articulators. A critical player in speech is the brain
The question is how important such production details are in determining a behavioral disposition in response to a stimulus. In determining the detailed dynamics of a ball in flight, critical factors are the composition and trajectory of the bat, the point and angle of impact on the ball, et al. But does the fielder need to know any of those dynamics in order to move so as to intersect the ball's flight? OTOH, in dealing with a spinning tennis ball such factors are important. But they tend to be gleaned partly from perception of the hitter's stroke and therefore are essentially contextual, or are detected from the ball's trajectory and therefore are part of the kinematic pattern. I suspect that features of the larger "system" that includes the brain are similarly separable from, or embedded in, the kinematic patterns consequent to heard speech.
[the neural dynamics] are like this because that's how ... an extensively trained nervous system changes its state over time when it's producing that sentence rather than another.
I agree with this; it's part of an explanation of how we learn to respond to stimuli by speaking. Presumably, we create (notionally) tables of recognizable stimuli and motor muscle commands that can produce a speech event. But I don't see how this enters into the hearer's complementary task of determining an appropriate behavioral disposition. The hearer also has learned such a (notional) table and again (for a simple utterance) just does a table look-up using a combination of kinematic and contextual patterns. [By "notional" I mean to emphasize that I'm not suggesting an actual implementation, just a heuristic aid.]
We understand artificial speech just fine despite there being no physiological dynamics involved, just computer generated sounds.
This is the language equivalent of virtual reality: a system that creates the right information using a different underlying dynamic. If you get your information right, the system never notices, because information is all we have. Another way to say this is that you have described an equivalent configuration, and Runeson's already demolished this as a problem for ecological psychology :)
I do take your point that the relevant dynamics are going to entail more than articulation or neural activity. As I said to Eric, I think my analysis is heading in the right direction but it's incomplete.
The question is how important such production details are in determining a behavioral disposition in response to a stimulus.
Absolutely. What the system actually cares about is always an empirical question. But you have to constrain your search, and you do that by characterising the relevant task dynamics and identifying the full set of kinematic consequences. You then poke the latter one at a time until something breaks.
You can do more than 'glean' things from perception. The dynamical task analysis helps, though: I can say that the outfielder doesn't care about the composition of the bat etc because these factors get 'folded into' the outcome 'release angle and velocity', and I can say this because the dynamics of projectile motion don't include terms for the production of those variables. (How a person produces those variables is an interesting question in its own right, but it's not part of the task facing the outfielder.) I'm agreeing with you, I think; I'm just trying to tidy the language.
But I don't see how this enters into the hearer's complementary task of determining an appropriate behavioral disposition.
Right. This is the remaining problem of 'why do the neural dynamics have the meaning they have'. I don't have an answer for this. Perceptual information means what it means because it's about a physical event and the meaning is compulsory. Language doesn't work like this. However, at a first pass, language is grounded in a culture that uses it in a particular way. So there's no reason why we call dogs 'dogs', but once we do, that arbitrary link becomes sustained and maintained by the language community. You can change that link because it's arbitrary, but there will be resistance to that change, suggesting some degree of stability.
The thing I like about thinking this way is that stability, etc, are all dynamical quantities you can measure and understand. So we're back where Sabrina and I want to be, talking about language as the same kind of thing as perception.
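As a toy illustration of what that measurement could look like (a standard naming-game-style model, not our own analysis; the names are invented): agents begin with arbitrary names for an object, and each interaction propagates one agent's convention to another. The arbitrary initial links get actively maintained by the community, and convergence and stability become quantities you can track:

```python
import random

random.seed(1)
N = 50  # agents; each starts with an arbitrary name for the same object
names = [random.choice(["dog", "hund", "chien", "perro"]) for _ in range(N)]

interactions = 0
while len(set(names)) > 1:  # until the community shares a single convention
    speaker, hearer = random.sample(range(N), 2)
    names[hearer] = names[speaker]  # the link is maintained socially, not physically
    interactions += 1

print(f"community settled on '{names[0]}' after {interactions} interactions")
```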
So yes, plenty still to do and I agree with your caution! My replies have mostly just been about tightening up the language we're using to make sure we stay on track :)
Perceptual information means what it means because it's about a physical event and the meaning is compulsory. Language doesn't work like this.
We seem to be converging. This quote highlights the remaining disconnect. If one adopts a stimulus-response paradigm and takes the purpose of perception - whether of linguistic or non-linguistic events - to be a response, then the conclusion that seems natural to me is that the "meaning" of a perceptual event is the response "intended" by the perceived object, not dynamical (or equivalent kinematic) information about the object. The latter presumably will play a role in structuring the details of the response, but it seems to me that the "meaning" has to be at a higher level (in a sense to be discussed).
In casual conversation, we often implicitly assume that the main purpose of communication is to provide someone with "information". For example, I have a fact that I think it would benefit you to have, and I try to convey it to you. You have understood the "meaning" of the communication if you now "have" (in some sense) that fact at your disposal.
But that's a potentially misleading description of the communication process. In general, the "information" that is conveyed is (notionally) just the value of an index used to select from a list. Until there is prior agreement between sender and receiver on what items comprise the list and which index value goes with which item, there is no "meaning" (semantic content) communicated.
In terms of this (greatly simplified) model, the act of perception extracts kinematic information which is used to access an entry in a (notional) list of innate or learned behavioral dispositions. Of course, in general the "agreement" necessary for determining those entries may be implicit, either innate or learned by trial and error. But in the specific case of language, the agreement is typically (though not always) established by convention and learned via training. Meaning is successfully communicated if the perceiver acts as intended by the source of the stimulus. (Of course, in the general case, such "intent" is metaphorical.)
So, how does the indexed list of actions get created? In the case of a simple "fact", an item may be a disposition to assert the fact, and the "casual conversation" model, though incomplete, is on the right track. We need to specify what is stored where and indexed how when a fact is learned. My hypothesis is that in this simple case, at a minimum the motor neuron commands necessary to assert the fact orally are "saved", indexed by the kinematic information (in the form of neural activity patterns) attendant to the learning process.
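A toy rendering of that notional table (invented entries, purely a heuristic aid) might look like this - the point being only that a combination of kinematic and contextual patterns indexes a learned disposition:

```python
# Notional look-up: (kinematic pattern, context) -> behavioral disposition.
# Keys stand in for neural activity patterns; entries are invented placeholders.
dispositions = {
    ("pattern_fly_ball", "fielding"): "move to the intercept point",
    ("pattern_i_love_you", "conversation"): "respond warmly (or run away!)",
    ("pattern_slab", "building site"): "bring the builder a slab",
}

def respond(kinematic_pattern, context):
    # Meaning is 'understood' if the stimulus suffices to select a response.
    return dispositions.get((kinematic_pattern, context), "no learned disposition")

print(respond("pattern_slab", "building site"))
```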
The model is discrete whereas perception in general is continuous. We've previously discussed approaches to resolving this, so I'll ignore that issue in this comment.
All very sketchy, but hopefully enough to work with.
Andrew...
I'll admit I'm pretty skeptical here. My intuition is that language does not lawfully reflect the brain states in the manner proposed (either because it just doesn't, as in the super-redundancy of the swimming lobster, or because it does, but not the brain states we might care about). My bet is that Eco Psych will make the most progress with language by first doing a solid analysis of the functions of language. I think we need some solid progress on social psych and some solid progress on communication more generally, before "language" proper.
That said, everything you said sounds very intriguing, and I wouldn't be sad to read more as you wrestle with it. In particular, I like the sentence:
"The crazy notion that emerges from this analysis is that linguistic information comes to mean the dynamics of the broader system"
That seems appealing, but I suspect 1) that such a matching is idiographic to the speaker. Presumably, the broader dynamics revealed by my saying "God save the Queen" are different than the broader dynamics revealed when you say it. Also, 2) we would need to allow a very multi-level reading of "dynamics of the broader system". For example, a speaker saying "Ow!" is revealing less of their broad dynamics than a speaker saying "Are there really still voters who are undecided about Obama?!?"
My intuition is that language does not lawfully reflect the brain states in the manner proposed
I tend to agree, actually; I don't think neural dynamics are the entirety of the system generating the information. So my chain of reasoning above is incomplete, I think.
That said:
(either because it just doesn't, as in the super-redundancy of the swimming lobster,
I actually think degeneracy works in my favour. That work shows that there are many different neural configurations that can produce the same neural dynamics; so you and I can produce the same dynamics underpinning the sentence 'look out for that tiger' even though we don't share a connectome. It's the way the system evolves over time, not the mechanism producing that change, that's key here.
or because it does, but not the brain states we might care about).
This seems at least somewhat unlikely; at some point my language use has to be about conveying something I know to you, and that's going to entail some relevant neural activity. However we want to characterise what the brain is doing, it has to be doing something relevant to the task at hand.
My bet is that Eco Psych will make the most progress with language by first doing a solid analysis of the functions of language.
Sabrina's posts are certainly focused here, and I think this is perfectly good. This post is just another piece of the puzzle, potentially.
Also you're right about the dynamics that the information comes to mean; it's going to be a very broadly distributed system of neurons, culture, context, etc etc. Possibly intractable, but our goal is to just try and talk about it all using the same language as we use in perception/action.
I'd lean in the same direction as Eric on this one - if there are sufficiently reliable relationships of interest, they'll be in the broader social context rather than between anything and the brain. What use (as regards action-control) is knowing what someone's brain is up to? And why would they provide that information through speech? It seems language and speech are much more about the coordination of actions between users (this might be the same user at different times) than the coordination of actions and brains.
The Gibsonian anthropologist Tim Ingold puts it best, I think:
"We 'feel' each other's presence in verbal discourse as the craftsman feels, with his tools, the material on which he works; and as with the craftsman's handling of tools, so is our handling of words sensitive to the nuances of our relationships with the felt environment."
His 2000 book might have some useful essays for you, as regards task analyses of language, but will be a long way from the kinds of measurable specifics you'd like:
Ingold, T. (2000). The perception of the environment: essays on livelihood, dwelling and skill. London: Routledge.
What use (as regards action-control) is knowing what someone's brain is up to? And why would they provide that information through speech? It seems language and speech are much more about the coordination of actions between users (this might be the same user at different times) than the coordination of actions and brains.
Broadly I think this is true. This analysis is currently about a given act of language, so while the final grounding has to be further out in the social dynamics, the speaker is using a nervous system that is fully trained in those dynamics to convey something. So at a given moment, it may be that the task is to convey information about some brain dynamics. In the bigger picture, the meaning of those dynamics is grounded in other things.
I'm agreeing that this is incomplete, just trying to refine the description of what's going on.
Matthew -
A deictic gesture may function like [an] utterance
In the case of a primitive "language" as in W's PI §2, it seems clear that the choice of medium is arbitrary. In comm theory terms, symbols can be modulated onto any available medium: eg, uttering the sounds "slab" or "block" for aural, holding up one or two fingers for visual, tapping someone on either the right or left arm for tactile. By mutual agreement, each symbol will be paired with a behavioral disposition. Eg, in PI §2 the builder and helper have agreed that a symbol indicates the item that the helper is to bring to the builder.
In PI §6, W suggests that the meaning of a "sentence" (equivalently, a symbol) has been understood if the helper responds in accordance with the builder's intent. I have found this an attractive way of thinking about "meaning".
Once you start thinking in those terms, you recognize that even if a much more complex language has a finite vocabulary and a simple enough grammar, all sentences of the language could in principle be encoded as a finite number of symbols, each of which - by mutual agreement - could be paired with a response (eg, a behavioral disposition). Again, the symbols (equivalently, the sentences) could be transmitted via any medium. This seems to argue against taking the meaning of language to be the dynamics of the production of the sentences, ie, the "modulation" of symbols onto media.
Of course, the grammar of a natural language supports an infinite number of sentences, so "table look-up" won't work in general. But since complex sentences are often constructed from simple sentences, the applicability to relatively unsophisticated language speakers doesn't seem obviously implausible. In fact, it's implicit in the phrase "knee-jerk response".
BTW, I obviously find that to "get all Wittgensteinian" is quite helpful in these matters - as well as to get all Davidsonian, Quinean, Sellarsian, et al.
Chris Green, one of the big players in the history of psych, has a very nice series of podcasts about prominent people and events (at the bottom of the page here). I am teaching history of psych as a breadth class for the first time, and we are listening to several of them. This past week we listened to John Shook being interviewed regarding Dewey. He does a very good job explaining how one of the key elements of the American scene at the time was reversing the S-R formula, to point out that the person produced the movement that created the so-called "sensory input". He also emphasized how that reversal was key (for Dewey) in making the information "meaningful", because it meant that the stimulation was always, at least in part, about what you had done to produce the observed changes.
That same logic is certainly displayed in Gibson's work - perhaps most clearly in how he points out that the view-from-here-at-this-moment tells me at least as much about myself as about the world. That is, seeing the front of the TV as a top-heavy trapezoid tells me that I am in front of the TV, looking up at it, but that doesn't tell me much about the TV itself.
It occurred to me while listening that this is one of the things I think is missing from the TSM treatment of meaning.
---------------
P.S. Shook is a co-editor of the incipient Neuro-pragmatism book. I sent out another feeler recently as to the book's progress and will get back to you soon. Apparently, holding out for Oxford is a good way to delay publication.
P.P.S. The best part of the podcast is definitely the parallels drawn between the philosophical, psychological, and sociological implications of this way of thinking, which ties together the otherwise disparate seeming aspects of Dewey's work. It is all about how actors create the responses of the world.