
Thursday, 28 July 2011

A selection of problems with representation

As Andrew alluded to in the comments to his last post, I'm cooking up some replies to various questions that have been raised. Being relatively time poor at the moment, I'm going to take these in small bits, rather than attempt a more comprehensive reply. The first thing I want to do is consider a few of the major issues, as I see them, with the notion of representations. As a cognitive psychologist, I spent many years endorsing a representational framework, but over time I have encountered enough problems/limitations with this approach that I consider the idea of representations to be seriously flawed. As a result, I've spent the past couple of years figuring out how to do cognitive psychology without them (but that's another post). What I want to do here is go through some of the things that have convinced me to abandon representations.

1) We don't know what we're talking about when we talk about representations

This criticism goes beyond the fact that there are multiple, competing definitions of representation. It goes beyond the fact that most cognitive psychologists are never asked to seriously wrestle with their own definition of representation, to consider its historical origins, or to consider whether there are alternative approaches to cognition. This criticism is more basic and is summed up nicely by Larry Barsalou: "We have no accounts of how propositional representations arise in the cognitive system, either innately or through experience. We haven't the faintest idea of how biological mechanisms could produce abstract propositions" (Barsalou, 1993, p. 173). Regarding this quote, Linda Smith says "When we cannot imagine how our basic ideas about cognition can possibly be realized, we ought to consider the possibility that they are wrong" (Smith & Jones, 1993, p. 181).


Granted, the field has moved on since the early '90s, but we still don't know much about the relationship between biology and cognition. And, I'm not talking about neuroimaging - interpretations of imaging data are ambiguous enough that they seem to reflect rather than challenge and inform our current understanding. I'm talking about how complex learning and memory are instantiated via networks of cells in a chemical bath.

2) We objectified the problem


Representations were initially invoked as a way to explain why some behavioural responses couldn't be completely predicted on the basis of the stimuli. Based on observations of the environment and our behaviour, it looks like something internal mediates our actions. Andrew has already discussed problems with the poverty of the stimulus argument, so I won't get into that aspect of the issue - there are other examples that are still potentially relevant even with a better understanding of perception/action. For instance, I might perform better on a math test when I am happy rather than sad. Or, I might take longer to respond to a question you ask if I am distracted by other thoughts than if I give you my full attention. Put simply, it is not the case that given a particular external environment I will always respond the same way (there's a good argument to make that objectively characterising an external environment doesn't make any sense because what matters is what goes into the system, not what exists out in the world, but that's a point for another day).

Early cognitive psychologists flagged up this problem and decided to spend some time figuring out what could be happening internally to account for our behaviour. This is a good idea. But, notice that this problem implies nothing at all about the form that internal processes/states/systems/whatever should take. It does not imply that the only way to account for behaviour is via internal discrete, computational representations. And yet, this is the dominant form of representationalism today (you might not know you're a discrete, computational representationalist, but chances are that if you believe in representations, you are one).

So, what do I mean that we've "objectified" the problem? I mean that cognitive psychologists chose to address the problem of internal mediating states with a noun - representation. Nouns are for doing things to. Acting upon. Messing with in some way. If our heads are full of things, of representations, then we must do things to them to get any use from them - you need processes to act on the things. You read a book, file a piece of paper, find a picture of a loved one. By objectifying representations, we treat them like objects to be created, filed, stored, sorted, retrieved, etc. These types of verbs populate much of cognitive psychology. But, creating, filing, storing, sorting, etc. are only sensible things to do to discrete objects. If we do not have discrete representational objects in our head then many of the questions asked by modern cognitive psychologists need to be reformulated.

3) Discrete, stable concepts don't explain the most interesting and adaptive aspects of cognition


The traditional cognitive view is that we have object concepts (e.g., "dog") that contain relatively stable information about what we know about things. When these concepts are referred to in various contexts ("The dog chases the cat"; "The big, red dog") we think of basically the same thing. Of course, we're not actually thinking exactly the same thing - the stuff about dogs that comes to mind when you hear "The dog chases the cat" is not the same as the stuff that comes to mind when you hear "The big, red dog".

According to the traditional view, additional processes can modify the information about concepts that we have access to at any given time. Certain information is made "more salient" or is "weighted more heavily" based on goals or tasks. The idea is that we have core concepts, but might think of different aspects of them depending on context. Over time, the content of core concepts can be modified through learning and experience, but this modification is like editing a document - the document (representation) continues to exist, although some content may change over time.

Logically, contextual variability cannot be caused by stable conceptual representations. The variability must be caused by additional processes acting on the representation. As Smith and Jones (1993) note, this breaks cognition into "structure (stability) and process (variability)" making the task of cognitive psychology to figure out what stays the same across "different instances of a single cognitive act" (p. 182). Gelman and Medin (1993) point out that with this partition, the same set of data can be explained in multiple ways - simple representation / complex process, complex representation / simple process. This isn't good. But Smith and Jones (1993) additionally argue that stable concepts can't explain what's "smart" about cognition.

According to the traditional view, cognition is smart because it can represent concepts that reflect abstract and general information, which is most likely to be stable over repeated instances. Smith and Jones argue for a different type of smart cognition that prioritises flexibility and "fit" with a specific situation over stability. In the context of novel word interpretation, they say:

"If there is an abstract represented structure of some kind that sits behind the shape bias [children's tendency to generalise novel object terms according to object shape], it is the least interesting, least intelligent part, of the child's word-learning behavior. All the work that makes novel word interpretations smart is done by those processes that involve the specific objects and specific words at hand. It is these real-time, real-task processes that flexibly adjust attention to find the most likely referent of a specific utterance of some unknown word. Novel word interpretation is not smart because it is stable; novel word interpretation is smart because it is creatively adaptive. And for this kind of intelligence, perception--information about the here-and-now--always matters" (p. 184).  

The nub of their argument is this: If representations are responsible for stability in cognition, but the really useful bits of cognition involve context-specificity and flexibility, then why are representations considered to be so important? This argument isn't a slam dunk against the notion of representations, but it does seriously question the focus of much of cognitive psychology. And, Smith and Jones don't deny that there is stability to cognition, but they do argue against the need for computational representations to account for this stability. Considering the enormous context-sensitivity of all purported cognitive processes, stable, discrete representations don't actually appear to be a very good way of modelling cognition.

So, what else could it be?

4) Representations were invoked to solve a particular problem (see point 2), but other solutions exist

Other solutions could have been proposed that solve the problem as well, and probably better. Unfortunately, these solutions were never really explored and even now are considered by only a small minority of cognitive psychologists.

I've blogged in detail about this issue here, but here's a summary (based on Van Gelder, 1995).

The analogy: A major 18th century engineering problem was reconciling the oscillation of pistons with the rotation of flywheels. Driving a flywheel lets you generate rotative motion, rather than just the pumping motion that results directly from the pistons. In other words, figuring out how to power a flywheel with pistons lets you power a wide range of machines. The trick is getting the flywheel to turn with uniform speed. Flywheel speed varies in response to the current steam pressure and to the overall engine workload. And, both of these factors are themselves variable. A throttle valve allows one to change the pressure of the steam, and therefore control the speed of the flywheel. But, this valve has to be adjusted by just the right amount at just the right time to keep the speed uniform.

The computational solution

One solution to controlling the valve requires something or someone to measure the state of the system at various points in time and adjust the valve by a certain amount in response to those measurements. If the steam pressure is x and the workload is y, then adjust the valve by z. The first characteristic of this type of solution is that it proceeds in stages. The first stage takes measurements. The second stage applies a rule based on those measurements. Because this solution relies on two stages, there is necessarily some time lag between measurement and correction. Depending on the duration of this lag, the correction might be inappropriate for the current state of the system. This type of solution necessitates an executive – someone or something to take account of the state of the system (e.g., “if x”) and then to carry out the appropriate action (e.g., “then y”). It also necessitates measuring the difference between things. For instance, the only reason to adjust the valve is if the current speed differs from the speed a second ago.
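
To make the staged structure concrete, here's a toy sketch in Python. The engine model and every constant are made up purely for illustration - the point is the shape of the solution, not the numbers:

```python
# A toy two-stage controller: measure the system, then apply a rule.
# The engine model and all constants are hypothetical.

def step_physics(state, dt):
    # Toy engine: speed rises with steam through the valve, falls with load and friction.
    accel = (5.0 * state["valve"] * state["steam_pressure"]
             - state["workload"] - 0.5 * state["flywheel_speed"])
    state["flywheel_speed"] += accel * dt

def measure(state):
    # Stage 1: take discrete measurements of the system.
    return state["flywheel_speed"], state["steam_pressure"], state["workload"]

def rule(speed, pressure, workload, target):
    # Stage 2: apply a rule to the measurements ("if x and y, then adjust by z").
    return -0.02 * (speed - target)  # a made-up proportional rule

state = {"flywheel_speed": 0.0, "steam_pressure": 2.0, "workload": 4.0, "valve": 0.5}
target = 10.0

# The executive's loop. Because the solution proceeds in stages, the world has
# already moved on by the time each correction is applied.
for _ in range(600):
    speed, pressure, workload = measure(state)
    state["valve"] += rule(speed, pressure, workload, target)
    step_physics(state, dt=0.05)

print(round(state["flywheel_speed"], 2))  # hovers near the target, lag permitting
```

Notice that the executive sits outside the thing being controlled: nothing about the engine itself links steam pressure or workload to the valve; the link runs entirely through measurements and a rule.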

This is the type of solution implemented in most cognitive models.

The dynamic solution

There is another, radically different way to solve the valve control problem - you can couple the opening of the valve to something that necessarily varies in response to steam pressure and workload in a way that results in constant flywheel speed. By ‘necessarily’ I mean that the physical properties of this thing respond to changes in steam pressure and workload in a particular way. Such a solution responds in one step and does not require measurement. Thus, there is no time lag problem or concomitant source of error. Nor does it require an executive. Hitching the valve control to the flywheel couples the thing you want to control (the valve opening) directly to the thing that embodies the relevant sources of variance (the flywheel speed). This beautiful solution is implemented in the Watt centrifugal governor, an 18th century piece of technology that still works brilliantly.
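
For comparison, here's the same kind of toy simulation of the dynamical solution. The equations are the standard textbook form of the governor dynamics (see Van Gelder, 1995, for the full treatment); the constants are again made up:

```python
import math

# Toy Watt governor: the arm angle responds continuously to flywheel speed
# via centrifugal force, gravity, and friction. All constants are hypothetical.
g, l, r, n = 9.8, 0.3, 6.0, 6.0      # gravity, arm length, friction, gearing
theta, dtheta = 1.0, 0.0             # arm angle and its angular velocity
omega, load, dt = 2.0, 2.0, 0.005    # flywheel speed, engine workload, time step

for _ in range(20000):
    ddtheta = ((n * omega) ** 2 * math.sin(theta) * math.cos(theta)
               - (g / l) * math.sin(theta) - r * dtheta)
    # The valve opening simply IS a function of arm angle - no measurement
    # stage, no rule, no executive, no lag between "reading" and "correcting".
    valve = math.cos(theta)
    domega = 4.0 * valve - load      # engine torque through the valve, minus workload
    dtheta += ddtheta * dt
    theta += dtheta * dt
    omega += domega * dt

print(round(omega, 2))  # settles at the speed where the forces balance
```

There are no stages here and nothing that stands in for speed or pressure; the arm angle and the flywheel speed constrain each other continuously, all the time.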

Consequences for cognition

The computational solution relies on discrete computational symbolic representations. It has to measure and represent flywheel speed, steam pressure, and workload using abstract symbols. Then, it has to apply operations to these symbols in order to calculate how to adjust the valve. This output is a representation that causes the appropriate adjustment to be made by an executive (computer or human worker). The second solution, the one that describes the actual centrifugal governor, is nonrepresentational.  There are no discrete inputs and outputs, no computations performed on inputs and outputs, and no executive in charge.

While cognitive psychologists are often happy to admit that dynamical systems do a good job describing some systems like the centrifugal governor, they are hesitant to admit that dynamics might also characterise complex cognitive behaviour. So, here's a cognitive example:

According to prospect theory we compute the utility of various outcomes of a decision and select the one with the highest utility (Kahneman & Tversky, 1979). This theory clearly depends on discrete representations (i.e., of each option’s utility) and computation (i.e., calculating which option has the largest utility value). But, it is equally possible to describe decision making in terms of state space evolution in a dynamical system. For example, motivational oscillatory theory (MOT; cf. Townsend) describes oscillations resulting from satiation of persisting desires. We approach food when we’re hungry, but not when we’ve just eaten and are temporarily satiated. It’s possible to interpret this behaviour as a decision – when I’m hungry, I decide to eat. But, in Townsend's model there are no discrete states and no algorithmic processes effecting transformations on these states. There is just the evolution of the system over time. See Busemeyer and Townsend (1993) for a fully fleshed out dynamical model of decision making.
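
To make the contrast concrete, here's a toy sketch of both styles. Neither is the published model: the value function is merely prospect-theory-flavoured, and the oscillator is only in the spirit of MOT, not Townsend's actual equations.

```python
# 1) Computational style: represent each option's utility, operate on those
# representations, and output the winner.
def subjective_value(x, alpha=0.88, lam=2.25):
    # A prospect-theory-style value function (parameters purely illustrative).
    return x ** alpha if x >= 0 else -lam * ((-x) ** alpha)

options = {"eat": [(0.9, 10), (0.1, -5)],  # (probability, outcome) pairs
           "wait": [(1.0, 0)]}
utility = {name: sum(p * subjective_value(x) for p, x in outcomes)
           for name, outcomes in options.items()}
choice = max(utility, key=utility.get)     # a discrete, symbolic output

# 2) Dynamical style: no utilities and no comparison step, just coupled
# variables evolving in time.
h, f, dt = 1.0, 0.05, 0.01                 # hunger, feeding activity, time step
for _ in range(5000):
    dh = 0.2 - 2.0 * f * h                 # hunger builds; eating depletes it
    df = f * (1.0 - f) * (h - 0.5)         # feeding waxes and wanes with hunger
    h += dh * dt
    f += df * dt
# "Deciding to eat" is just the region of state space where f is high; at no
# point is a utility represented or an argmax computed.
```

In the first sketch you can point to the representations (the entries in `utility`) and the process that operates on them; in the second there is only a trajectory.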

Whatever you might think of individual models, it is clear that dynamical systems are a legitimate alternative to the traditional computational approach to modelling cognition. It is also clear that these two approaches lead to very different assumptions about the nature of the underlying cognitive system. The computational approach leads us to think about mental objects (representations) that are manipulated and transformed via cognitive processes. The dynamic approach leads us to think about components in a cognitive system evolving over time and in response to the current context.

Summary

In my experience it's a waste of time trying to convince someone who endorses representational cognition that there are no representations. For one thing, representation is such a broad and ambiguous term that it is unclear what type of evidence could exist that unambiguously rules out representations. Sometimes people just want to keep using the term, so they'll apply it to aspects of dynamical systems (e.g., the location in state space of the MOT model "represents" hunger or satiety). Using representations in this manner doesn't contribute anything to the understanding of the system, and if representations aren't doing any work, then I can't see the point of continuing to invoke them. But this is beside the point. I really wouldn't expect a representationalist to be convinced by anything I've said.

So, what is the point of this post? The point is to demonstrate that representations were originally invoked to solve a particular problem. We currently have no mechanism to explain how representations could be instantiated in a messy biological system, so the representational framework does not have any special biological plausibility going for it. There are also alternative approaches to the problem of modelling mediating states (e.g., dynamical systems) that have several advantages over traditional computational models.

Given these points, I think that cognitive psychology would be vastly improved if cognitive psychologists considered how they came to think that cognition was based on representations. Was it because they were taught from the beginning that cognition was representational? Or was it because they seriously investigated alternative approaches? Did they rule out these alternatives on the basis of data that favoured a representational approach? Or did they rule them out on the basis of simply not being able to imagine cognition working without representations?

After asking myself these questions, I realised that I had unquestioningly adopted a representational framework because that is what I had been taught and that I had never actually considered alternative approaches. That didn't sit very well with me, and after spending a few years doing my homework I was convinced by the evidence that representations were not the solution to cognition.



Barsalou, L. W. (1993). Challenging assumptions about concepts. Cognitive Development, 8, 169-180.


Busemeyer, J. R., & Townsend, J. T. (1993). Decision field theory: A dynamic-cognitive approach to decision making in an uncertain environment. Psychological Review, 100(3), 432-459.

Gelman, S. A., & Medin, D. L. (1993). What's so essential about essentialism? A different perspective on the interaction of perception, language, and conceptual knowledge. Cognitive Development, 8, 157-167.

Kahneman, D., & Tversky, A. (1979). Prospect theory: An analysis of decision under risk. Econometrica, 47, 263-291.

Smith, L. B., & Jones, S. S. (1993). Cognition without concepts. Cognitive Development, 8, 181-188.

Van Gelder, T. (1995). What might cognition be, if not computation? The Journal of Philosophy, 92(7), 345-381.

14 comments:

  1. The issue of the 'objectification' that you raised reminded me of Tony's historical analysis of where modern representational and ecological psychologies came from. Representational psychology is indeed firmly structuralist, exactly as you describe here.

    Also I love that Smith & Jones quote about the shape bias :) I'd forgotten about all that work, it's cool.

  2. Nice nice!

    Psychologists, as a group, are terrible at distinguishing description from explanation. As you point out, 'representation' started out as an explanation – a hypothesis about why certain observable things happened. When we lost track of that, and allowed 'representation' to become part of the description, all sorts of problems started. Once you screw up description and explanation, you can't keep track of philosophical assumptions, nor honestly judge their value.

    This also reminds me of one of Skinner's most insightful arguments about hypothetical constructs: The next generation of researchers tends to forget they are hypothetical.

  3. Finally found a few minutes to think about this properly and put some comments together. And we really should get together in the IPS staff room to chat about this stuff - I'm really enjoying this discussion generally. I think we do have common ground, but from quite different perspectives.

    I would argue that we do have mechanisms by which we can create representations - and they are associative mechanisms. Work on configural learning provides good models of how a small number of features can be associated to form a number of different levels of representation, depending on the number of conjunctions you allow.

    There are good data about how learning can be instantiated in a chemical bath. LTP allows the formation of associations along Hebbian rules. There is good evidence that the mechanisms involved in LTP are important in learning. I'll grant you that complex things such as a sense of self are not modelled through such things, but I'm concerned with simpler processes, and I believe that the basic associative processes that we do understand can produce much of the complex behaviour we observe (although I'm sceptical of many of the cognitive models put forward - they lack parsimony).

    I enjoyed your description of representations - as things to be acted on, etc. - and I take your point. I think that associative learning theorists don't quite see representations as you have described. We're not cognitivists. To me, a representation is the internal event that occurs when an external event occurs in the world. That representation will have a level of salience, and a level of associability. I think Wagner's SOP model is a good one for how we tend to think about representations. A representation is a bunch of elements, which may be in different states according to what has happened recently, and previous learning. The activation of a representation may lead to behaviour - unconditioned responses or conditioned responses. Without a representation of the event in the real world, there will be no unconditioned response. And the strength of the unconditioned response depends on the salience of the representation. And that salience is changeable with experience. And by generalisation, discrimination, and feedback, associative learning can do all of that novel word interpretation stuff. (IMHO)

    So - I absolutely agree with you that discrete computational representations are not where we should be at. But I think that there are some real-time (is that the same as dynamic?) models, such as Wagner's SOP, that use representations and do a good job. I can see your point that 'representation' is a loaded, overused, misunderstood term - but is there a better shortcut for 'internal way of coding what is out there in the world'? And I'm cautious about throwing the baby out with the bathwater.

    I'm going to go and read Busemeyer and Townsend - it sounds interesting.

  4. Hi Pam,

    I'd be keen to have more discussions about current research on associative mechanisms (my knowledge in this area is fairly out of date!) or about any of this really!

    Two comments about associative mechanisms.

    One: The ecological approach can presumably inform learning models by providing both an accurate task analysis and a description of the type of information the system must be using to perform the task. This could ensure that the model is making correct assumptions about precisely what information needs to be learned and it would provide researchers with a clear basis upon which to characterise when and for how long this information is available during training.

    Two: An ecological approach can also constrain research on the formation of associations on a neural level. Again, this would be accomplished by indicating precisely what needs to be learned (and, equally importantly, what doesn't need to be learned) in the form of perceptual information. I am not familiar with this literature either, but I would worry about the possibility of sensibly interpreting changes at a neural level in terms of higher order behaviours. For instance, showing that a particular association between neurons is strengthened does not actually reveal what role that connection might play in learning. Many people make the assumption that these things are decompositional - that you can manipulate one particular aspect of a task, for instance to isolate one component of learning. However, this assumption is often violated due to an incorrect assessment of the task. Again, I'm just speculating and I would certainly like to learn more about this research.

  5. Finally, two comments about representations.

    One: You say "a representation is the internal event that occurs when an external event occurs in the world." Although you say that your working conception of representations is that they are not discrete and computational, the language in this definition "the...event" suggests a discrete occurrence. It might be that you see this event as highly dependent on context, so that the event that occurs when I hear "cat" in one situation is not the same as the event that occurs when I hear "cat" in another situation. But, if this is the case, then this definition undermines what the vast majority of people mean when they talk about representations - namely, that we have stable concepts. Without this core meaning, it is unclear why it is worth holding onto the notion of representation.

    I would ask whether using the word representation is doing any work. Does postulating representations constrain the types of predictions you make? Does it rule out some types of explanations for behaviour while allowing others? If not, then representations don't appear to make a contribution to your science. They might be a word that people in the field use out of habit, but it's interesting to think that the field would probably not change at all (except in wordiness) if this term were taken off the table.

    Two: You agree that there are some problems with how the term representation is used, but caution against throwing the baby out with the bathwater. You also wonder, "Is there a better shortcut for 'internal way of coding what is out there in the world'?" I agree that it is useful to have a technical term handy to capture a complex idea, and word-counts would certainly skyrocket in cognitive journals without the term representation.

    I would disagree, however, that we need a word meaning "an internal way of coding what is out there in the world". My objection is to the idea that we need something to "code for" the external world. What makes this necessary? The point that Andrew makes in his last two posts is that we need to figure out the right job description for the brain. The traditional cognitive job description is that we need internal stand-ins for the external world. This goes right to the heart of Andrew's comments about poverty of the stimulus. The insight of the ecological approach is that the environment is not impoverished. This means that we don't need internal models of the world to guide behaviour. If we don't need to code for the external world, then what is it that our brains are up to?

    The phrasing I prefer is that there are "internal mediating states." This language doesn't imply that there is anything coding for or standing in for the external world. It simply implies that internal states have consequences for behaviour. The metaphor is the Watt governor. There is no component of this machine system that codes for or stands in for flywheel speed. However, the correct flywheel speed emerges as a consequence of the internal dynamics of the system.

  6. I agree that the poverty of stimulus argument is a weak one - I actually think the opposite may be closer to the truth. The world is so complex that I need to reduce it, and only pay attention to the bits of it that are useful. The 'internal representation' is a sketch of the useful bits of the world, ignoring irrelevant stimuli. That, I think, is how we tend to think of representations in learning theory. The representations themselves are impoverished. But that could of course emerge from the external world via some filtering procedure, a la the Watt governor.
    The difficulty comes about in memory then? So a crucial line of argument in modern learning theory is that the presentation of a real stimulus in the world evokes the activation of an internal representation of an associated stimulus.
    If I'm a rat who has undergone training where a tone is paired with a food pellet, then when the tone is presented, the representation of the food pellet is activated, and that provokes the CR. We lazily say the rat is 'thinking about' the food.
    There are obviously alternative S-R accounts - but there are effects, such as the impact of sensory specific satiety, that lead us to conclude that the identity of the food pellets is included in the association in some way (I've avoided using the word encoded!) and can impact upon behaviour.

    Does 'cat' mean the same in different contexts? That is a question that we ask with learning theory, not one that we answer. There isn't any empirical reason why it should always be the same. Work by Jo Haddon and Simon Killcross has set up 'Stroop-like' tasks in rats, and animals are sensitive to the different 'meanings' of a stimulus according to the context (although drugs & lesions can muck about with this). Their account for this is that PFC is involved in selecting which rules are to be used at that point - the context is controlling which associations are used to govern behaviour, a little like occasion setting, I guess.

  7. Hi Pam,

    I think the following excerpt from your reply is a great example of the kind of effect non-representationalists like myself should be trying to explain:

    "So a crucial line of argument in modern learning theory is that the presentation of a real stimulus in the world evokes the activation of an internal representation of an associated stimulus." - and this provokes the CR. "[T]here are effects, such as the impact of sensory specific satiety, that lead to us to conclude that the identity of the food pellets is included in the association in some way."

    Your reply focused on associative learning. I've actually spent a lot of time thinking about an ecological explanation for conditioned learning, so I want to sketch out some of those ideas. I want to write a proper post about this at some point, but in the meantime, forgive me for rambling on about this for a minute.

    Gibson spends a lot of time talking about detecting perceptual information. The information that Gibson refers to has structure by virtue of its relationship to the world. Light refracts in a particular way because it is physically interacting with matter in the environment. This means that, in a very direct sense, structure in light is ABOUT structure in the environment. It has some inherent meaning.

    Learning (and memory) comes into play in two places.

    First, we have to learn to efficiently detect the best type of perceptual information for performing a given task. E J Gibson talks about the education of attention and I think this is very similar to your comment that we only pay attention to the bits of the world that are useful.

    Second, we have to learn what the perceptual information MEANS. By perceiving the perceptual information that specifies, say, grip-ability, I learn that some values of that perceptual variable mean that I can pick something up and other values mean that I cannot pick something up. What was important for Gibson is that access to this meaning is direct (after learning). An analogy for this kind of direct perception would be colour perception. When I see light wavelengths of 450-475nm, I see the colour blue - I don't need to infer that those wavelengths indicate the colour blue and I don't need to form an association between those wavelengths and the colour blue - I see blue directly (I know there are some important neural differences here, so again, this is just an analogy). So, for Gibson, detecting an information variable for grip-ability in a certain interval MEANS gripable without requiring inference or association between the information and its meaning. Gibson argues that the crucial thing about direct access to meaning is the specification relationship between the information and the environment (e.g., light refracts in a particular way because it is physically interacting with matter in the environment).

  8. There are potential parallels between perceptual learning and conditioned learning. The task that the rat is performing is often to press a lever. To succeed at this task, the rat must have learned to detect, at some point, all the relevant information variables that let him carry out the action of pressing a lever. At one level of meaning, the relevant perceptual information MEANS press-ability (this is often where ecological accounts stop). At another level of meaning, depending on training, that perceptual information also comes to mean food. As you point out, the traditional learning perspective is that this meaning is learned by associating a behaviour (lever pressing) with the representation of an outcome (food). If we tried to concoct an ecological explanation, on the other hand, we might say that the behaviour has come to have meaning directly (that lever press now means food pellets later) without being associated with a separate representation.

    The tricky thing for ecological explanations is that there is no specification relationship here between the perceptual information for the lever pressing and its meaning. The perceptual information could just as easily have been paired with no food or with something unpleasant. Thus, there are multiple possible mappings between the perceptual information the rat is using to press the lever and the meaning it acquires (e.g., food).

    Here is where I think that Gibson mis-characterised the importance of specification relationships. As a perceiver, I have no way of knowing whether the information I detect is causally linked to the physical world. All I know is whether using that information helps me perform whatever task I'm up to. The fact that the information is causally linked to the environment is important though. It's important because it means that the perceptual information will co-vary perfectly with changes in the environment and this is the only way we can perform really complex tasks like walking. But, as a perceiving, acting, learning organism, I don't care about that. I care about what works. So, I'm perfectly happy to think that meaning can also be based on non-specifying relationships (like between the lever press and food) so long as you're not trying to perform some life or death task - no non-specifying variables for rock climbing please!

    Of course, the million dollar question is what in the world direct access to meaning looks like in a neural sense. And, for that matter, what makes this explanation so different from one where the behaviour is "associated with the representation" of a reward? I suppose the critical difference is still about discrete representations. In the traditional account there is some discrete sense in which a stimulus can be represented (even if using a fairly permissive construal of representation). Meaning arises because one thing (a tone or a lever press) reminds you of another (food). In the pseudo-ecological explanation I've described, there is just the perceptual information and what it means. There is no separate representation of information and food. There is just a system engaged with an information variable that has come to mean "food".

    The thing I can't figure out is: what type of experiment could tease apart these two accounts?

    1. I came across your blog today after cracking open a cognitive psychology book which mentioned embodied cognition as an alternative. Gotta say, I was engrossed and read a LOT of your posts, as I didn't know there were alternative schools of thought about cognition out there. While I find the ecological embodied approach intriguing, I find it hard to reconcile it with work that is done in the lab I am currently a part of. We study content representation using fMRI and multi-voxel pattern analysis, which allows machine learning to (in a sense) decode neural activity that represents a category of stimuli (face, scene, object) with a high degree of accuracy. I know you're not a fan of "representation" cognition (or cog neuro), but we have shown that associative relationships allow novel inference through reactivating representations. In other words, if A is associated with B, and C is associated with B, when you test for the A-C association (the indirect association through the overlapping B), you see reactivation of C's category when participants are making a decision about which stimulus is indirectly associated with A. This tells me that some sort of representations (at least of categories) are stored and accessed for novel use. HOWEVER, I would like to see how the ecological approach would view these results. Maybe they're even irrelevant since the study itself is not very ecological, but I'm just curious.

      Here is a link to the paper if you'd like to take a look: http://clm.utexas.edu/CLMsite/wp-content/uploads/Neuron-2012_-751.pdf

  9. One thing to add that may be of use: If the idea of directly perceiving meaningful things seems a bit weird, a concrete example may help: this post is about how we perceive distance in action-scaled units, i.e., directly meaningful to the control of action.

  10. Maybe something like Pavlovian-to-instrumental transfer is relevant.

    Animals are trained on a Pavlovian task:

    CS1 - food1; CS2 - food2; CS3 - food3.

    They are simultaneously trained on an instrumental task:

    lever1 - food1; lever2 - food2.

    In an extinction test, presenting CS1 increases responding on lever1; presenting CS2 increases responding on lever2; presenting CS3 increases responding on all levers.

    This is assumed to be because presenting CS1 activates the representation of food1; food1 has an association with lever1, leading to the responding.
    There are overlaps between the representations of food1, food2, and food3, such that presenting CS3 leads to the partial activation of both the lever1 and lever2 associations, leading to responding on both.

    Without some form of representation, can this be accounted for? You would probably end up saying that CS1 'means' food1, and food1 means lever1, etc., which is basically exactly the same as I've outlined above, but substituting 'means' for 'is associated with'.

    There are many examples of this sort of 'associative chain' in learning. E.g.:
    CS1 - CS2 (no motivationally significant stimuli present)
    CS2 - US (e.g., LiCl injection, illness)
    CS1? Rejected in the case of LiCl injection.

  11. I just had to post a comment, although I realize that it's been one and a half years since this was published. I'm a student, and I recently started studying cognitive neuroscience after having done linguistics for a while. I have been reading some of your blog posts with great interest, and the stuff about representations really got me thinking. The field of language and the brain, which constitutes my main interest, seems in fact to rest on the existence of representations. And now, after reading this, I am asking myself how it is possible to learn and use language if it is not represented in our brains in some way? It is hard to imagine how a word could be represented by a cluster of cells, some electric pulses and some chemicals, and I have indeed often wondered about this. But I can't seem to find a better explanation. I think that we never will understand our brain one hundred percent, since that would mean that the brain would understand itself. But at the same time it could of course be the case that a neurolinguist should look further than the brain to find out how language is learned and used by humans. I'd love to know if you know of anyone who does psycholinguistic research in a non-representational framework, or if you could point me to something that I could read about the topic. I thank you for this eye-opener, and I certainly want to keep reading the blog!

    1. Heidi,
      Don't hesitate to comment on old posts. I'm sure Andrew and Sabrina get emails when comments appear, and some of the RSS feeders let others see new comments as they come in.

      I'll say that you are spot on:
      "I am asking myself how it is possible to learn and use language if it is not represented in our brains in some way?"

      That is exactly the challenge! The representational approach makes the front end of understanding language very easy... but it falls apart pretty quickly if you dig deeper. How indeed does a cluster of cells "represent"?!?

      On the other hand, an ecological approach is very difficult up front (in part, because it has not been done before), but the deep part will flow naturally from a broader natural-science understanding of perception and action.

    2. Hi Heidi

      Yes, we do keep an eye on all comments, so ask away on anything you see!

      As Eric notes, this is indeed a difficult question. Sabrina's more recent posts have been on this topic:
      An ecological approach to language
      Language isn't magical but it is special
      Language: A task analysis

      There's not a lot of research out in the world yet on this topic, although we think there's a lot of work that's relevant - it just doesn't know it yet. Lots of people are tackling some issues around this question but are coming at it from a representational framework that isn't helping much. Part of our mission over the next little while is simply to raise awareness of the possibility that an option exists!

      Have a look at those posts and the others on representation (it's all mapped out on the 'Rough Guide' page) and ask questions! It helps us to know what is and isn't coming through clearly. Also, we have a paper forthcoming in Frontiers on a lot of this - stay tuned, we will blog about it when it's out!
