
Wednesday, 7 April 2010

In which I finish talking about discrete computational representations

In a previous post, I summarised Dietrich & Markman’s definition of representations and ideas about how representations get their content. While there are many flavours of representation, D&M subscribe to the discrete computational (DC) variety. To summarise the previous post: According to D&M, representations are internal mediating states that govern behaviour. Representations have relations to both the external and internal (i.e., other representations) environment. They acquire content in two ways. The first way is through correspondence, where some internal state connects to some external state. The second is through functional relations with other representations. Representations are transformed via computations.

The main purpose of this post is to summarise D&M's main arguments in favour of discrete representations so that I can refer to these in other posts. I make several comments about the quality of these arguments, but this is in no way meant to be a systematic response to their paper.

Wednesday, 17 March 2010

Whose representation?

One of my pet peeves is when different groups of psychologists use one term to refer to something for which they have multiple, sometimes contradictory, definitions. When I started studying similarity, I wasted a lot of time trying to clarify what everyone meant by relations. See, for one group of people, relations occurred within a stimulus (a cat’s legs are under its body, its whiskers are on its face). For another group of people, relations occurred between two stimuli (you use a hammer to hit a nail). These are very different types of relations and they affect similarity in very different ways. But, by using that one word, relation, the literature was all muddied about how relations influenced similarity. Clarifying the terms (we now speak of structural relations vs. thematic relations) helps clarify how we think about the subject.

The word representation is used in a comparably muddied fashion. Depending on who you’re talking to, representation might refer to something symbolic, perceptual, discrete, or continuous; and these symbolic/perceptual/discrete/continuous things might be transformed or acted on via ordinary computations or differential equations.

To get to the bottom of this, I want to clarify the different ways in which representation is commonly used. Then, I want to figure out how to introduce some precision in talking about representations. This will make it much easier to discuss the problem of representation and to consider the alternatives.

Today’s instalment: Discrete computational representations (based on Dietrich & Markman, 2003).

Representations are internal mediating states: anything that transforms or acts on a system’s input in a way that changes its output (i.e., its actions) is a representation.
The authors provide four conditions for this definition (a toy sketch follows the list).

1) There needs to be at least one system, which has internal states governing its behaviour.
2) There needs to be an environment, although this doesn’t have to be the external environment. It could just be an adjacent system.
3) Some types of relations have to exist between the system’s internal states and the environment.
4) Processes must act on the internal states to satisfy goals or solve problems. Dietrich and Markman believe that these processes are computational.
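To make these conditions concrete, here is a minimal toy sketch in Python. It is my own illustration, not anything from D&M’s paper, and all the names (Environment, Thermostat, sense, act) are hypothetical. The point is just to show a system whose internal state mediates between environmental input and behaviour.

```python
# A toy illustration (mine, not D&M's) of the four conditions:
# a system with internal states, an environment, relations between
# the two, and processes that act on the internal states to meet a goal.

class Environment:
    """Condition 2: something outside the system (here, a room)."""
    def __init__(self, temperature):
        self.temperature = temperature

class Thermostat:
    """Condition 1: a system whose internal state governs its behaviour."""
    def __init__(self, goal_temperature):
        self.goal = goal_temperature
        self.internal_state = None  # the mediating state

    def sense(self, env):
        # Condition 3: a relation between an internal state and the
        # environment (the internal state corresponds to the room's temperature).
        self.internal_state = env.temperature

    def act(self):
        # Condition 4: a process acts on the internal state to satisfy a goal.
        if self.internal_state < self.goal:
            return "heating on"
        return "heating off"

room = Environment(temperature=17.5)
device = Thermostat(goal_temperature=20.0)
device.sense(room)
print(device.act())  # -> "heating on"
```

Whether a thermostat genuinely "represents" anything is exactly the kind of question the paper is concerned with; the sketch only shows what the four conditions ask for structurally.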

On top of these conditions, the authors argue that semantic content needs to be explicit. In other words, the authors contend that psychological-level descriptions of internal states are real and that this level is more relevant than the physical-level description. Representations and processes are more important than chemicals and neurons.

How representations get their content:

1) The relations between internal states and the environment connect particular internal states with particular external states (i.e., correspondence).
2) Representations acquire some content by virtue of the types of interactions they have with other representations (i.e., functional role).

The authors suggest that 1 contributes primarily to the content of low-level DC representations, like a vibrating eardrum responding to sound, while 2 contributes to higher-level DC representations like “hope”, “democracy”, or other abstract concepts. Every DC representation must have at least some content from correspondence to external states.
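Here is a rough sketch of the two content sources, again my own illustration rather than anything in the paper; the tokens and relations are invented for the example.

```python
# A toy contrast (mine, not D&M's) between the two ways a
# representation can get its content.

# Correspondence: an internal token stands in for an external state.
correspondence = {
    "440hz_tone": "TONE_A",        # eardrum vibration -> internal token
    "fur_and_whiskers": "CAT",
}

# Functional role: an abstract token gets content from how it relates
# to other representations, not from any single external state.
functional_role = {
    "DEMOCRACY": ["VOTE", "GOVERNMENT", "CITIZEN"],
    "HOPE": ["DESIRE", "FUTURE", "UNCERTAINTY"],
}

def content_of(token):
    """Toy lookup: ground a token in correspondence if possible,
    otherwise describe it by its functional relations."""
    grounded = [ext for ext, internal in correspondence.items() if internal == token]
    if grounded:
        return f"{token} corresponds to external state(s): {grounded}"
    return f"{token} gets content from its relations to: {functional_role.get(token, [])}"

print(content_of("CAT"))        # content from correspondence
print(content_of("DEMOCRACY"))  # content from functional role
```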

Now, representations could be either discrete or continuous, but Dietrich and Markman argue that they must be discrete. These terms map perfectly onto the mathematical sense of continuity/discreteness. So, discrete representations are uniquely identifiable. E.g., I have a unique cat representation that is different from all of my other representations. And, discrete representations have gaps between them. My cat representation doesn’t seamlessly transition into my tiger representation (although there may be overlap).
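A quick toy contrast, under my own (hypothetical) encoding choices, may make the discrete/continuous distinction clearer:

```python
# Discrete: each representation is a uniquely identifiable token,
# and there is nothing "between" CAT and TIGER.
discrete_concepts = {"CAT", "TIGER", "DOG"}
print("CAT" in discrete_concepts)   # True: uniquely identifiable
print("CAT" == "TIGER")             # False: a gap, not a gradient

# Continuous: representations are points in a space, so you can always
# construct an intermediate point that blends any two of them.
cat   = [0.9, 0.1, 0.8]   # hypothetical feature vector
tiger = [0.8, 0.9, 0.7]
halfway = [(a + b) / 2 for a, b in zip(cat, tiger)]
print(halfway)  # a representation that is neither quite cat nor tiger
```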

To sum up, on this view representations are internal mediating states that are discrete and computational. Each representation is uniquely identifiable (discrete) and the processes that act on representations are ordinary computations. From now on, when I’m talking about this type of representation, I will refer to DC (discrete computational) representations.

Dietrich, E., & Markman, A. B. (2003). Discrete thoughts: Why cognition must use discrete representations. Mind & Language, 18, 95-119.