Tuesday 12 March 2013

A taxonomy of information

Over the past several months I've been thinking about how perception falls within a hierarchy of types of information use. This was spurred by my ideas about an ecological approach to language, in which perceptual information and linguistic information are distinguished on the basis of the relationship between event structure and meaning. As part of this work, I defined perception as the apprehension of structure in an energy array where 1) the structure is specific to an event or property in the world, 2) where the meaning of the structure (for that organism in that task) is about that event or property (i.e., a dog's bark is about the event of a barking dog), and 3) where the meaning of the structure must be learned (or, more correctly, where an organism must learn how to coordinate action with respect to this structure). I arrived at this definition because it seemed to capture the ecological approach to perception and because it makes it obvious how perceptual information and linguistic information differ (also because I am crazy-obsessive about definitions).

From this point it became clear that we could use a set of properties to characterise information types. And, looking across this set of properties, it became clear that there is more than just perceptual and linguistic information. I use the term "information" to refer to any structure in an energy array that precipitates behaviour in an organism. The taxonomy below provides a common language for talking about all types of information - from symbols to reflexes.

I first walk through the major dimensions, as I see them, that define differences between information types. I will then apply this framework to perceptual and linguistic information and suggest how these dimensions might also be used to capture other types of information (e.g., reflexes). Finally, I add some more speculative dimensions and say a bit about how I think this taxonomy might be useful.

Each of the properties in my initial definition of perceptual information represents an endpoint of a dimension. It will now be useful to consider the opposite poles of these dimensions. I've ordered the poles 1 and 2 within each dimension to reflect my intuition about their simplicity - more "basic" or "simpler" characteristics are listed as "Pole 1" and more "complex" characteristics are listed as "Pole 2". I've also added a dimension (Learning) to the original list.

Dimension: Detection

Pole 1 - The organism does not have to learn to detect the structure - it can detect the structure without prior learning.

Pole 2 - The organism must learn to detect invariant structure in energy arrays.

Dimension: Specificity

Pole 1 - The structure in the array is specific* to an event in the world. For a given ecological niche, this structure (in light, sound, whatever) is present only when a given event or property of the world is present.

Pole 2 - The structure in the array is not specific to a particular event or property.

*This dimension captures the relationship between the world and structure in energy arrays. For Pole 1 this relationship is 1:1; for Pole 2 this relationship is many:1. Most people refer to Pole 1 as a specification relationship, but there is some unfortunate fuzziness in the way the term "specification" is used. Ecological laws make it so that any invariants in an energy array will specify something - that is, the structure will be in a 1:1 relationship with some property or event in the world (this is why the variable can be invariant in the first place). If this is the case, then it is deeply confusing to talk about whether organisms can use non-specifying variables to control action. What we really mean is "can organisms use variables that don't specify the property or event of the world that is relevant for a particular task?" So called non-specifying variables specify something - just not the thing that is most directly related to the task at hand.

This, then, might be a redundant dimension. The important work might be done in the next dimension - Aboutness. I don't know whether there are cases where the information would occupy Pole 2. Because of this uncertainty, I'd like to keep it in for now, and get rid of it if I'm convinced it's not doing any work.

Dimension: Aboutness

Pole 1 - The meaning of the structure in the array is about* the event or property specified by the structure.

Pole 2 - The meaning of the structure is about something else - not about the property specified by the structure.

*Aboutness is similar to the specification versus convention distinction I wrote about in the language posts. Meaning is defined in terms of an organism's ability to take appropriate action as a consequence of detecting information. When the action is related to the event in the world that caused the structure in the energy array (e.g., we duck upon hearing a loud noise) then the information is defined in terms of Pole 1. When the action is not related to the event in the world that caused the structure (e.g., when we duck upon hearing the exclamation "Duck!"), then this information is defined in terms of Pole 2.

The event in the world that caused the structure of the spoken word "Duck" is the control of the speaker's vocal tract and voice to create the phonemes comprising the word. As listeners, we don't care about this event. We care about the conventional meaning of the event, which is that if you don't lower your head it's going to be hit by something.

Dimension: Learning

Pole 1 - The organism does not have to learn how to use the information.

Pole 2 - The organism must learn how to coordinate its actions with respect to this structure.


With terminology in place, we can see that perceptual information falls at Pole 2 for the Detection and Learning dimensions. And, it falls at Pole 1 for the Specificity and Aboutness dimensions. In contrast, linguistic information falls at Pole 2 for Detection, Aboutness, and Learning. It falls at Pole 1 for Specificity (because the structure in the energy array caused by saying a word is specific to the event in the world that caused it).

I said that this terminology opened the door to thinking about other types of information. Consider reflexes. Reflexes are precipitated by structure in an energy array (e.g., the startle reflex is precipitated by a loud noise). Organisms don't have to learn to detect this structure and they don't have to learn to coordinate their actions with respect to it. Thus, reflex information falls at Pole 1 along all four dimensions. Instinct information may be similar.

I would like to look at non-human animal behaviour to see what other combinations are possible. For instance, can we identify cases where organisms must learn to detect structural invariants but don't need to learn how to coordinate their actions with respect to them (i.e., certain actions are "hard-wired" in the presence of the right information)? Perhaps instinct information falls into this category? This type of behaviour would fall at Pole 2 on the Detection dimension and Pole 1 for Aboutness, Specificity, and Learning.
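The pole assignments worked through above can be sketched as a small lookup structure. This is a minimal illustration, not part of the original taxonomy: the boolean encoding (Pole 1 = False, Pole 2 = True) and the type names are mine.

```python
# A sketch of the four-dimension taxonomy described above.
# Encoding (illustrative only): Pole 1 = False, Pole 2 = True.
from dataclasses import dataclass

@dataclass(frozen=True)
class InformationType:
    name: str
    detection: bool    # must the organism learn to detect the structure?
    specificity: bool  # is the structure NOT specific to one event/property?
    aboutness: bool    # is the meaning about something other than the specified event?
    learning: bool     # must the organism learn to coordinate action with it?

# Pole assignments as given in the post:
perceptual = InformationType("perceptual", detection=True, specificity=False,
                             aboutness=False, learning=True)
linguistic = InformationType("linguistic", detection=True, specificity=False,
                             aboutness=True, learning=True)
reflex = InformationType("reflex", detection=False, specificity=False,
                         aboutness=False, learning=False)

def pole(value: bool) -> str:
    """Translate the boolean encoding back into the post's terminology."""
    return "Pole 2" if value else "Pole 1"

for info in (perceptual, linguistic, reflex):
    print(info.name, pole(info.detection), pole(info.specificity),
          pole(info.aboutness), pole(info.learning))
```

A new information type (say, the speculative "learned detection, fixed action" case above) is just another row, which makes it easy to enumerate which combinations of poles have known examples and which are still unoccupied.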


Another two dimensions may also be relevant to this taxonomy, but my ideas about them are less well-developed than for the other dimensions. These dimensions are Inherentness and Continuity.

Dimension: Inherentness

Pole 1 - The structure in the array is inherently meaningful to the organism in that it causes some feeling (e.g., pain, pleasure) in the organism that is not related to any learned fact about the structure.

Pole 2 - The structure in the array is not inherently meaningful.

When I look at a very bright light, it is inherently unpleasant. When I hear a loud noise, it is inherently unpleasant. Touch can be either pleasant or unpleasant depending on how it's done (and depending on the context, which makes this dependent on learned meaning as well). In contrast, some information has no inherent meaning. For babies in EJ Gibson's visual cliff experiment, the visual information that specified a surface discontinuity had no inherent meaning; babies who weren't crawling were not afraid of the drop because they did not know what the visual information meant.

Inherentness has some things in common with the Learning dimension, but I think it is rightfully distinct. There is a phenomenology implied in the Inherentness dimension that is not necessary for the Learning dimension.

Dimension: Continuity

Pole 1 - The structure in the energy array precipitates an action in the organism, but this action is not coupled to the information over time.

Pole 2 - The structure in the energy array precipitates an action, and the organism couples its actions to this information over time.

This dimension captures whether information is used in the continuous control of action or whether it acts more like a cue. I mention this because cues often get short shrift in ecological psychology, although they clearly matter. Consider the redness of an apple. This structure in the visual array may cue subsequent action (i.e., I might reach for the apple because the redness tells me that it's ripe), but I do not use the redness in the continuous control of action. Thus, the Continuity dimension may allow us to sub-divide some types of information - perceptual information like the redness of the apple versus perceptual information for the catchability of a fly ball. The Continuity dimension might also be an additional way to distinguish between instincts and reflexes (instincts fall at Pole 2 and reflexes at Pole 1). Because ecological psychology has focused on information for action, we don't currently have a good, non-cognitive way to talk about cues, and this needs to change.

What can we do with this taxonomy?

The initial purpose of defining information types using a set of properties was to help me distinguish perceptual and linguistic information, so I hope this taxonomy will help structure further empirical work on an ecological approach to language. But the taxonomy has now grown to encompass other types of information, and I think this broader set of properties could be useful in the domain of animal cognition. The taxonomy provides a language for comparing any behaviours that are precipitated by structure in an energy array. Take, for example, vervet monkey calls. Vervets have a small vocabulary of calls that they use to warn each other about potential danger (e.g., a hawk). Vervets respond to these calls with specific, functional behaviour (i.e., they don't just run) and they respond even when there is no perceptual evidence of the threat (i.e., they can't see a hawk).

Even though this is clearly an example of vervet monkeys using symbols to communicate, no one is tempted to say that they are using language. The traditional move is to say that the animal calls lack essential properties of language use (e.g., productivity), but precisely which properties are essential to language is hotly debated (see Chomsky versus Everett on recursion). The taxonomy outlined in this post provides another way of comparing vervet calls to human language. For example, do vervet monkeys have to learn to coordinate their behaviour to the calls (i.e., are the calls at Pole 1 or Pole 2 on the Learning dimension)? If the link between vervet calls and behaviour is relatively fixed (Pole 1), then this is an important difference from human language, where the same acoustic information can mean different things across languages.


So, this is what I've come up with for now. The big challenge at the moment is figuring out how to communicate these ideas efficiently. I've been thinking about this stuff for months and I still have to refer back to what I've written to remember what the different poles are. And the table I've included is, admittedly, not very helpful. If anyone has any clever ideas for presentation, I'd love to hear them!


  1. Defining information as "any structure in an energy array that precipitates behaviour in an organism" might, I believe, lead to some unwanted ontological issues. The argument I propose against it, however, relies on the above quote being an ontological definition of information and not an epistemological one. So if it is an e-one and not an o-one, read no further =).

    I argue my point on the basis of temporal and realism-congruent aspects. The information that a (smallish) rock reflects into an energy array lends itself to being realised by you and me as pick-up-able (and actualisable as such). A redwood tree's information does not. However, following from your definition, before life on earth the information of the rock did not exist; if your ontological definition of information includes precipitation of behaviour, then, if no agent's behaviour is precipitated, the information did not exist (and idealism is what we want to avoid, right?).

    With this said, epistemologically, information does indeed need to precipitate behaviour, otherwise we can't have direct perception, or probably perception at all, and it turns quite non-sensical - therefore it is an important aspect of how the relationship works. I certainly think that everything else in your post takes us a long way in the application of information and how to structure it sensically (as if that word were the antonym of non-sensical.. ha.. ha.. ha..). Although it is way above my head to comply with your request at the end of the post..

    Actualisation and realisation (to be consistent with realism and pragmatic aspects) I believe necessarily depend on information the way you define it, without the ontological assumption that it needs to precipitate behaviour but with the epistemological assumption that it does.

  2. I didn't need to use an abstract temporal example, sorry about that; it suffices to exemplify it this way: epistemologically, my behaviour is not precipitated by the information of the rock when I am standing facing away from it, or when I have not perceived it (say I walked backwards out into a field), i.e. it doesn't exist. But the information is still "out there" though, right? Am I nitpicking? I feel nitpicky.

  3. Patric,

    I think your distinction is valuable.

  4. I think the confusion comes from not keeping affordances and information about affordances separate. Affordances persist (they are dispositions) but information about affordances comes in and out of existence (as the light changes, or as the organism moves appropriately or not). It's vital to keep these separate.

    Not all structure in energy arrays serves as information, which I think is part of what Sabrina is up to here. It's information if it's involved in behaviour somehow, otherwise it's just structure.

  5. I am not entirely sure about the first part; it seems to me it just moves the issue from the concept of affordances to that of information. Your last sentence, however, leads me to the conclusion that the concepts involved need to be more specifically defined (not implying we thought of them as perfect to begin with).

    Structure cannot "come in and out of existence" - that would rely on us perceiving the same particles, which we are not (I understand this is not what you are saying, I'm just clarifying for myself). However, structure can exist and not exist. Structure is only information if perceived. I want to stick to 'perceived' as opposed to 'precipitates behaviour' here, though; I'm just temporarily erring on the side of caution, since I am unsure whether it may have consequences for cases where we do perceive a structure (even consciously) and realise an affordance but do not act on it - realisation but not actualisation. Unless you want to define behaviour as 'sensory receptor change'. I feel like there are things that we perceive but that do not affect behaviour. Work in progress.

    Either way, there are then three ways in which information can fail to exist: a) there exists an energy array but it does not elicit a structure (I sense issues here, due to dividing 'energy array' and 'structure' - I am unsure whether it might not be better to combine them into one concept; also work in progress), b) there exists an energy array eliciting a structure but it is not perceived (because the agent doesn't have sensory receptors for it, or the agent is simply not perceiving it), and c) there does not exist an energy array (and thus there can't exist a structure). I think this still needs some work explicating the full practical consequences, but I suggest it is valuable in order to produce clarity about the concepts and not be confused by, or elicit confusion in readers about, the whole existence/non-existence type stuff (which I may have inadvertently started in the first place =D).

    1. Some terminology:

      Arrays have structure, by definition. The optic array, for example, is a structured light field. (A homogeneous light field (a Ganzfeld) will stimulate light receptors but does not lead to the perception of anything.)

      So, your (a) doesn't work, but the other two are fine. You can also add 'there is an energy array but it doesn't contain the structure of relevance'. In the case of the outfielder problem, if you aren't running correctly, neither the linear optical trajectory nor the cancelled optical acceleration are present in the array.

      This, by the way, is the sense I had in mind when I said structure can come in and out of existence. Invariant features of the optic array can be there or not there at different points in time.

    2. Thank you for your responses. I am grappling with understanding how invariant features can exist and not exist from an objective perspective (and instantly felt horribly biased by philosophy). Either way, thanks again; looking forward to future posts from you both.

  6. I blogged some comments about information and meaning recently ( http://anti-ism-ism.blogspot.nl/2013/06/codes-information-meaning-and.html ), and I have the feeling that what you are describing is, in some cases, in fact a code that links an information structure to meaning.

    Where I can see great use for the taxonomy is the concept of codified assemblies: multiple codes that together produce meaning or a novel structure. This concept is used, for instance, to describe how different genetic and epigenetic operations (transcription, splicing, synthesis, etc.) eventually lead to the construction of a cell. As in molecular biology, your taxonomy seems to be functional, so one wonders whether the information types could act as an assembly as well.

    A comment on learning as a separate function that I need to give much more thought, but here goes:

    When is learning, learning? Which is to say, there are quite a few things I do each day that I have never done before. I've never written the exact words of this post in this sequence before; this morning I constructed a swing for my daughter for the first time, and this evening I will try my new rib rub recipe on the BBQ.

    If you zoom in on these things, time-scale wise, and try to find the novel information structures that I have to coordinate with, the numbers are mind-boggling. Learning seems to be a description of coordination on a very specific range of timescales: the ones slow enough that we can observe change as a gradual process.

    I do believe the term coordination is correct as a general term to describe these phenomena.

    I often use the BigDog on ice video as an example: http://www.youtube.com/watch?v=cNZPRsrwumQ
    In this video, at around 50 seconds, BigDog walks like Bambi on ice, slipping and recovering over and over. Very much coordination going on. This is repeated in slow motion.

    None of the movements are pre-programmed; this thing runs on an old Pentium computer and a go-kart engine. There is an algorithm that basically selects, in real time, the most stable solution from information streams at three timescales: real-time sensor info/energy needs (fast), goals/heading (medium), and fuel supply/motor status (slow).

    The thing is, the stable-solution selection - the coordination - when slowed down, resembles using some learned skill and applying it, just like the examples I gave above. All the learning was, however, done by the engineers and was built into the robot by its design (in a way, as a memory of that learning process).

    In the same way, humans benefit from the learning that was done by natural selection, and our coordination with structures that take some time is, at a general level, not very different - perhaps mostly calibration.

  7. I've been meaning to respond to this for a while and now I finally have time for a quick question!

    Why do you place linguistic information at Pole 1 for Specificity? Certainly it's specific to the dynamics of vocal articulation, but linguistic info isn't About that, unless you're a baby first learning to speak. As you pointed out, my using the word "dog" is not specific to my having perceived a dog that is present, even if it's About one. And if I've let down my friend Mark and he says "it's fine," his words are not specific to him actually being fine or to him being upset, even though his words are About his mental state. It seems like Aboutness should affect classification along the Specificity axis.