Thursday, 14 October 2010

Is it time to abandon the cognitive / non-cognitive distinction?

We've been having some good debates with Ken Aizawa over at The Bounds of Cognition and Gary Williams at Minds and Brains. A recurring theme is where to draw the line between the cognitive and the non-cognitive. A lot hinges on this distinction. For example, whether we consider things external to the brain to be coupled with a cognitive process or to constitute a cognitive process depends on what we're willing to call "cognitive." Ken has written extensively about this on his blog, but here is a simple summary of the problem: "[T]he coupling-constitution distinction comes down to a distinction between X causing, or being caused by, a cognitive process, versus X being a cognitive process."

Deciding whether something constitutes a cognitive process or is simply coupled to a cognitive process, then, depends on how one defines "cognitive." Distinguishing the cognitive from the non-cognitive also provides a useful level of analysis for describing human behaviour (as opposed to, say, a neural, chemical, or physical level of analysis). But nailing down this distinction has proven incredibly difficult. Adams & Aizawa (2007) provide a nice summary of various positions; for instance, Clark (2007) advocates cognition as information processing, while Hutchins (1995) describes cognition as computation (the good old-fashioned cognitive psychological conception). Adams and Aizawa themselves have proposed two minimum criteria for something to be considered cognitive. "A first essential condition on the cognitive is that cognitive states must involve intrinsic, non-derived content" (Adams & Aizawa, 2010, p. 69). Second, "the cognitive must be discriminated on the basis of underlying causal processes" (Adams & Aizawa, 2001, pp. 51-52).

I applaud Adams and Aizawa for having a working definition from which to argue, and my intention here isn't to critique this or any other particular definition of cognition. I'm more interested in the broader question of whether the attempt to define anything as "cognitive" makes any sense. I research the psychological basis of similarity and categorisation, so I look at the cognitive / non-cognitive debate as a categorisation problem. Attempts to define cognition seem to proceed from the belief that cognitive processes are truly distinct from non-cognitive processes. By looking at this problem from the standpoint of categorisation, I'm not sure that this belief is warranted. Here's an outline of my thinking on this:

First, what type of category is the cognitive versus non-cognitive? Taxonomic categories cohere around shared properties (as opposed to, e.g., shared context or goal) - and we'd like to think that cognitive things have some properties that non-cognitive things lack - so, at its base we're looking at a taxonomic category. Taxonomic categories vary in how defining a property or set of properties is for category membership. Some categories are completely defined by the presence or absence of a single feature. For example, whether or not a person belongs to the set of people who are less than 10 years old is defined solely by their age (these are basically rule-based categories). Other categories are based on a dynamic relationship between large sets of properties. For example, cats and dogs are not discriminated on the basis of a handful of nameable features like ears and tails, but we can easily correctly classify them because, well, cats look like cats and dogs look like dogs - that is, cats and dogs are differentiated from each other on the basis of the unique relationships between properties that occur in cats relative to dogs.
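The contrast between these two kinds of taxonomic category can be sketched in code (a toy illustration only - the feature values and prototypes below are invented, not real measurements):

```python
# Rule-based category: membership is fully determined by a single feature.
def is_under_ten(age: int) -> bool:
    return age < 10

# Similarity-based category: membership depends on the relationships among
# many features at once, summarised here as distance to a category prototype.
def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

# Hypothetical feature vectors (ear shape, snout length, gait) - made up
# purely for illustration.
PROTOTYPES = {
    "cat": (0.9, 0.2, 0.3),
    "dog": (0.4, 0.8, 0.7),
}

def classify(features):
    return min(PROTOTYPES, key=lambda k: distance(features, PROTOTYPES[k]))

print(is_under_ten(8))               # True: the rule settles it outright
print(classify((0.85, 0.25, 0.3)))   # "cat": the nearest prototype wins
```

The point of the contrast: the first category needs nothing but the rule, while the second has no single feature you can point to - classification falls out of the overall pattern of properties.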

Some taxonomic categories are clearly bounded (dogs really are different from cats), while some are graded. Natural kind categories (e.g., animals, minerals, trees) have clear members and non-members. Even though these may be defined in a very complex way, there is a real distinction in the world and you're either in or you're out. Adams and Aizawa's second minimum criterion for something to be cognitive (mentioned above) basically requires cognitive processes to form a natural kind category. Other categories, for example many artifact categories, lack such clear category boundaries. A typical example is the category "furniture." Some things are clearly in, e.g., sofa and table. But, is a rug furniture? Is a picture hanging on the wall furniture? There is considerable disagreement among people about the status of these items - they're not clearly in or clearly out. (It's not worth getting into here, but some people argue that there are graded natural kinds - category members have their properties by virtue of their relationship to natural laws, but there is no clear demarcation between members and non-members. I think this violates the maxim that natural kinds should form kinds, and it certainly causes problems for using category status to guide inference.) I'm choosing to approach the distinction between natural and vague categories empirically - what does it look like when things are in a natural category versus a vague category? I could also have approached the distinction on the basis of underlying causal processes (cats and dogs have different underlying properties which cause them to have different observable properties; there are no common underlying properties that distinguish furniture from non-furniture).

Given this brief introduction, there are two obvious reasons why someone might be uncertain about something's category status. 1) The thing belongs to a clearly bounded taxonomic category but the person lacks sufficient information. For example, young children might not successfully distinguish between cats and dogs because they have not had sufficient exposure to them yet. In fact, it is common for children who have a family dog to assume all animals are dogs for a little while. Similarly, people might be unaware of definitional properties for certain categories. If they learned the correct definition, then they could successfully sort things into the correct category. 2) Another reason a person might be uncertain about something's category status is that the thing might belong to a vague category. That is, there might not be clear criteria for category membership. We can apply rules and definitions to these categories, but this will have consequences, which I'll discuss later.

In order to argue that the uncertainty about the cognitive / non-cognitive distinction is caused by lack of information (Option 1), we must establish that the distinction is as real as the difference between dogs and cats (that there really is a difference in the world between cognitive and non-cognitive things). My view is that it is extremely unlikely that the distinction is real in the way that cats are different from dogs. Why? Discriminability of natural kind categories increases with experience (including scientific investigation). The more exposure I have to cats and dogs, the faster and more accurately I can sort them into their proper groups. And anyone with sufficient experience will classify these animals in exactly the same way. In other words, experience improves discriminability and there is considerable agreement among people about the classificatory status of exemplars.
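This relationship between experience and discriminability can be made concrete with a toy exemplar model (all numbers are invented; a one-dimensional feature stands in for the many features that actually distinguish cats from dogs). When a category boundary really exists, storing more exemplars pushes classification toward perfect accuracy:

```python
def nearest_label(x, exemplars):
    """Classify x by the label of the nearest stored exemplar."""
    return min(exemplars, key=lambda e: abs(x - e[0]))[1]

def accuracy(exemplars, test_points):
    # By construction, the true category boundary sits at 0.5.
    correct = sum(nearest_label(x, exemplars) == ("A" if x < 0.5 else "B")
                  for x in test_points)
    return correct / len(test_points)

test_points = [0.1, 0.3, 0.48, 0.52, 0.6, 0.9]

# Little experience: only two exemplars seen so far.
little_experience = [(0.45, "A"), (0.95, "B")]
# Lots of experience: many exemplars from each category.
lots_of_experience = [(0.0, "A"), (0.2, "A"), (0.45, "A"),
                      (0.55, "B"), (0.8, "B"), (0.95, "B")]

print(accuracy(little_experience, test_points))   # ~0.67: borderline items missed
print(accuracy(lots_of_experience, test_points))  # 1.0: experience sharpens the boundary
```

Crucially, two observers who each accumulate lots of experience will also converge on the same classifications - the agreement described above. Nothing comparable happens for the cognitive / non-cognitive distinction.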

If anything, the opposite seems to occur with the cognitive / non-cognitive distinction. Experts on the cognitive / non-cognitive distinction continue to struggle with borderline examples that seem to defy classification. And these experts passionately disagree with one another about where to draw category boundaries. So, expertise doesn't improve discriminability or increase agreement.

Perhaps we just haven't arrived at an adequate definition of cognitive versus non-cognitive yet? I could choose to define cognitive as anything that involves a human brain. Now I can easily classify things as cognitive or non-cognitive by applying my (bad) rule. This is the strategy employed by most cognitive scientists and philosophers of science who take the cognitive / non-cognitive distinction seriously. The trouble is that these rules always show themselves to be insufficient on the basis of counter-examples. My rule as stated - anything involving a human brain - would include, for instance, a pickled brain in a jar. So, I think, I'd better refine that, and on I go adding additional criteria. This is basically the state of the discipline - trying to refine our rules so that no one can think of annoying counter-examples.

Although it is trivially easy to make up rule-based categories, the utility of these categories is dependent on the state of affairs in the world. To illustrate, try coming up with a rule for distinguishing cats from dogs. We can clearly agree on what is a cat and what is a dog because there are real differences between cats and dogs in the world (they are natural kinds). This means that it is conceivable that we might discover a successful rule for classification. For instance, research on such biological differences reveals that cats and dogs have different genomes and we can use the presence or absence of a particular genome as a basis for classification. It is reasonable to expect there to be a good rule to discriminate between cats and dogs because of the state of affairs in the world - the fact that there really is a difference between these categories. Why is using the genome a good rule? Unlike my bad rule for distinguishing cognitive and non-cognitive, there don't seem to be any troubling counter-examples to the genome-based rule for cats and dogs. Applying this rule is also likely to lead to considerable agreement with our perceptual judgments of cats and dogs.  

However, rule-based strategies lose utility when applied to categories that are not clearly bounded (i.e., that are vague). We are still perfectly welcome to form such rule-based categories if they help us to achieve our goals, but we're going to have to suck some things up. Continuing the exercise from above, try coming up with a rule for distinguishing furniture from non-furniture. There are, undoubtedly, things that we agree are furniture (e.g., sofas) and things that we agree are not furniture (e.g., potatoes), so the distinction isn't arbitrary. But then there's all that stuff in the middle (rugs, paintings) that we don't know how to classify. This uncertainty isn't caused by lack of information; it's caused by the state of affairs in the world. Rugs really are kind of furniture and kind of not furniture. Rugs do not have clear membership status with respect to the category furniture. Because of this state of affairs, trying to come up with a rule for category membership will result in considerable disagreement and tons of troubling counter-examples.
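The instability of rule-based definitions over a vague category can be sketched the same way (the "furniture-likeness" scores below are invented for illustration - they stand in for people's graded intuitions):

```python
# Hypothetical graded membership scores, made up purely for illustration.
scores = {"sofa": 0.95, "table": 0.90, "rug": 0.50,
          "painting": 0.45, "potato": 0.05}

def furniture_rule(threshold):
    """A rule-based category: everything at or above the cutoff is 'furniture'."""
    return {item for item, s in scores.items() if s >= threshold}

# Clear cases are stable under any reasonable rule...
print("sofa" in furniture_rule(0.4), "sofa" in furniture_rule(0.6))    # True True
print("potato" in furniture_rule(0.4))                                 # False
# ...but borderline items flip in and out as the rule shifts slightly:
print("rug" in furniture_rule(0.4), "rug" in furniture_rule(0.6))      # True False
```

Any threshold you pick generates "counter-examples" from someone whose intuitive cutoff sits elsewhere; the disagreement is a product of the graded structure, not of insufficient information.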

Someone might say "Hey, furniture is just a label we've invented. The category appears vague because the label has been used imprecisely. This is our own fault for not clearly defining our terms." And from this basis this person might start trying to re-define the furniture category along more rigorous lines. I encourage you to have a go at this. Let's start with clear category members like sofas. Maybe furniture is something you sit on? What about tables? - those are pretty clearly furniture. Well, sofas and tables have legs, so maybe furniture is anything with legs? Not all sofas and tables have legs, so this can't be it. Maybe furniture is the stuff that you typically use to decorate a dwelling? Okay, but people use flowers to decorate with - are they furniture? And so on.

These candidate definitions appear to be less useful than our intuitive sense of what is and is not furniture. A similar attempt at re-definition is ongoing in the cognitive / non-cognitive debate and I see no reason that it will be any more productive than an attempt to redefine what counts as furniture.

Granted, this line of attack (trying to identify definitional properties) would also fail to discriminate between cats and dogs. The fact that we can use the genome to discriminate those categories is not a rule any of us could have figured out just by sitting around and thinking about it. But, the discriminability of cats and dogs, in general, made it possible that such a rule could be found. Most importantly, the fact that we could tell the difference between cats and dogs in the first place gave us a criterion for deciding whether the rule was any good (i.e., does discrimination on the basis of the genome agree with discrimination based on perception?). With the vague category, furniture, we don't have that existing discriminability. So, how can we decide if a given rule for furniture is any good?

The cognitive / non-cognitive distinction has all the markings of a graded category. People can't successfully distinguish between cognitive and non-cognitive, even given considerable experience. Those with the most experience in the domain continue to disagree about what constitutes cognitive vs. non-cognitive. There is less agreement about what is cognitive than there is agreement about what is furniture. Like the furniture example, attempts to define cognitive result in lots of problematic counter-examples. And, like the furniture example, the fact that we can't adequately discriminate between cognitive and non-cognitive means that we have no clear basis for deciding whether a potential rule is any good.

Some might argue that we will some day gain additional information that exposes a real, clear distinction between the cognitive and non-cognitive. In other words, the fact that this category looks graded now doesn't mean that we won't acquire some knowledge in the future that exposes it as a natural kind (like cats and dogs). Aren't there some natural kind categories - like chemical elements - whose basis for differentiation wasn't known until fairly recently? Might the cognitive / non-cognitive distinction be the same kind of thing? I would say "no".

First, the concept of natural kinds is not undisputed (see here for a quick and dirty summary of this idea). Even seemingly clear divisions like "species" can break down upon closer inspection, especially when you move from cats and dogs to varieties of bacteria. Second, the cognitive / non-cognitive distinction doesn't seem likely to meet criteria that are often suggested for being a natural kind (e.g., shared intrinsic properties, accordance with natural law). Let me draw a contrast with the chemical elements. Prior to the discovery of electrons, scientists began to tease apart chemical elements experimentally. Because elements really are different from one another, they behave differently in the lab. If I set hydrogen on fire it produces a blue flame. If I burn barium, the flame is green. In advance of knowing anything about the rules that distinguish elements, it was possible to empirically distinguish them because their properties have lawful relationships to the world. Okay. On what empirical basis should we discriminate the cognitive from the non-cognitive?

Vague categories do capture some regularities in the world. But, they defy rule-based categorisation and there will always be borderline instances that are not clear category members or non-members. This makes vague categories serviceable for everyday communication (there's no reason to eschew the word "furniture"), but poor foundations for a science. 

If the cognitive is a vague category - and I think that it must be - the best we can hope for is a rule that chucks the clearly cognitive cases (whatever those are - I don't think we could even agree on this) in one bin and all the clearly non-cognitive cases (again, whatever those are) in another. At the moment, it's not clear to me what we gain from such a rudimentary rule, although this will be the topic of an upcoming post. The usual benchmarks of a useful category are that it helps you make inferences and guide action in the face of incomplete information. Can potential definitions of cognitive vs. non-cognitive achieve these goals?

As I said at the beginning, a lot of debates hinge on how we define cognitive versus non-cognitive. These debates seem to proceed on the assumption that an answer to this problem exists. That is, that there is a real difference between the cognitive and the non-cognitive in the same way that there is a difference between cats and dogs or between hydrogen and barium. However, the past 60 or so years of psychology suggest that the cognitive is a pretty poor candidate for this type of category. The problems that plague definitions of the cognitive are the same problems that plague vague categories like furniture. There is no additional information that will some day clarify whether a rug counts as furniture. That's just not the kind of category it is.


  1. Adams and I agree that the usage of "cognitive" is a mess. And, indeed, we think this is an enabling condition of extended cognition. Lacking an explicit statement of what one means by "cognition" makes it seem more plausible that cognition could extend.

    But, part of what we are urging in discussion of the "mark of the cognitive" is explicitness about what one is claiming. So, if someone claims that cognitive processes are sometimes realized by processes spanning the brain, body, and world, then at least part of the expository burden one bears is to give the reader some idea of what one means by a "cognitive process".

    To give some hint of what is in play here, on my blog I've distinguished a whole family of related hypotheses including the following:

    The hypothesis of extended scientific cognition.
    The hypothesis of extended folk cognition.
    The hypothesis of extended autopoietic cognition.
    The hypothesis of extended behavior.

    One might also add
    The hypothesis of extended enabling conditions for cognition.
    The hypothesis of extended necessary and sufficient conditions for cognition.
    The hypothesis of the extended explanans of cognitive success.

    I don't see any consensus emerging on how to use "cognition", but one can at least hope for greater explicitness.

  2. It's not that the definition of cognition is complicated; Sabrina's suggesting that the category of cognition isn't the kind of category you and Adams seem to think it is. Worse, it's not even really doing any work; all it tells you is that a rock isn't cognitive and a person is.

    You want a definition of cognition that makes it a natural kind (reading the 'Bounds of Cognition' paper); this analysis suggests you won't ever get that.

  3. Ken,

    The point is that the various definitions of cognition all fail. My argument is that these failures don't reflect a lack of information or the immaturity of cognitive sciences as a discipline, but the nature of the cognitive/non-cognitive divide, itself (i.e., the type of category). The proliferation of hypotheses you mention supports the inherent gradedness of the category (experience leads to disagreement rather than convergence).

    Look, greater explicitness is basic good practice. But as I pointed out, this explicitness will come at a price. Any definition of the cognitive will either be plagued by troubling counter-examples (of the type you and Adams regularly provide) or will be general to the point of uselessness.

  4. Well, I'm not asking for definitions of "cognitive". (There are two or three places in our Bounds of Cognition where we state this explicitly.) Much less am I proposing that we should offer a definition of "cognitive" that covers all past and/or current uses.

    What I am proposing is that we need a *theory* of what cognitive processes are. And, I agree that there is diversity of opinion on what cognitive processes are. So, what I expect will happen is that some theories of cognitive processes will just turn out to be wrong.

    And, I do see that Sabrina is going after the idea that, while Adams and Aizawa want a theory of the cognitive that makes the cognitive a natural kind, there is none such. But, this kind of diversity of opinion is pretty common in philosophy. Go to the Stanford Encyclopedia of Philosophy and check out most any topic, such as causation, concepts, realization, etc., etc. There are going to be many such things where there is a diversity of opinion. This troubles me not at all.

    But, how about this. How does "affordance" shake out? You won't get as much diversity here as with "cognitive" since it was specifically introduced as a technical term, but as the English professors often note, authorial intent often gives way to the uses of the reader. Given the origins of "affordance" there is a surprising amount of variance.

    But, also note that this post is more or less directed toward me, that I have the burden of explaining what extended cognition is. For my part, if it turns out that cognition is not a natural kind, so much the worse for the hypothesis of extended cognition. What I care about is making sure the psychologists can go about their business pretty much as they have for the last 40 years or so without being troubled by ideas of extended cognition.

  5. Here is another way to approach the point of my last paragraph.

    Argumentative rule of thumb: When critics, such as Adams and Aizawa, use a distinction, challenge the distinction.

    This is a pretty good rule of thumb. It has been used (by several folks) against our invocation of the coupling-constitution distinction and (by Andy Clark) against the cognitive-noncognitive distinction.

    But, Adams and Aizawa are one step ahead of this. We have relied on distinctions that the extended cognition folks have relied upon. So, if you have a problem with those distinctions, don't lay that at our doorstep. Take it to the EC people. Moreover, taking them down a notch would be more of an accomplishment, since most of them are bigger fish than Fred or me.

  6. The post is not, in fact, directed at you. I think the EC folks have the same problem (I mention Clark's construal of cognition in the introduction, for instance).

    I'm not criticising the distinction you and Adams draw because (as I explicitly say) that is not the purpose of the post. I argue that these multiple attempts to draw an appropriate distinction all fail because of the type of category cognitive processes form. Criticising the distinction is not any use for graded categories. I address this directly in the post by going through attempts to redefine furniture. It's never going to work. Critiquing the definitions themselves is a waste of time. The more informative argument is to acknowledge that the nature of the category prevents adequate definition (or, if you prefer, theory).

    To have a theory of the cognitive - to be able to say what counts as a cognitive process and what does not count (which you say you want) - is to establish criteria for cognition. This is the function of a definition, so you ARE asking for definitions of the cognitive.

    The fact that differences of opinion are common in philosophy is absolutely irrelevant to this discussion. It's fine for philosophy to argue about the cognitive, but such an inherently vague concept is a poor foundation for a science.

  7. Also Ken, how do you justify the idea that cognitive processes form a natural kind?

    I'll happily ask the same of an EC advocate or anyone else who claims to study cognition, so this isn't being laid at your "doorstep". The point of the post was to spark debate about this topic, not whether or not there is diversity of opinion about the specifics of what is labelled a cognitive process. That is a separate issue entirely.

  8. I quite like the form of the attack here; take what we know about categories, and how we use them, and use that to examine the utility of the category 'cognitive processes'.

    One potential problem would be that if you were relying on a particular definition of categories that depended on the definition of cognition, this would fall apart. But that doesn't seem to be the case, so that's useful :)

    I still think we need to get a positive result from this, however. If we don't talk about cognition, what do we talk about? What is it we study?

    One answer is to keep things specific, and not try to categorise them. This helps avoid some obvious pitfalls (calling a bunch of different things 'similarity' when it's not at all clear it's the same process in different tasks, for instance) but you run the risk of just chasing phenomena: we need some general theoretical principles or else I really am just studying finger wiggling and that seems a bit pointless.

    So how do we keep this scientific? What can I say I study that still allows room for theory, and not just phenomena?

  9. I think you're right that there needs to be a positive contribution. Yes, our work does need to be grounded in theory, but I'm not sure I see the benefit of labeling my research as cognitive. I'll get into this more directly in an up-coming post.

  10. I'm working through Chemero's book again getting some posts ready, and came across a discussion of Ken's stuff. In one of the footnotes Tony says there is no such thing as a 'mark of the cognitive'. Annoyingly he then seems to not say anything to back this up :)

  11. Gary posted this the other day...about halfway down I saw this:

    "This is why complementarity-based theorists of distributed and extended cognition are in turn sometimes criticised by more extreme anti-cognitivists for “not proposing that the very idea of cognition is itself a mistake,” and because we do “not renounce cognitive science” (Button 2008, pp. 88–89; compare Malafouris 2004, Dreyfus 2007)."

    The references:
    Button, G. (2008). Against ‘distributed cognition’. Theory, Culture & Society, 25(2), 87–104.

    Dreyfus, H. (2007). Why Heideggerian AI failed and why fixing it would require making it more Heideggerian. Philosophical Psychology, 20(2), 247–268.

    Malafouris, L. (2004). The cognitive basis of material engagement: where brain, body and culture conflate. In E. DeMarrais, C. Gosden, & C. Renfrew (Eds.), Rethinking materiality: the engagement of mind with the material world (pp. 53–62). Cambridge: McDonald Institute for Archaeological Research.

    No idea if any of this is any good but the quote about cognition being a mistake stuck out.