Sunday, 5 September 2010

Assume the Cow is a Sphere

One of my favourite jokes goes like this:

A physicist, an engineer, and a psychologist are called in as consultants to a dairy farm whose production has been below par. Each is given time to inspect the details of the operation before making a report.

The first to be called is the engineer, who states: “The size of the stalls for the cattle should be decreased. Efficiency could be improved if the cows were more closely packed, with a net allotment of 275 cubic feet per cow. Also, the diameter of the milking tubes should be increased by 4 percent to allow for a greater average flow rate during the milking periods”.

The next to report is the psychologist, who proposes: “The inside of the barn should be painted green. This is a more mellow colour than brown and should help induce greater milk flow. Also, more trees should be planted in the fields to add diversity to the scenery for the cattle during grazing, to reduce boredom”.

Finally, the physicist is called upon. He asks for a blackboard and then draws a circle. He begins: “Assume the cow is a sphere....”.

I've been arguing (in a good way) recently with Ken Aizawa, who co-authored a book arguing against embodied/extended cognition called The Bounds of Cognition. I've yet to read the book, but the discussions have already got me thinking about posts on tasks and work; more on those later.

Cows don't look like this
The first stumbling block in the discussion, though, is an old one; it's the spherical cow problem, and philosophers do it just as much as physicists. For philosophers, the problem is the use of over-simplified examples as analogies designed to explore some hypothetical argument, to see where it takes them. These analogies are generally designed to contain just enough information so that you can clearly make your point - however, the reason I'm a scientist is that I'm always worried about whether you're actually talking about anything that exists.

Take this example: Ken and I were discussing whether something that isn't brain can be considered a constitutive part of cognition just because it's used during a cognitive act. This, as far as I can gather, is Ken's attempt at a knock-out punch for extended cognition: he refers to it as the coupling-constitution (C-C) fallacy, the fallacy of assuming that just because something is coupled to a cognitive system, it must be a constitutive part of that system. He's right that this is an error you can make, although I don't agree that it's intrinsically a fallacy, just something to watch out for.

I said
Any task (cognitive or otherwise) entails that some work happens: things have to occur to get me from the beginning to the end of the task. Step 1 is therefore do a task analysis, and identify what is required for an organism to do whatever it is you are trying to explain. Step 2 is to identify what is performing that work. If any of the necessary work is being done by something other than the brain, then it's surely game on as far as embodied cognition is concerned.
His example was then
So, here is where the rub comes. Take this case. Suppose you have the task of computing the first thousand prime numbers and you do this by pushing the return key on a computer that runs a program that prints out the first thousand prime numbers with a delay of 100 years. So, I would say that the computer uses non-cognitive processing to help you accomplish this task. The cognitive processing ends at about the time you finish pushing the return key. Doesn't it stop by the time you die, even though the computer doesn't complete its run until after you die?
The question is, is the computer a constitutive part of your cognitive system? Is this extended, embodied cognition?
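
Just to make the example concrete: the program in question is a few lines in any language. Here's a minimal sketch of my own (in Python, and without the 100-year delay), not anything from Ken:

```python
def first_n_primes(n):
    """Generate the first n primes by simple trial division."""
    primes = []
    candidate = 2
    while len(primes) < n:
        # candidate is prime if no prime up to sqrt(candidate) divides it
        if all(candidate % p != 0 for p in primes if p * p <= candidate):
            primes.append(candidate)
        candidate += 1
    return primes

# The human's entire contribution: one key press triggers this call
print(first_n_primes(1000)[-1])  # the thousandth prime
```

All of the number-crunching happens after the key press, which is exactly the feature Ken's example trades on.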

It's a red herring, though - we're now in spherical cow territory. It would be easy to argue back and forth until we had refined this example with counter-examples and clearly established where each version led us. But this would have told us nothing about embodied cognition. This simply isn't the kind of task any potentially embodied cognitive agent usually gets up to - embodied solutions will be smart, I think, which suggests that they will be about problems reliably faced by the agent (we're talking evolutionary time here).

This kind of move is entirely standard in philosophy, and I had forgotten about it until recently. My go-to examples for embodied cognition are always empirical ones - for instance, the perception-action solution to catching a fly ball, which bypasses the need to compute calculus by simply requiring you to move so as to make a specific optical pattern obtain. The advantage is that my example isn't an analogy - it's an actual task that people do, with empirical evidence to help disentangle the various options.
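
The fly-ball solution can even be checked with a toy calculation. One standard formalisation is Optical Acceleration Cancellation: for a drag-free fly ball, a fielder standing at the landing point sees the tangent of the ball's elevation angle grow at a constant rate, so a fielder who moves to keep that rate constant ends up in the right place. A quick sketch of mine (made-up launch numbers, no air resistance) that just verifies the geometric fact:

```python
# A fielder standing at the landing point sees tan(elevation) rise at a
# constant rate; the "optical acceleration" is zero.
g = 9.8                        # gravity, m/s^2
vx, vz = 10.0, 20.0            # made-up launch velocity components, m/s
T = 2 * vz / g                 # time of flight for a drag-free trajectory
X = vx * T                     # landing point: where the fielder stands

def tan_elevation(t):
    height = vz * t - 0.5 * g * t * t
    range_to_ball = X - vx * t           # horizontal distance, fielder to ball
    return height / range_to_ball

ts = [T * i / 100 for i in range(1, 100)]     # skip launch and catch instants
tans = [tan_elevation(t) for t in ts]
# Second differences of a linear sequence are zero (up to float error)
accel = [tans[i-1] - 2 * tans[i] + tans[i+1] for i in range(1, len(tans) - 1)]
print(max(abs(a) for a in accel))             # ~0: optical acceleration cancelled
```

Nulling that one optical variable substitutes for doing the calculus, which is the sense in which the solution is smart.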

Philosophers spend a lot of their time arguing about these examples, and it's always been a real frustration for me because they aren't about anything real, and there's no clear way to address them empirically, with data. The goal is noble: they are supposed to be a way of clearly thinking about just the meat of the issue at hand. But I am convinced they are intrinsically flawed, because there are generally empirical ways to answer these questions which make for a much stronger argument. Aizawa's C-C fallacy is a great example - he has specific examples of how the error might occur, but it's not always clear when they might ever actually occur, reducing it from a fallacy of thought to a mere risk of doing business incorrectly. (There's more to it; he's also quite rightly highlighting how some people get very over-excited about non-linear dynamical systems and walk right into the error.)

It's time for philosophy to start reading and engaging with the primary literature on these types of tasks (the outfielder problem, steering and locomotion, etc). One problem is that the scientist authors often don't describe their own work in terms of embodied cognition, even though it often is exactly that. Perhaps it's time to make a science reading list for philosophers, and see if that helps? It would be nice if we could come up with a solution to the problem - maybe a series of posts on some of these topics? The coordinated rhythmic movement posts I'm working on will certainly be one of these; Kelso's coordination dynamics approach to this (with order variables, etc) is a staple of the philosophical discussions, and one of the points I want to get to is that it is entirely incorrect, but that there is a better way to talk about that task (spoiler alert: my empirical work will receive favourable reviews :)

Cows are not spheres and arguments that assume this will never convince me. The science literature does have ways of talking about things like embodied cognition that aren't simply analogies, and it's time for philosophy to get in on these. Picking a single empirical result and running with it is also not a solution, though; you must engage with the literature in context.


  1. Andrew,

    I read your discussion over at Ken's blog and have been reading some of your recent posts and I like your angle on these issues. I think you are entirely right about how many computationalists just assume that a particular task (e.g. "perceiving spatial layout") must be done in this horribly nightmarish way. This is largely from an over-reliance on artificial thought experiments and ignorance of modern biological science. I think that is the real beauty of Gibson's work though. He was perhaps the first person to make perception of the environment seem like a natural process and not something that requires seemingly magical powers (generating "sensations" or creating 3D phenomenal models).

    Also, in regards to current embodied/embedded philosophy of mind, don't give up hope on us philosophers! Some of us actually read Gibson and try to keep up with empirical psychology.

    One last thing...I am going to be applying to Indiana's philosophy/cognitive science joint-track PhD program this December. Any advice?

  2. I'm glad you're reading Gibson and liking it; it's amazing how sensible it all sounds :)

    Any thoughts on how what I'm saying sounds to philosophy-trained ears would be appreciated. I've done a lot of philosophy of science/mind and known a lot of philosophers over the years, so I'm not shooting entirely in the dark, but I've always been aware of a certain level of...incommensurability, and I've had lots of trouble over the years finding ways to overcome that.

    Re IU: yes, muchos advice! I thoroughly recommend it, it was a brilliant intellectual atmosphere. Send me an email and tell me what your plan is (who you want to work with, what you plan to do, etc).

  3. Ok, how about the use of gesture? There is real psychological research on this, right? When a person uses a gesture in the course of performing some task, is the gesturing cognitive processing or not? Asking this does not seem to me to be tantamount to assuming a spherical cow.

    My colleague Fred Adams discussed the issue with Susan Goldin-Meadow (who works on gesture) and she thought that the question was reasonable enough. Moreover, Fred tells me that she thinks that gesturing is not thinking.

    And the coupling-constitution distinction is perfectly well in play in the gesture case. Does gesturing merely causally influence cognitive processing or is it constitutive of cognitive processing?

  4. But, look, it begs the question against me to suppose that if I only knew more science I would agree with you. Maybe I would, but maybe I wouldn't. (Scientists know plenty of science, yet still disagree.) Moreover, I don't propose that, if you knew more philosophy, or more Fodor, or more Chomsky, you would see things my way. You might, but you might not.

    It's probably a mug's game for me to try to show I know enough science for the job. You can always say, "Not good enough".

  5. a) It's not that I think you'll agree with me if you do more science; it's that I think you'll at least be arguing about something that actually matters. My spherical cow argument is that the thing philosophy is talking about simply might not reflect what actually is; argue about things people actually do (steering locomotion, etc), not made-up things like computing prime numbers.

    And at one level, it may be a mug's game for you (a philosopher with things of your own to do) to try and cover everything; I can't cover it all either. But this is why various collaborations and dialogues are a good idea - at least we might get a common set of examples to argue about. Ask me annoying questions, for instance: I may know something useful! :)

    b) Gesture is an interesting case, absolutely, and much more the kind of thing I had in mind because people actually do it.

    Do you happen to know on what grounds Goldin-Meadow says it's not cognition? I vaguely recall seeing her give a talk once but I'm not familiar with the details.

    I do know gesture is fairly fundamental to communication (it's at least very tightly coupled if not constitutive). I also know you can train 10-month-olds to use gesture to show the kind of language competence you wouldn't expect for another 6 months (work from Linda Smith; gestures are simpler motor acts than speech). Actually, as I type this I bet Linda's work on this is in the context of Goldin-Meadow's work; it may be a good place to have a look (another useful result from collaboration: G-M is not the only person with opinions and data about gesture and she might be wrong - worth checking the alternatives).

    Anyone reading know about gesture?

  6. But, Andrew, when I blog and generally when I write papers, I am, in the first place, arguing with philosophers about what matters to philosophers. So, I'm writing for them. When I give a talk for cognitive psychologists, I do something slightly different.

    I also believe in the value of collaboration and that psychologists have valuable information. Indeed, that information I have on neuromuscular blockade which I shared with you, which I think is very cool, is something I was told by a neuroscientist, Gary Dunbar at Central Michigan University, more than 15 years ago. It rattled around in the back of my head for years and years before I saw how it applies to some philosophical problem I was working on.

    Another neuroscientist friend of mine, David Coppola, told me about computed versus non-computed maps, which is part of a section of a forthcoming paper "Computation in Cognitive Science: It is not all about Turing-equivalent computation." Invited contribution to a special issue of Studies in the History and Philosophy of Science on computation in cognitive science. Edited by Mark Sprevak. Coppola also reviewed the text for me.

    I also have a paper co-authored with a mathematical logician, Mark Schlatter here at Centenary, on McCulloch and Pitts "A Logical Calculus".

    I don't think I need to be sold on the value of collaboration. Quite the contrary, I am always looking for scientists to answer questions. The literature is vast.

  7. So, here is a question about ecological psychology that has been rattling around in my brain. Of the Ames room, Gibson suggests that this is a kind of artificial byproduct of non-natural viewing conditions. And, he in fact, it seems to me, suggests that all visual illusions are like this.

    Ok. So, what of auditory illusions? These do not seem to be the product of restricted listening conditions. You can download some of these illusions here:

    Ken, on your first point: I guess what I'm worried about is what philosophers get up to when talking to other philosophers. Frankly I think that talking about these abstract toy examples makes philosophy run the risk of irrelevance; the field needs to remain connected to the rest of cognitive science, or else we're going to end up talking about such different things that no-one will listen. As it is, cognitive science is full of little ghettos, although I think they're fading a little as genuinely inter-disciplinary training produces people with specialisation + broad exposure to other approaches.

    My bias (and it is a bias) is to think that science is less at risk because, when done right, we are talking about things as they occur - we're nailed a little more firmly to the data. Philosophy, by being less attached to data, has always risked becoming irrelevant, and I think a lot of the kinds of examples people talk about in the embodied cognition literature especially are of this type.

    These specific examples worry me especially because a) I'm more aware of their problems and b) I'd quite like to see contributions from philosophy to what I see as the real issues about embodiment. I am, in general, a fan of the sort of intellectual rigour philosophers bring to the table and I want us to at least be tilting at the same windmills, regardless of exactly how we think they work.

    Also, I'm sorry if this is coming off as a personal attack; that's not my intention. I think some of my earlier comments looked a little like they were aimed at you in particular, sorry.

  9. OK, illusions:
    Of the Ames room, Gibson suggests that this is a kind of artificial byproduct of non-natural viewing conditions. And, he in fact, it seems to me, suggests that all visual illusions are like this.

    Ok. So, what of auditory illusions? These do not seem to be the product of restricted listening conditions.

    First, let me point you to this blog post where I think out loud about this in some more detail.

    The Gibsonian analysis always rests on information. To call something an illusion suggests a failure of correspondence between what is happening in the world and what is happening in perceptual experience. For Gibson, however, correspondence isn't the game. Visual illusions aren't so much errors as specific situations in which the visual information leads to an inconsistent experience.

    The Ames room is a good example: from one specific location, the static perspective structure leads to the true visual experience of a square room. The fact that the room isn't actually square is beside the point: you never ever get to 'look behind the curtain', you only ever have information. The Ames room is revealed as an "illusion" by moving from that one spot, which provides you with a new, equally correct visual experience based in new information. Gibson treats this more temporally extended experience as reflecting the more fundamental nature of perception.

    Auditory illusions are then amenable to the same analysis: what information is in the signal and what happens to it as we explore?

    Gibson was grumpy about illusions because he was rejecting the idea that perception is for gaining access to some 'veridical' knowledge of the world. Perception is about information, and information is about affordances, which means that information is task-specific, organism relative, etc. A simple example: when reaching for my coffee cup I'm not perceiving metric distance per se, I'm perceiving distance with respect to my reaching capabilities.

    There is the world; there is information that specifies the world; perception occurs on the basis of information, and perceptual experience is the true experience of perceiving that information. If you find an 'error', look to the information.

  10. Ok. I read the post and it helps me articulate my concern. You write

    "All visual illusions depend on a trick to work. The Ames Room, for example, is a trick of static perspective depth information that only works from one location; moving from there breaks the illusion, although it is compelling when you're in the right place. "

    Now, do you (or Gibson) want to go even further and claim that all illusions depend on a trick? If so, then what is the trick in the auditory illusions? (It's apparently not a constraint on movement, right?)

  11. Hi, Andrew, regarding thought experiments, it seems to me that you are not merely challenging me, but you are challenging the whole philosophical enterprise. But, philosophers are a hugely mixed bag. Some seem to positively abhor scientific evidence, while others try pretty hard to keep themselves informed and make scientifically informed contributions. I happen to be in the camp that throws in with trying to be scientifically informed.

  12. I guess I am challenging philosophy in general. I really do think spherical cows are a problem, actually, and I've thought this for a long time. I guess I was just apologising in case I was being a dick :)

    All illusions do depend on a trick. They depend on the source providing the observer with information that tells you one thing at one point in space and time and something else at another, in a way that non-illusory things don't do. Gibson gets specification from optics because of the natural laws underpinning the creation of information about the world; things we consider illusions are local breakings of those ecological laws.

    What I mean by 'laws' comes from Turvey et al's reply to Fodor & Pylyshyn, which I've discussed and referenced here. If you're interested but can't access the paper let me know.

  13. But spherical cows are such fun to think about! They must roll down the fields and fart their way up the hills - oh and isn't global warming from methane very suggestive that cows are, in fact, spherical on some level?

    You are right, this does happen, and yet, I'm glad we have Judith Jarvis Thomson's violinist to help us with one of the hardest choices.

    The joy of imagining in strange and careful detail is one of the things I loved the most about doing philosophy; little wonder I spent time doing it, and can it just be that pleasure that makes me so sure it's sometimes done well? Some people who still do philosophy properly might be keen to think about good vs. bad thought-science. Me, I'd rather consider the insemination of the spherical cows.

  14. I don't mind the use of examples, per se, but they need to be apt. Imagine that someone is arguing that human beings are remarkably well adapted to life on earth. A counter-example might be that humans can't live (in the ancestral sense) on the majority of the planet (e.g., in the ocean or on the tops of mountains). But this counter-example is not apt. It misses the fact that humans are only well-suited to live on earth within certain parameters (e.g., dry land, fresh water). In other words, our suitability to life on earth has a specific scope. Examples that exceed that scope are not informative about the problem. I think this is the critical issue with some thought experiments. If those examples don't reflect what we know from psychological research, then they risk exceeding the scope of the problem and being irrelevant.

  15. Ok. So, even auditory illusions involve some trick. So, what is the trick in this illusion?

  16. In truth, I think I have been too agreeable in changing examples. I could, of course, tone down some of the more extreme features of the computer use example to make the main point.

    Suppose a person uses a computer to perform a task that she could not solve otherwise. It looks like hitting the button on the computer is necessary and sufficient for solving the task. So, it looks like, by Andrew's analysis, we have extended cognition.

    Now, you can say you don't care about this kind of case, but there is something going on in this case and a scientist might want to know about it. How should a scientist theorize about this? Should a scientist just ignore this? Just throw up her hands?

  17. I'm not 100% sure what's going on in the signal. There are two channels, each playing the same two-word sequence, at some temporal offset? Is that right? And different people hear different things?

    Is it ironic that I only hear that mp3 saying "no way" over and over? :)

    At a first pass, reading that website: the trick is at least partly the fact that the sound is designed to be ambiguous (no unique identity). This ambiguous signal then encounters a trained auditory system and the ambiguity is resolved in various ways, constrained by individual differences in the system's training.

    It's a trick because sounds generally aren't ambiguous like this; sound, like light, is lawfully structured by its interactions with the world (say a vocal tract) and this kind of ambiguity is hard to come by in a real system lawfully producing sounds. Handing this signal to an auditory system trained with real sound might reveal a boundary condition or two but, like visual illusions, it's not clear what it tells you about speech perception in general.

    This is very much a first pass, though - I need to think about this some more. I'm not an expert in audition and I'm not clear on the details of how these were created. If anyone else wants to weigh in, please do!! But the analysis must always be rooted in the information: what spatial-temporal information is actually in the signal?

    I need to think some more, too, about your second post. Sabrina and I were throwing ideas around this morning but I need to firm them up.

  18. I think it's necessary to be specific about what the task is. I could use a computer to solve an equation or I could solve it myself. The outcome is the same in either case, but the task is different. And, the computer is obviously solving the equation differently than I would, so the processes are also divergent.

    Now, there are some things a computer can solve that I can't (like computing primes). When I press the button to execute the program, I am involved in some kind of stimulus-response relationship. I don't care by what process the computer arrives at the result. I have learned an association between button pressing and getting a desired outcome. To my mind, this is like training pigeons to peck at the blue light to get food. My own cognitive state in this task is pretty irrelevant. I could know a lot about math or nothing at all. I could be pressing the button because someone asked me to or because I wanted to press a different button but made a mistake. I could be drunk, sober, 2 years old or 100. I could get my cat to push the button. The task is accomplished equally well in all these cases. Furthermore, after my initial button pressing, I am irrelevant to this process. If the computer takes 10 minutes to provide a solution that's fine. I can do something else and the program keeps ticking away.

    This seems very different from using a pencil and paper to work out an algebra problem (something I certainly put under the "embodied cognition" banner). This task cannot be explained in terms of stimulus-response. And, it cannot be localised in either my head or the environment. The task only exists as a system. Furthermore, the success of this venture depends on my particular cognitive state - I must be at least pretty sober, I have to know something about math, I have to be sufficiently awake and free from other distractions. And, success depends on the environment - the pencil has to have lead, the paper needs to be large enough, it must be dry, there must be a surface for me to bear down on. Finally, if I walk away from this system, it ceases to function, and if someone takes my pencil and paper away the system ceases to function.

    A single outcome (getting the answer to an algebra problem) can be arrived at using multiple tasks (a computer, working it out by hand). However, these tasks seem to be fundamentally different. I think it's perfectly reasonable to want an explanation for what's going on in the computer example, but it's not the same type of task. Now, whether this rules embodiment out completely, I don't know. I think that essentially all of our cognition is grounded in embodiment to some extent. I find the notion of something being purely cognitive puzzling.

  19. "It's a trick because sounds generally aren't ambiguous like this"

    But, I thought that the Gibsonian line was that the stimulus isn't ambiguous per se. It is only ambiguous *given artificial restrictions on movement* or something like that.

    I'm drawing this from Gibson's discussion pp. 166-8 of Gibson, (1979), regarding the trapezoids. There he seems to me to be saying that the reason the various trapezoids seem equivalent, or that there is some ambiguity, is that one is restricted to "peephole" vision. (Cf. especially, the first paragraph of p. 168).

    It seems to me that once one concedes that there is ambiguity in the stimulus and that there are individual differences, one is opening the door to the kinds of "presuppositions" that Gibson was against admitting.

  20. I also have another example of amodal completion at my blog. (It has video, which was kind of fun.) For my next trick ... Claymation ...

  21. Hi, Sabethg,

    Here is what, from Andrew, I am taking aim at:

    "Step 1 is therefore do a task analysis, and identify what is required for an organism to do whatever it is you are trying to explain. Step 2 is to identify what is performing that work. If any of the necessary work is being done by something other than the brain, then it's surely game on as far as embodied cognition is concerned."

    I'm not sure of the way you guys want to work out the details, but I think the "logic of the case" is relatively straightforward. Maybe you can tell me how to do it.

    But, part of your reply here seems to be to pile on some additional conditions. You note, for example, that after the initial button pressing, you are irrelevant to the task. Agreed. But, Andrew did not have that as a condition earlier, right? You also seem to want to say something like, no sub-part of the task can be explained in terms of stimulus-response. But, Andrew did not have anything like that in his first pass, right?

    If you (or Andrew) want to abandon or refine what Andrew wrote initially, that's fine. Just let me know. After all, who dots all the i's and crosses all the t's the first time?

  22. But, I thought that the Gibsonian line was that the stimulus isn't ambiguous per se. It is only ambiguous *given artificial restrictions on movement* or something like that.

    You can make an ambiguous stimulus in a lab (a pattern of sound that doesn't specify anything). There's clearly an entire cottage industry around making stimuli just like this for research purposes. Gibson's ecological approach does, indeed, have problems dealing with evil psychology labs :)

    The stimulus can only specify if it is the result of lawful interactions with a regular world. These auditory illusions are sound but they do not specify anything - they contain no structure lawfully related to anything in particular.

    I'll leave Sabrina to defend her take.

  23. Hi Ken,

    There seems to be some confusion here. I'm weighing in on this particular post based on the computer example you provided and the general comments about the suitability of various examples or thought experiments.

    I haven't really been following what you and Andrew have been discussing elsewhere and Andrew and I don't necessarily agree about these things anyway. I was interested in thinking about the variety of ways in which these tasks differ to see if that clarified why the one example seemed less informative about embodiment.

    I'll have a read through the arguments you refer to (re: task analysis) and think about how/if that fits in with what I'm saying. In general, it's probably best to assume that Andrew and I are thinking about this stuff independently :)

  24. "You can make an ambiguous stimulus in a lab (a pattern of sound that doesn't specify anything)."

    A) Does Gibson say this anywhere?

    B) Ok. So, what about the ambiguity of the simple amodal completion example I posted at my blog? (The occluded thing could be either circular or pac-mannish.) Is that a "lab made" ambiguity?

  25. Sabethg,
    I hope you do disagree with Andrew and conclude that I have a counterexample to his task analysis line! =)

  26. a) Gibson certainly knew it - he made some himself (mostly by accident as he tried not to - it's hard not to make a stimulus ambiguous, actually, when you try to control for things). His early slant perception work with texture gradients, for example, produced ambiguous information and people responded accordingly (ie with high variability).

    But a note: it's been 30 years since Gibson '79 and that's a lot of time for the details of lab psychology to have moved on from what Gibson was personally capable of building.

    b) I don't think there's any ambiguity in your amodal completion task, as I think I commented there. The visual information is perfectly coherent, but you can come to see it's incomplete as you explore the object. Not having access to everything right there and then is not the same as an illusion designed to never even offer the possibility of more complete/less ambiguous knowledge.

  27. So, what this seems to come down to is rejecting the cases because they are artificial. The part about there being a "trick" does not seem to help.

    re b): I see your point. I should have thought of this. Thanks.

  28. Well, identifying the trick/problem with the information is the principled basis for rejecting the case as artificial. But yes.

  29. The cow joke is a classic and a good humor litmus test.

    Never thought of applying it to philosophers but it works.

  30. A question re your computer performing a task. I am interested to understand how this is clarifying your arguments. What part of this process are you respectively describing as cognition? Surely Andrew isn't claiming that the work of the computer is somehow extended cognition? Surely neither is Ken claiming that cognition ends with the decision to press the key. Is that Andrew's point? I for one know (subjectively) that on many occasions I have had to continue to consider the impact on my ongoing cognitive process when a computer hasn't responded in the way I expected (resulting in hitting the key repeatedly harder). Unlike Andrew I think great truths can be unearthed through flights of fancy, but like him I think that at some point they need to be testable, either through ingenious experimental design or through examination of existing evidence. It seems to me that an interesting discussion would be to fully understand where you respectively think cognition starts and finishes, even in the computer example. That surely should be testable. I suspect it might also make faster headway than the (fascinating) argument about whether an illusion is a trick on correct perception or a computation error in brain cognition (if I got that summary right it's a miracle).

  31. Gentlepeople,

    as much as I enjoy these discussions and exchanges of "what if..." arguments, perhaps a meta-theoretical timeout is necessary.

    1. Ken, what would be the empirical evidence that would convince you Andrew is right?
    2. Andrew, what would be the empirical evidence that would convince you Ken is right?

    Let's do that experiment in a context of strong inference!

    If no empirical evidence exists that can distinguish between the verisimilitude of your claims, then they are:

    1. Similar - unification awaits, science advances. Waves, particles, you know the drill.

    2. Trivial - much more work is needed in order to meet the conditions of strong inference and force head-to-head collisions of theoretical predictions. Like cosmic strings and branes: beautiful mathematical metaphors, but not yet part of empirical science.

    I believe it is likely the second option best describes the case, except perhaps that string theory is a bad example: it is a formal, axiomatic deductive theory, and it does make predictions, but they are not testable with currently available measurement devices.

    Ecological psychology has a great advantage here: many explicit postulates (e.g. of direct perception, intentional dynamics, ecological physics) have been formulated, and the field progressed after Gibson. Most of them can be found in these works:

    Kugler, P. N., Shaw, R. E., Vicente, K. J., & Kinsella-Shaw, J. (1990). Inquiry into intentional systems I: Issues in ecological physics. Psychological Research, 52(2), 98–121. doi:10.1007/BF00877518

    Shaw, R., Flascher, O., & Mace, W. (1996). Dimensions of event perception. In Handbook of perception and action (Vol. 1, pp. 345–395). New York: Academic Press.

    Turvey, M. T., & Shaw, R. E. (1999). Ecological foundations of cognition: I. Symmetry and specificity of animal-environment systems. Journal of Consciousness Studies, 6(11-12), 95–110.

    Shaw, R. E., & Turvey, M. T. (1999). Ecological foundations of cognition: II. Degrees of freedom and conserved quantities in animal-environment systems. Journal of Consciousness Studies, 6(11-12), 111–123.

    Further development didn't stop there, e.g. see Chemero's 2009 book.

    I have yet to see a formal definition of internal representation, or of cognition for that matter. Cognitivists should make a head start by *formalising* what is and what is not cognition, in what kind of system we may expect to observe cognitive phenomena, and what the relevant levels of observation of that system are. Within this formalism, theories can compete on the accuracy and precision with which they describe the phenomena deemed relevant. This would immediately solve your prime number computing example.

    It may also solve the following: Is this droplet a cognitive agent? Are cognitive processes and computations going on inside its central processor? After all, it can find the shortest route through a maze!

    About gestures + cognition: that's not nearly radical enough, imo. The dynamics of the angular velocity of finger movements predict the emergence of insight in problem solving: see Stephen, Dixon, & Isenhower (2009).

    1. In addition, a question for Ken: in what way is the coupling-constitution fallacy different from the "effect = structure" fallacy, that is, affirming the consequent? The latter applies to just about all of empirical social science.

  32. Interesting discussion (though I couldn't get through all the comments). It may be true that philosophers should learn more of the empirical work, even though I think they are often underestimated on this. But there is a more general point. Empirical data is necessary for solving these problems, but it alone is not sufficient. That's pretty much a truism by this point, I'd have thought. The trick is seeing where exactly the data must (not just happens to) fall short of settling the dispute, and knowing what alternative is appropriate in that case. You might be interested in my paper "From cognition's location to the epistemology of its nature". It discusses one of the real cases that Andrew thinks is worthwhile, and shows why this and any data from it could never settle the dispute between extended and embedded cognition. Even the most careful experimentation can at best favor one causal hypothesis over another. To make the further leap to a constitutive claim, much more is needed.