Friday, 23 September 2011

Embodied solutions to neural delays: Information and Network Motifs

One of the bugbears of direct perception is the fact of neural delays. The transmission of signals through the nervous system takes time, and this means that there is a lag between something happening (at, say, the retina) and that event having consequences in cortex, let alone behaviour. In control theory terms, delays in a system can lead to instability in that system's behaviour, because you are forced to make corrections based on out-of-date information; those corrections are then themselves incorrect and must in turn be corrected.

It's typically suggested in psychology that these delays are compensated for via computational prediction; the nervous system 'perceives the present' by taking the lagged input and using it as the basis for a guess about what's going on now (e.g. Changizi et al, 2008, plus see this post from a while back). This is a problematic idea: if the perceptual control of action is based on a guess that compensates for a variable time lag, then the stability issue remains, not to mention the consequences of mis-predicting the future. Regardless, it's not really an option for a theory of direct perception, so I want to discuss a couple of alternatives.

Prediction vs. Prospective Control
Prediction is a risky business. Not only is it computationally intensive and time-consuming in and of itself; errors can also accumulate readily throughout the process, leading to potentially catastrophic failures of the system. The perception-action system is particularly vulnerable to this, as it typically operates in a 'just enough, and just in time' fashion. Recent empirical work has begun investigating the possibility of prospective control. This is a more robust control strategy that entails coupling yourself to currently available information and controlling action so as to achieve a future goal. Of course, it requires the availability of useful information.

Montagne et al (1999) tested a way to identify which strategy is being used in an interception task. Intercepting a moving object means getting yourself to the right place at the right time. Montagne et al created a simple interception task in which the object to be intercepted started in one of three locations and moved smoothly to a central location. They also varied the initial location of the person's hand, such that it was either already at the final location of the target or off to one side. If people were predicting, they should simply move their hand to the final location; if they were prospectively coupled to information about the target, they should exhibit movement reversals (the geometry of the setup meant that the hypothesised information varied over time for certain combinations of initial hand and target location, so that using it would make you overshoot and then have to correct). They then went looking for (and found) the movement-reversal signature, confirming prospective control (see Figure 1).

Figure 1. Movement reversals in interception (Montagne et al, 1999)
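
To make the behavioural signature concrete, here is a minimal one-dimensional sketch of the two strategies; it is not Montagne et al's task or model, and the gains, distances and durations are invented purely for illustration. A controller aimed at the target's (assumed known) final position never needs to move when the hand starts at the interception point, whereas a controller coupled to the target's current position first chases the target away from that point and then reverses - the movement-reversal signature.

```python
# A hedged sketch contrasting 'predictive' and 'prospective' control in a 1D
# interception task. The target drifts in from one side toward a central
# interception point; the hand starts already at that point.

import numpy as np

dt, T = 0.01, 1.0                               # time step and trial duration (s)
t = np.arange(0.0, T + dt, dt)

x_final = 0.0                                   # central interception point
x_target = np.linspace(-0.3, x_final, t.size)   # target drifts in from the left (m)

def predictive(hand0, gain=8.0):
    """Move toward the predicted final target position."""
    hand = [hand0]
    for _ in t[1:]:
        hand.append(hand[-1] + gain * (x_final - hand[-1]) * dt)
    return np.array(hand)

def prospective(hand0, gain=8.0):
    """Move toward wherever the target is right now (no prediction)."""
    hand = [hand0]
    for k in range(1, t.size):
        hand.append(hand[-1] + gain * (x_target[k] - hand[-1]) * dt)
    return np.array(hand)

hand_pred = predictive(hand0=x_final)           # hand already at the interception point
hand_pros = prospective(hand0=x_final)

# The predictive hand never moves; the prospective hand dips toward the
# approaching target and then reverses back to the interception point.
print("max excursion, predictive :", round(float(hand_pred.min()), 3))
print("max excursion, prospective:", round(float(hand_pros.min()), 3))
```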

Neural architecture and zero-phase lags
Prospective control is showing up in all kinds of perception-action tasks. However, the basic problem of neural delays persists: if you couple yourself to an information variable, it still takes time for that information to become available to control action. Perhaps you're still 'perceiving the present', just basing your behaviour on a predicted value of the information used for prospective control. How problematic are these lags?

It's possible that they aren't actually much of a problem, thanks to an intriguing embodied solution which I came across as I work through Networks of the Brain (Sporns, 2010). Vicente et al (2008) investigated how you can establish synchronous behaviour between widely separated regions of cortex via a simple motif (Milo et al, 2002; see also Sporns, 2010). Motifs are simple patterns of connectivity between nodes of a network; for example, three nodes can be connected in 13 distinct ways, and each of these is a motif. These form a kind of alphabet from which you can build networks with particular properties. Vicente et al demonstrated that a dual dyad motif, in which two outer nodes are each reciprocally coupled to a central relay, promotes self-organisation of the system into synchronous behaviour with no lag, even when there are significant conduction delays between the nodes (see Figure 2). This solution is embodied because it doesn't require computation; the system solves the problem by being built in a particular way.

Figure 2. The α and β neurons begin out of sync but remove the phase lag within a few spikes (from Vicente et al, 2008)

Vicente et al demonstrated that this network architecture promotes this type of self-organisation both between individual neurons and between populations, over a wide range of latencies. The system doesn't entirely eliminate lag, because it still takes time for the synchronised behaviour to emerge after a perturbation; however, the time the system takes to respond is less than what you get if you simply add up the individual conduction times between the neurons. Given that this is how the delays typically cited by people like Changizi et al (2008) are computed, this paper is an important contribution towards identifying ways to compute the actual lags in the system.
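
Here's a toy version of the same effect, using delay-coupled phase oscillators rather than the Hodgkin-Huxley neurons Vicente et al actually modelled. The connectivity is the relevant bit: the two outer nodes talk only to a central relay, every link carries a 10 ms conduction delay, and all of the parameter values below are made up for the demonstration. Despite never receiving anything directly from one another, the outer nodes settle into essentially zero-lag synchrony, while the relay sits at a non-zero phase offset from both.

```python
# A minimal sketch of 'dynamical relaying' with delay-coupled Kuramoto phase
# oscillators (an assumed simplification, not Vicente et al's spiking model).
# Nodes: alpha and beta are each reciprocally coupled to a central relay gamma,
# but not to each other; every connection carries the same conduction delay.

import numpy as np

dt    = 0.001               # integration step (s)
tau   = 0.010               # conduction delay on every link (10 ms)
K     = 5.0                 # coupling strength (illustrative)
omega = 2 * np.pi * 10.0    # intrinsic frequency, ~10 Hz (illustrative)
steps = 5000                # 5 s of simulated time
lag   = int(tau / dt)       # the delay expressed in integration steps

rng = np.random.default_rng(1)
theta = np.zeros((steps, 3))                       # columns: alpha, gamma, beta
theta[:lag + 1] = rng.uniform(0, 2 * np.pi, 3)     # random (constant) initial history

# adjacency for the relay motif: alpha<->gamma and beta<->gamma only
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]])

for n in range(lag, steps - 1):
    delayed = theta[n - lag]                       # what each node 'hears' is old news
    for i in range(3):
        coupling = sum(A[i, j] * np.sin(delayed[j] - theta[n, i]) for j in range(3))
        theta[n + 1, i] = theta[n, i] + dt * (omega + K * coupling)

# wrapped phase differences over the last 100 steps
outer = np.angle(np.exp(1j * (theta[-100:, 0] - theta[-100:, 2])))   # alpha vs beta
relay = np.angle(np.exp(1j * (theta[-100:, 0] - theta[-100:, 1])))   # alpha vs gamma
print("alpha-beta  phase difference (rad):", round(float(np.abs(outer).mean()), 3))
print("alpha-gamma phase difference (rad):", round(float(np.abs(relay).mean()), 3))
```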

Reverberation vs. Serial Order
The motif solution is interesting because it changes how the problem gets posed. Clearly, in a system that actually exhibits conduction delays, you cannot instantly ping one end of a connection and get a zero-phase-lag, no-delay response at the other end. But the motif solution isn't claiming this: the fast self-organisation of synchronous behaviour emerges once the system in question is up and running. It takes a finite amount of time for information to zing around the network from the moment you 'switch it on', but once it's going, the network that has been built reverberates as a whole in response to events at any node. Typically, lag estimates are computed by simply adding up the conduction times for all the neurons involved in a linear, serial event; for responding to stimulation on your toe, for instance, you add up the conduction times for the long axons that synapse in the spinal cord and the axons that connect the spine to cortex (plus time for the slower chemical transmission at each synapse). The same goes for the supposed lag between the eye and visual cortex.

The analogy that came to mind is what happens when you switch a light bulb on in a dark room. Light leaves the bulb and streams out in all directions; it travels until it hits a surface, when it is reflected according to the relevant laws. Eventually, some of this light passes through the point in the room occupied by your eye; light at this point of observation comes from all over the room (and hence you can see all over the room) and continually flows through that point. The room is entirely filled but only after a measurable amount of time; it seems instantaneous because light travels so fast.

Once the room is filled, you can simply move through this optic array and the light you sample smoothly alters in response. Critically, light from objects at different distances behaves appropriately; even though the light had to travel different distances and therefore took different times, there is no lag - objects further away don't take longer to 'update' a given point of observation. Why? Because once the room is filled, the light coming from those surfaces has already covered the distance to the point of observation and is simply being replaced by a continuous stream of light that has done the same. The lag problem has been solved because it was solved during the time when the room was filling with light.

I think that the motif solution is a bit like this. The lags the motif is fixing exist, but only until the system is up and running. Once the system is going, information continually flows through, say, a given point of observation and the structure at that point implements the lag correction already. Perception, action and cognition are continuous and are built to take advantage of what that continuity affords.

Summary
Lags and delays in the system are typically treated as a problem to be solved computationally, via internal models which take an input at time t-delay and use that input to predict what the input will be at time t. But prediction is a risky business: if your internal model is incorrect, you will have your face eaten by that bear (or something similar). Solving these problems via internal computational models is also not a legal move for the ecological approach, so the question is: how else might the problem be solved?

In behavioural studies, we make the distinction between prediction and prospective control. The latter typically predicts a pattern such as movement reversals, because the control strategy is to couple yourself to an information variable (typically a higher-order variable that specifies the property of the world you're interested in) so that your behaviour follows the behaviour of that variable over time; a little like the Watt steam governor. People have gone looking for this signature in a variety of interception tasks (typically thought to require prediction); in general, we only see evidence for prospective control (the outfielder problem is another great example of this). There's much more to do here: the eye movement literature, for example, still tends to talk mostly about predictive mechanisms to explain successful tracking behaviour. I hope to get into this empirically over the next few years.
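
Since the outfielder problem keeps coming up, here is a small numerical check of the optical basis of one of those prospective strategies, Optical Acceleration Cancellation. The physics is drag-free and the launch parameters are invented, but the underlying geometric fact is the standard one: for a fielder standing exactly where the ball will land, the tangent of the ball's elevation angle rises at a constant rate; if the ball is going to fall in front of the fielder that quantity decelerates, and if it is going to land behind them it accelerates. A fielder who moves so as to cancel the optical acceleration therefore arrives at the right place at the right time without ever computing a landing point.

```python
# A minimal sketch (assumed, drag-free parameters) of the information used by
# Optical Acceleration Cancellation: the sign of the second time-derivative of
# tan(elevation angle) tells a stationary fielder whether the ball will land
# in front of them, on them, or behind them.

import numpy as np

g, vx, vz = 9.8, 15.0, 15.0                  # gravity and launch velocities (illustrative)
t_land = 2 * vz / g                          # time of flight
x_land = vx * t_land                         # landing point
t = np.linspace(0.0, 0.5 * t_land, 200)      # look at the early part of the flight

x_ball = vx * t
z_ball = vz * t - 0.5 * g * t ** 2

def optical_acceleration(x_fielder):
    """Second time-derivative of tan(elevation angle) seen from x_fielder."""
    tan_alpha = z_ball / (x_fielder - x_ball)
    return np.gradient(np.gradient(tan_alpha, t), t)

for label, x_f in [("ball lands in front of fielder ", x_land + 8.0),
                   ("fielder already at landing spot", x_land),
                   ("ball lands behind fielder      ", x_land - 8.0)]:
    acc = optical_acceleration(x_f)[5:-5]    # trim np.gradient edge effects
    print(f"{label}: mean optical acceleration = {acc.mean():+.4f}")
```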

Neurally, you can also address the delay issue without resorting to computations that implement prediction. Network motifs are architectural features that shorten effective path lengths and create functionally related clusters; in short, they promote 'small world' architecture (Watts & Strogatz, 1998). Those clusters, or hubs (like the γ node above), are essential for maintaining network-wide information flow; however, they work by facilitating flow in the network, not by implementing special computations:
Their influence derives from their capacity to connect across much of the brain and promote functional integration, not from special intrinsic processing power or capacity for "decision making". Hubs enable and facilitate integrative processes, but they do not represent their outcome, which instead is found in the distributed and global dynamics of the brain.
Sporns, 2010, p. 125.
As the Watt governor continues to remind us, there are often dynamical solutions to supposedly algorithmic control problems; these solutions work via specifically built devices coupled to relevant information, and they don't implement the algorithmic description. This is a different way to think about, for example, the delays and problems the nervous system actually faces, and the resources it has available to address them. These resources aren't computational: they are architectural and informational. They are, in fact, embodied.
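
To make that concrete, here is a deliberately crude sketch of a Watt-style governor in code; the arm dynamics, throttle law and constants are all invented rather than taken from any real engine, and a real governor is a second-order mechanical device rather than this first-order caricature. Still, the loop settles to a steady speed and partially absorbs a step change in load (the residual offset, or 'droop', is characteristic of proportional governors), and nothing anywhere in it represents or predicts the engine's future speed.

```python
# A toy caricature of the Watt governor as a dynamical, non-computational
# controller: speed flings the flyball arms outward, arm angle closes the
# steam valve, and regulation falls out of how the device is built.

import numpy as np

dt, T = 0.001, 30.0
steps = int(T / dt)

omega, theta  = 5.0, 0.0        # engine speed (rad/s) and governor arm angle (rad)
inertia       = 2.0             # flywheel inertia (illustrative)
arm_rate      = 4.0             # how quickly the arms track their equilibrium angle
throttle_gain = 30.0            # how strongly the arm angle closes the steam valve

speeds = np.zeros(steps)
for n in range(steps):
    load = 4.0 if n * dt < 15.0 else 6.0               # the load steps up halfway through
    theta_eq = np.arctan(0.2 * omega ** 2)             # arms fly further out as speed rises
    theta += arm_rate * (theta_eq - theta) * dt        # arm angle lags its equilibrium
    throttle = max(0.0, 10.0 - throttle_gain * theta)  # higher arms -> less steam
    omega += (throttle - load) / inertia * dt          # net torque spins the flywheel
    speeds[n] = omega

print("speed just before the load increase:", round(float(speeds[int(14.9 / dt)]), 2))
print("speed at the end of the run        :", round(float(speeds[-1]), 2))
```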

References
Changizi, M. A., Hsieh, A., Nijhawan, R., Kanai, R., & Shimojo, S. (2008). Perceiving-the-present and a systematization of illusions. Cognitive Science, 32(3), 459-503.

Montagne, G., Durey, A., Bootsma, R., & Laurent, M. (1999). Movement reversals in ball catching. Experimental Brain Research, 129(1), 87-92. DOI: 10.1007/s002210050939

Sporns, O. (2010). Networks of the Brain. Cambridge, MA: MIT Press.

Vicente, R., Gollo, L., Mirasso, C., Fischer, I., & Pipa, G. (2008). Dynamical relaying can yield zero time lag neuronal synchrony despite long conduction delays. Proceedings of the National Academy of Sciences, 105(44), 17157-17162. DOI: 10.1073/pnas.0809353105

Watts, D. J., & Strogatz, S. H. (1998). Collective dynamics of "small-world" networks. Nature, 393, 440-442.

26 comments:

  1. I'm not sure I understand the distinction between predictive control and prospective control - can you elaborate a bit more on that one?

  2. Prediction means acting according to what you think will happen in the future, based on what you see now. Prospective control means acting according to what you see now in such a way that the future state happens.

    Take the outfielder problem; trying to catch a fly ball.

    1. The prediction account says you look at the early stages of the trajectory and mentally extrapolate the path based on an internal model of the physics of projectile motion. You figure out where the ball is going to land, and start heading in that direction.

    2. The prospective control account says you continuously perceive something about the motion of the ball, and couple your movement to that motion so that, by following it, you end up in the right place at the right time. For fly balls, you can either move so as to make the ball appear to trace a linear path, rather than the parabola it actually follows (Linear Optical Trajectory), or you can move at the right speed to cancel out the acceleration of the ball in the optics (Optical Acceleration Cancellation). Successfully moving according to either of those rules happens to get you to the right place at the right time to intercept the ball; but you never 'know' anything about the future path of the ball.

    I should blog those in some detail sometime.

  3. I'm not quite clear on some of your terminology. So I'll just state what I believe is most likely happening.

    The information is not all delayed. Some is available fairly quickly. For example, if I am blindfolded but facing a tree, when the blindfold is removed my perception of the basic tree is very rapid, while depth perception is noticeably delayed.

    My assumption on the overall problem, is that we begin acting on the information that is available. As more information (often more precise and more detailed information) comes in, we use that additional information to adjust our action.

    In the case of the outfielder and the fly ball, my assumption is that the outfielder is directly perceiving where to place his hand to catch the ball. We need to remember that perceptual learning is part of direct perception (at least on my reading). And the training of the outfielder would enhance his ability to perceive where to place his hand. As the early information comes in, he picks up a ball park location (oops, confusing term when done in a ball park). Then, as additional and more precise information becomes available, he can narrow down where he should place his hand (or mitt). So he begins motion on the early approximate information, and refines his motion as the details of a more precise location become available.

  4. when the blindfold is removed my perception of the basic tree is very rapid, while depth perception is noticeably delayed
    You need to elaborate on this; it's not like everything appears on a plane and then moves into 3D position. What is true is that your depth perception will probably need to be calibrated; but that's a different story.

    In the case of the outfielder and the fly ball, my assumption is that the outfielder is directly perceiving where to place his hand to catch the ball.
    Actually, this isn't correct. Until you get to where the ball is going to land and can see it clearly, there is no information about where to place your hand for you to directly perceive.

    As the early information comes in, he picks up a ball park location (oops, confusing term when done in a ball park). Then, as additional and more precise information becomes available, he can narrow down where he should place his hand (or mitt).
    Again, not quite. The LOT/OAC literature suggests that outfielders aren't planning their routes and then refining them; those routes emerge in real time as they move according to one or other strategy. The 'future path' of the ball isn't being used.

    You're right about learning, of course. It takes time and experience to learn which optical structure will guide which behaviour.

    The essential difference is whether the outfielder is moving according to what's going on now, or what they predict will happen in the future. The evidence all suggests the former, not the latter; that's prospective control.

  5. I can see how this works in some cases, where lots of dynamic information is available, but are you suggesting that all perception-action situations that appear to be using predictive control are actually just prospective control? Seems like a tough sell to me...

  6. I'm saying that every case that's been looked at so far has turned out to be prospective rather than predictive, and that this strategy is much more robust than prediction anyway. Prospective control requires the right information, and if you don't look for the right information you might think it's not possible. For instance, in the eye tracking literature, people seem to only ever talk about prediction because they are caught up in the serial order of events that creates lags to be overcome. But if you go looking for prospection, maybe you'd find it there too.

  7. When I have sat through talks on 'prospective' vs. 'predictive' control, I have always thought that a firm distinction between the two seemed either contrived or poorly labeled, or both. Part of the difference is what you make of behavioral 'errors' (or non-ideal adjustments) and what you make of external perturbations.

    There was a BBS article back in 2004 arguing that the brain works like a 'Kalman filter': the brain gets signals, does a bunch of computations on autopilot, then compares with the conditions when the computations are over and adjusts - this is about as 'predictive control' oriented as it gets. One problem with this metaphor is that any difference between the system prediction and the final body position (relative to the world) is considered "process noise". This assumes that such discrepancies are not another form of information, and that such discrepancies would never be predictable based on a better-attuned perceptual system. Scott Jordan noted similar limitations in his comment.

    I suspect that much (but certainly not all) of the difference between 'predictive' and 'prospective' approaches is that prospective approaches assume that perfect behavior-environment matching WAS possible, based on available information, and then search for the information that would have allowed perfect behavior. In contrast, 'predictive' approaches assume that perfect behavior-environment matching is never possible, and try instead to determine how accurate a person could get based on whatever semi-accurate cues the researchers can easily come up with. Note, though, that both approaches could certainly go by the word 'predictive' in the lay usage of that term. Ick.

    Note also that a person with limited time or perceptual resources must simply start acting (as nwricket noted), using any available cues. Presumably, they will switch to perfect specifiers in the course of action... if they have the time, can move as needed to get to the information, and have developed so as to be able to extract it. This suggests a very blurry line in practice.

    Hopefully that was coherent.

    Note: The target article on Kalman Filters also neglected the continuous nature of adjustment, and neglected non-brain mechanisms of adjustment... but that is a longer discussion.

  8. I suspect that much (but certainly not all) of the difference between 'predictive' and 'prospective' approaches is that prospective approaches assume that perfect behavior-environment matching WAS possible, based on available information, and then search for the information that would have allowed perfect behavior.
    This isn't really the case. Take catching a fly ball: the logic goes 'people can catch fly balls. What structure is available in the optic array that would support this behaviour if used?'. You then go looking at the geometry and find the LOT and OAC strategies, and you then go looking to see if people use this information.

    The difference is fairly straightforward: prediction means basing current behaviour on what you think the future will look like. Prospective control is basing current behaviour on currently available information that relates meaningfully to the future. Two very different things.

  9. I think this topic of prospective vs. predictive control would make a great topic for a lengthier blog post (without using the fly ball example!). From the description you put forward in your final paragraph, one could presumably re-parse prospective control as equaling prediction that takes into account situation context/prior learned associations of how something looks and how it will act (i.e. something metal will be heavy).

  10. Why not the fly ball example? It's one of the best and clearest.

  11. I agree with Gavin that a longer post on this would be useful, and that we need non-fly ball examples (not that the fly ball should be excluded, but that many examples are good). One problem is that one can screw up in the end using either predictive or prospective measures (as unaccounted for variables can exist in either case). Another problem is that one could describe action based on any variables (specifiers or cues) by saying that you "base current behavior on what you think the future will look like." Prospective control doesn't rule out thinking, it just shows that thinking is not necessary... and if acting is a form of thinking, then that whole way of distinguishing falls apart. That is, "to back up when the optic speed accelerates" IS "to think the ball will land behind you" or better yet "to know the ball will land behind you"(if current conditions remain stable).

    Don't get me wrong, I definitely agree that people are doing one thing more often than the other thing. I am merely suspicious about the labels, and suspicious that the distinction will stay so clean outside of ideal cases.

    It is entirely reasonable and logical that the examples most commonly investigated are relatively ideal cases (i.e. fly ball catching is so well known, because the dichotomy seems to make good sense in that context).

  12. My request for a non-fly-ball example was driven only by the fact that I don't know much about it from a science or real-life point of view. But it's also helpful for me to understand something through similarities across situations - constantly repeating the fly ball example just makes the prospective/predictive distinction seem rather unimportant and esoteric.

  13. Maybe I'll start with the fly ball stuff, just because it's been studied in detail. But Bill Warren has things on locomotion and steering that fall within the same framework.

    I guess I know what I'm doing over the next few weeks! :)

  14. I can think of a better example than the fly ball, if you'll permit me to use a mechanical analogy (and another explosive-filled one at that). Suppose you have an anti-aircraft missile launcher. You have two ways to try & hit the plane with a missile:

    The 'predictive' method: you have a computer that estimates the aircraft's speed & position, and launches the missile straight toward the point where the airplane will be in 30 seconds, assuming the airplane flies in a straight line. (Maybe you can even re-compute & adjust trajectory every so often ... would that still count as predictive? See my question below.) Your missile basically flies in a straight line.

    The 'prospective' method: you use the computer to make the missile point at the airplane at all times. The missile launches straight up at the airplane's location at the time of launch, but it soon arches over & approaches the airplane from the rear.

    The 'prospective' method is more robust, because you are always pointing straight for the airplane, so you are never on the 'wrong' path*, pointed at some place that the airplane might never be. It's simpler computationally than predicting the airplane's future position. You don't have to make any 'forward model' about how the airplane will behave. It's also more agnostic about the airplane's behavior because of this: if the airplane 'jinks' the missile will still track it.

    *(The prospective method will take longer to hit the airplane. The missile travels along a curve instead of going in a straight line. This is because we've optimized for robustness, not efficiency. This actually suggests a minimization principle -- not sure what exactly to minimize yet.)

    So Andrew, what I'd like to hear from you is whether Gibsonian EP uses 'predictive' in the sense of having updatable predictions, or only a single initial prediction that cannot be revised. You seem to me to be assuming the latter, which would be a rather unfair comparison since the 'prospective' method is allowed to update its information continuously. It would only make sense to update one's predictive model too.

    And in the end, if I understand correctly, the question boils down to this: which method, or combination of methods, do humans & animals use when confronted with a new task, and when confronted with a task they've practiced? In other words, as we learn, do we form models of the world, or do we form algorithms that act on information?

  15. BTW, I think Bayesian statistics probably play an important role in judgement, but I also think that people have wildly misinformed priors, which results in lots of goofy behavior, accidents, etc. I also suspect that people calculate expectation values (ie, weighting probabilities of outcomes by the desirability of the outcome) incorrectly & that explains poor judgment & risk-taking behavior.

  16. I like the missile example, it's on the right track (no pun intended, honest :). Some devilish details:

    Gibson doesn't rely on predictions; but of course you can update your predictions. The question is always then 'on what basis?' and 'how quickly can you update'?

    Crudely, you could think of prospective control as a model that is updated at every possible instant; however, if you have a system geared to an environmental variable, you are no longer predicting anything and there's nothing being updated except your behaviour. The model is an unnecessary extra step; the updating is reflected in behaviour immediately. So to a Gibsonian, the real question about models is 'why bother?'

    I like Bayesian methods for analysis, but they are not a mechanism for cognitive activity; it's the same old model and representation idea, just with fancier statistics. There's nothing new there.

    We don't form models. To the extent we implement algorithms, we aren't doing the computational version, we are acting in an embodied manner that can be described by the algorithm, if that makes sense. It's always vitally important to keep the thing and the description of the thing separate!

  17. I love puns! (I once drew a diagram of a fish for an optics problem I was teaching (very similar to the 'size of the image in the mirror' issue), but I made the fish too big compared to the fishbowl. So I told my students, 'well, I guess it's not to *scale*.' Took them a few seconds to catch on.)

    I suppose that a 'prospective' method is just the motor-control implementation of a 'heuristic' problem-solving approach. There seems to be some research on heuristic problem solving (ie, there's a wikipedia page, hehe).

    You could update predictions on the same information that is available to your prospective method. You can update it as quickly as your brain will allow. I think that would only be fair. The point is to let the merits of the procedures show themselves on the actual problems that they are supposed to cope with, not on a constrained situation. Give them both the same resources & see which one is the fittest! Or more tellingly, predict observable differences in behavior & go look at which one is closer to real data. (Sounds like that's been done for LOT & OAC methods for the outfielder problem. That's good science.)

    Bayes: perhaps. Maybe what I should say instead is that we obviously have some way of (implicitly) judging the maximally likely value of something (say an affordance) based on incompletely-specifying information & things we learned from previous experience. (I guess the real question is what form the previous learning takes.) This kind of exercise could be very useful in workplace safety. For instance, an electrician at our school was recently electrocuted by working on a live circuit which he could have easily turned off. The trouble is, electrical hazards don't look threatening, & we judge things primarily by how threatening they look, even if we 'know' (model) how threatening they are.

    Sure. What I meant by an algorithm was something distinct from a 'forward model,' because it is action-oriented, based on simple rules, and it is guaranteed to come up with an (approximate) answer to a specified degree of precision after a finite amount of time. I'm trying to translate between computer science/physics jargon & cognitive psychology terminology.

    Replies
    1. I can't help but think that even in a predictive missile with continuous updates you would get a different trajectory from that of a prospective missile. The central difference, when I try visualising it, is that a predictive missile would calculate the geographical point of impact and update this point continuously (it would probably result, ultimately, in a fairly straight line, maybe with a small curvature depending on the initial miscalculation). A prospective missile would then have a deep curvature, since it is adjusting, in real time, its position relative to the airplane. Here, importantly, the point of impact can't really be said to be the "goal" - it is not necessarily based on a computation - more so, it is dynamically updating its goal through continuous and environmentally determined influences.

      The contrast would then be: the predictive missile more so relies on internal computation (a traditional representationalist perspective), whereas the prospective missile relies more so on environmental demands, with no necessary need to compute. What we more so rely on becomes the heart of the issue here, I believe.

      I say 'more so' because I am not saying there is nothing in the brain; I am just saying that we need to consider that not _all of it_ is in the brain - or perhaps, necessarily, in the brain - and if we can go without the assumption of representations, it makes for a more scientifically sound explanation.

    2. This sounds about right. In fact, one of the key predictions of predictive vs prospective control is curved movement paths from the latter (up to and including movement reversals). These curved paths are commonly seen in interception tasks, hence we say the control is prospective. The earliest reference I've seen on this is

      Montagne, G., Laurent, M., Durey, A., & Bootsma, R. (1999). Movement reversals in ball catching. Exp. Brain Res., 129, 87-92.

  18. Bayes: perhaps. Maybe what I should say instead is that we obviously have some way of (implicitly) judging the maximally likely value of something (say an affordance) based on incompletely-specifying information & things we learned from previous experience.
    Maybe. You can act like you're judging maximal likelihood while not actually doing that. I don't know of anyone explicitly pitting Bayesian hypotheses against ecological ones. Maybe that's another reason I need to brush up on my Bayes' Theorem.

  19. I still have no clue about what the real difference is between predictive and prospective. In the case of the fly ball or the missile, both tasks would be impossible without integrating information over some time interval. Reacting to the momentary position with no other information makes no sense. On the other hand, it would be absurd to think that a system would predict the future, begin acting, and then ignore all intervening information before the action completes. In the case of the missile, it seems in the first case the missile moves in a straight line and then in the second case we have a missile that can fly around and catch the plane. How about the case of a spear thrower tracking an animal? Honestly, considering only plausible solutions, it's obvious that 1) prediction without update is absurd and 2) using only instantaneous information without integrating past information is absurd.

    Replies
    1. both tasks would be impossible without integrating information over some time interval
      True, but in the prospective case, that integrated information isn't stored anywhere. The integration is implemented as movement, and where you are now reflects where you were then only because you got here from there and not somewhere else. So you aren't integrating information and using that integrated signal at time t; but you are effectively integrating information by virtue of the way you move.

      1) prediction without update is absurd
      Actually it's 'ballistic' and, while it's not the most stable solution to problems, it has its place. That said, there is evidence that not even very fast actions are truly ballistic.

      and 2) using only instantaneous information without integrating past information is absurd.
      The issue is how the integration is implemented. Just because one way to describe what happens is to say that a signal has been integrated over time and is now being used doesn't mean there's actually an integration process; perhaps it's a side effect of, for instance, the way you move. This is the issue of mistaking your third-person analysis for what the first-person solution actually is; Louise Barrett talks about this in the context of anthropomorphizing your organism.

    2. What Andrew said.

      Integration is absolutely not required, if by "integration" you mean that the runner's brain has to do what a mathematician's brain does when solving for the integral of a function, or what a computer does when it estimates the integral with a curve-fitting function.

      Of course, close-to-ideal behavior requires that the organism conform to the constraints one could predict by determining the integral. But there are many ways to do that, and it turns out, when you investigate, that there are even multiple ways of doing that which in no way require determining the integral.

  20. Matt, as I said, the traditionally used labels are a bit off. If you want to simplify the discussion as much as possible: the Ecological argument is, in the first instance, about structured ambient energy. If the analysis of that structure is correct, then you can do an awful lot of complex motor coordination with just perception. And by "just", I mean without anything invoked by the modern use of the term "cognitive".

    So, if by "prediction" you mean "moving in a particular way", and if by "integrating past information" you mean "taking into account current bodily trajectory" then your two points are spot on. Yes, we move in a particular way, while adjusting based on the feedback our movement produces; and yes, we take into account the way we are currently moving when we make use of new (supposedly) instantaneous information. All that takes place within a "prospective" perception-action model.

    On the other hand, if you meant something else by "predictive" and "integrating", then that just emphasizes that there is something at stake in the distinction.

  21. It seems that the distinction is something like this:

    representational: perception -> the brain modifies its world model -> outputs instructions to the body

    embodied: perception -> the brain's momentary state changes "automatically" -> instructions to the body are "automatically" output

    Does that capture the distinction (or partially)?

    Replies
    1. The main distinction in this task is actually that for representations, you move as a function of your model. Informationally, you move as a function of the current information and its relation to what you're trying to make it look like.
