Karl Friston has spent a lot of time recently developing the free energy principle (FEP) as a framework for explaining life, behaviour and cognition; you know, biology. It's become the cool kid on the block in fairly record time.
Crudely, the basic idea of the FEP is that living organisms need to operate within a range for a given process, or else they will be malfunctioning to some extent and might suffer injury or death. Being within the relevant range across all your processes means you are alive and doing well, and so for an organism that has made it this far in evolution those states must be highly probable. Being outside those ranges is therefore less probable, and so if you find yourself outside a range you will be surprised. Your job as a self-sustaining organism can therefore be described as 'work to minimise surprise'.
There is a problem with this formalisation, though. The information-theoretic term that formalises 'surprise' (surprisal) is not a quantity any organism can access, so you can't work to control it directly. Luckily, there is another formal quantity, free energy, that is related to surprise and is never smaller than it. Free energy is therefore an upper bound on surprise, and minimising that upper bound keeps surprise down as well.
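To make the bound explicit (writing s for sensory states, ψ for their hidden causes, and q for the organism's variational density; the notation here follows the standard presentations rather than any one paper):

$$
F \;=\; \mathbb{E}_q\big[\ln q(\psi) - \ln p(s,\psi)\big] \;=\; \underbrace{-\ln p(s)}_{\text{surprisal}} \;+\; D_{\mathrm{KL}}\!\big(q(\psi)\,\big\|\,p(\psi \mid s)\big) \;\geq\; -\ln p(s)
$$

Because the KL term can never be negative, free energy can never drop below surprisal, which is why minimising the former keeps a lid on the latter.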
All this is currently implemented in an inferential, Bayesian framework that aligns, at least on the surface, with modern representational cognitive science. Andy Clark thinks this is the future, and Jakob Hohwy has worked hard to nail this connection down so it won't move. If this is all right, and if the FEP succeeds, perhaps non-representational, non-inferential accounts like ours are going to lose.
A recent paper (Bruineberg, Kiverstein & Rietveld, 2016) tries to wedge the FEP and Bayesian psychology apart to allow room for an ecological/enactivist take on the FEP. To be honest, I found the paper a little underwhelming, but it did get me thinking about things, and two questions have emerged.
Before we worry about an ecological account of the FEP, we need to know 1) whether such a thing makes any sense and 2) whether it adds anything new to the proceedings. All comments welcome - these are genuine questions and if there are answers we would love to hear them.
Does the FEP Make Sense?
The argument is that free energy is detectable by the organism, which makes it (in principle) something the organism can control, and that doing so leads to life as we know it.
Ecologically, this means that whatever the term 'free energy' is pointing to, it needs to be a real thing that can create perceptual information. There must be real dynamical parts and processes interacting with energy arrays to create specifying kinematic patterns that an organism can detect and use to perceive the thing described by free energy. There must, at some point, be a real thing that some light can bloody bounce off (or sound waves, or whatever). What does the free energy principle think that thing is? Without this, the FEP is of no use to an organism, at least as far as cognition and behaviour go.
Does the FEP Add Anything New?
I've been working up a post drafting a free energy description of the process of ecological perceptual learning, and while it seems to work I have yet to learn anything I didn't already know. Right now, the FEP seems like a useful descriptive framework, but not an explanatory one. So what does the FEP explain that we didn't already understand?
These questions pop out of the mechanistic mindset we find ourselves in these days. We're working on showing how our flashy theory is implemented using actual bits and pieces of stuff, and we want to know how your flashy theory does it. What are the real parts and processes in the FEP, and what do they explain rather than describe?
Hi Andrew,
Thanks for the blog. Your questions are, I think, a starting point for potentially a longer conversation. I'll just respond to your main two questions:
Does the FEP Make Sense?
Let me be blunt. Light does not bounce off of free-energy, and if you think it needs to in order for it to make sense, then this is not your kind of principle and we are not your kind of ecological psychologists (related to your earlier blog posts on affordances). Formally: free-energy is defined in terms of the KL-divergence between a variational density and a generative model mapping from causes to sensory consequences. You can rewrite this mathematically in a number of ways that make the different aspects of the framework clear (see for instance the Wikipedia article on the free-energy principle). In short: in varying the variational density, free-energy varies as well, and the minimum of free-energy corresponds to the cause that best accommodates the sensory information given my model. In acting, I change the environment, which in turn changes the sensory input, which in turn changes the free-energy. Most commonly, this is implemented in terms of recurrent neural networks and provides a computational gloss on the intrinsic recurrent dynamics of the brain and how that relates to its niche.
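As a rough sketch of the rewrites being alluded to (my notation; q is the variational density over causes ψ, s the sensory states; see the tutorial linked below for the careful version):

$$
F \;=\; D_{\mathrm{KL}}\!\big(q(\psi)\,\big\|\,p(\psi \mid s)\big) \;-\; \ln p(s)
\;=\; D_{\mathrm{KL}}\!\big(q(\psi)\,\big\|\,p(\psi)\big) \;-\; \mathbb{E}_q\big[\ln p(s \mid \psi)\big]
$$

The first form shows that minimising free-energy both improves the approximate posterior and bounds surprisal; the second shows the trade-off between complexity and accuracy.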
If you want to get into this:
Here is a great tutorial paper (not by Friston) for getting started on the maths:
http://www.sciencedirect.com/science/article/pii/S0022249615000759
Now, my 2014 Frontiers paper and the 2016 Synthese papers are full of arguments for why the FEP can and needs to be understood in our particular ecological-enactive way. I don't think it makes sense to reiterate them here at the moment. I can recommend the following two papers to get started on the (radical) embodied version:
Embodied inference:
http://www.fil.ion.ucl.ac.uk/~karl/Embodied%20Inference.pdf
Life as we know it:
http://rsif.royalsocietypublishing.org/content/10/86/20130475
Does the FEP Add Anything New?
Unless you tell me right now how selective openness to affordances works mechanistically, I, for one, still have some questions left. Part of the appeal of the FEP is unification within the neurosciences, but it also offers a systematic story about how the internal dynamics of the brain are adaptively coupled to the dynamics of the body and the environment. On a side note: the FEP is fully mechanistic.
On a more critical note: I haven't ever heard an ecological psychologist say something interesting about the brain and what it might be good for (to be fair: neither have I heard much good from neuroscientists about structures in the environment). Your insistence on light bouncing off of stuff is not helping. There is some story to be told about how animals are using that light in adaptively coordinating with their environment (without presupposing goals that come out of nowhere). I might very well be wrong about eco-enactive FEP, but I strongly believe these are questions that need to be raised.
Jelle:
I think you've missed the point about 'light bouncing off stuff'. The idea is this: to be ecologically interesting a thing must be perceivable. To be perceivable, it must create information. And to create information, at some point the dynamical thing to be perceived must interact with an energy medium (e.g. light) to create a kinematic pattern in that energy medium and make it an array (optic, acoustic, haptic, etc).
If free energy does not describe a thing that can create information, then (ecologically at least) it cannot be perceived and it cannot be a thing an organism actively works to do anything to. It is, at best, a functional description of a process. What I want to know is how something actually works, and functional descriptions cannot deliver this.
So, relatedly - if free energy cannot be perceived because it creates no information, it cannot feature in a mechanistic account of behaviour. What are the real parts and processes in the FEP model? What actual things out there are the various terms in the maths representing?
Something creates information. This information 'travels' in the energy medium. This information then has to be 'picked up'. So far so good. However, something has to be doing the picking up, right? And this something cannot in turn be described in terms of information creation and pickup (lest we have a circular or, worse, homuncular explanation).
When Jelle says that he has never heard an ecological psychologist say anything interesting about the brain, this seems to me to be a question regarding the 'pick-up' (better: selective openness, for I do not start throwing all objects around me, although most of them certainly are throwable) of information/affordances, that is, a question not answerable in terms of information creation and pickup.
As far as I understand it the FEP is going to be a framework within which one can explain selective openness, not itself an instance of selective openness (i.e., things are not free-energy-able).
Free energy is supposedly available to the organism. This is the advantage it has over surprisal per se. So how is it available? What information does the thing described by free energy create? What is that thing?
Andrew, I think it makes sense to take a look at the Bogacz paper, section 2.3. There you find an implementation of free-energy minimization in terms of recurrent neural networks. FEP provides functional accounts with the aim of complementing them with mechanistic accounts. Mainly in neural networks, but conceptually perfectly extendable to the body and to the environment.
I understand your insistence on really understanding how stuff works (I think I share that too), but insistence on information is not helping in this particular context. If an ecological psychologist studies attunement (I think free-energy is a measure of (dis)attunement), then insisting that light needs to bounce off of disattunement in order for it to fit into your account just seems misguided.
For me the central question is selective sensitivity to affordances. You might insist on a dispositional account, but you still need to account for context-sensitive dispositions. Dispositions to me seem exactly the kind of functional descriptions that you would want to give mechanistic accounts of. Light is not bouncing off of dispositions: accounting for dispositions requires an understanding of the internal dynamics of the animal. I think this is Jasper's point as well, is it not, Jasper?
Many thoughts; first
Dispositions to me seem exactly the kind of functional descriptions that you would want to give mechanistic accounts of. Light is not bouncing off of dispositions: accounting for dispositions requires an understanding of the internal dynamics of the animal.
Actually, I'm working through 'Affordances Explained' which cashes out the dispositional argument in excellent detail. I have a blog post coming soon but I've literally just read a bit that might be useful here.
A full accounting of how a disposition comes to manifest does indeed require a specification of the complementary properties of the organism. But the account of the disposition itself (here, the affordance) resides entirely within the object X.
Dispositional properties of objects are constituted by their anchoring properties. The dispositional affordance 'graspability' is constituted by having surfaces placed in a certain relation to each other (size, distance, orientation, etc; note these relations are not the affordance, which is the property constituted by them). Here's the thing - light can bounce off surfaces.
So affordances cashed out as dispositions can absolutely tell a story that allows real parts and processes to interact with energy media to create information, making them, in principle, perceivable.
I understand your insistence on really understanding how stuff works (I think I share that too), but insistence on information is not helping in this particular context. If an ecological psychologist studies attunement (I think free-energy is a measure of (dis)attunement), then insisting that light needs to bounce off of disattunement in order for it to fit into your account just seems misguided.
See, this makes no sense to me, and it's a feature of the work you, Erik and Julian do that utterly baffles me. You cannot be an ecological psychologist without insisting on a central role for information. We don't study 'attunement'; that's a metaphor. We study 'perception' and perception demands information.
The upshot remains: if you want '(dis)attunement' to be a real thing organisms perceive and control, you need to be able to tell an information story, at least in principle. So what is attunement made of?
There you find an implementation of free-energy minimization in terms of recurrent neural networks.
This cannot simply happen unless there is information somewhere along the causal chain of events that can drive the network to minimise FE rather than something else. So it can't simply be made of neurons.
FEP provides functional accounts with the aim of complementing them with mechanistic accounts. Mainly in neural networks, but conceptually perfectly extendable to the body and to the environment.
So I am right; it is mostly a functional account right now. This is ok, I'm just getting the lay of the land.
So you're suggesting there is work proposing neural mechanisms. To reframe my earlier point, this just bumps the problem of 'where is this all coming from' out one level. Why is the neural network acting to minimise FE?
This all needs a theory of perception.
Also I'm finding this all extremely interesting and useful, so thanks for pushing back :)
The FEP doesn't require a neural network to be 'pushed' into minimising free energy; the minimisation occurs naturally from the structure of the network.
In the context of neural networks, free energy is simply the activity of "error neurons" (under Gaussian assumptions), so other nodes in the network only need to change so as to reduce the activity of error neurons. They aren't 'pushed' to do this, it just happens naturally if you arrange a network in a particular way. Take a look at the example given in Bogacz - he provides source code you can play with, and it confirms (as many other papers do) that simple networks of a certain structure can perform Bayesian inference with only local information.
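To give a flavour of what that looks like, here is a minimal sketch in the spirit of the toy model Bogacz works through (a single cause v with a Gaussian prior generating a sensory sample u via g(v) = v squared plus noise); the variable names and parameter values below are my paraphrase, not his code:

```python
# A minimal predictive-coding sketch in the spirit of Bogacz's tutorial
# example (my paraphrase, not a port of his code).
# Generative model: cause v ~ N(v_p, Sigma_p); sensory input u ~ N(v**2, Sigma_u).

v_p, Sigma_p = 3.0, 1.0   # prior mean and variance on the cause
Sigma_u = 1.0             # sensory noise variance
u = 2.0                   # observed sensory input

phi = v_p                 # "value" node: current estimate of the cause
eps_p, eps_u = 0.0, 0.0   # "error" nodes: prior and sensory prediction errors

dt, steps = 0.01, 2000
for _ in range(steps):
    # each node updates using only signals from its neighbours
    d_eps_p = phi - v_p - Sigma_p * eps_p    # prior prediction-error dynamics
    d_eps_u = u - phi**2 - Sigma_u * eps_u   # sensory prediction-error dynamics
    d_phi = -eps_p + eps_u * 2 * phi         # 2*phi is g'(phi) for g(v) = v**2
    eps_p += dt * d_eps_p
    eps_u += dt * d_eps_u
    phi += dt * d_phi

print(f"inferred cause: {phi:.3f}")  # settles near the MAP estimate (roughly 1.6)
```

Each update only uses quantities available at a node's neighbours, which is the 'only local information' point above; the error nodes settle at the precision-weighted prediction errors and the value node settles where they balance.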
With this established, the far more interesting question is: why would biological systems have this 'error reducing' structure? One approach to answering this is to refer to empirical evidence. Another approach is to just answer "because it works, and evolution selects schemes that work". However, there is also an argument from first principles, which is roughly:
1) biological systems maintain certain variables within bounds by virtue of their existence (e.g. body temp around 37 degrees)
2) they therefore frequent a limited area in state space (e.g. don't visit body temp 40 degrees - if they did they would not exist)
3) the only information systems have about their position in state space comes from sensory information
4) they therefore sample a limited set of sensory states (a fish sensing water)
5) this is equivalent to reducing the amount of time spent sampling unexpected sensory data (conditioned on the system in question)
The question of how biological systems maintain certain states can therefore be straightforwardly rewritten as "how do biological systems reduce the amount of time sampling unexpected sensory data?". It is infeasible that agents quantify this directly. However, given a generative model of their typical environment and the ability to change sensory data through action, minimising the activity of error neurons implicitly minimises the unexpectedness of sensory data.
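One way to make step (5) precise, under the ergodicity assumption this literature typically invokes (notation mine): the long-run time average of surprisal equals the entropy of the system's sensory states, so spending little time sampling unexpected sensory data and keeping that entropy low amount to the same thing.

$$
\lim_{T \to \infty} \frac{1}{T}\int_0^T -\ln p\big(s(t) \mid m\big)\,dt \;=\; H\big[p(s \mid m)\big]
$$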
So, suppose the generative model of a biological system encodes the expectation that when fire is sensed on the skin, proprioceptive sensory signals indicating 'muscle retraction' will also be sampled (the parameters of this model are encoded by neurons and their respective connections). Now suppose sensory signals indicating fire on the skin are sampled: the model's expectations will not be met (as the model expects to be sampling 'muscle retraction' sensory signals) and therefore error neurons will increase in activation (i.e. free energy increases). The activity of these error neurons will only be suppressed when muscle retraction sensory signals are sampled, and as the 'attractor' of the system is the point where the activity of error neurons is suppressed, this is where the model will be led.
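For what it's worth, that error-quelling dynamic can be caricatured in a few lines; this is only a toy illustration of the story above (all names and gains are mine), not the actual active-inference machinery:

```python
# Toy caricature of the reflex example above (my construction, not
# Friston's equations): the generative model predicts that sensing fire
# goes with a retracted limb; an "error neuron" carries the mismatch, and
# action changes the proprioceptive input until that activity is quelled.

dt, steps = 0.01, 500
fire = 1.0    # nociceptive input: fire sensed on the skin
prop = 0.0    # proprioceptive input: 0 = extended, 1 = fully retracted
gain = 2.0    # how strongly action responds to the error neuron

for _ in range(steps):
    predicted_prop = fire            # model: fire -> expect retraction signals
    error = predicted_prop - prop    # error-neuron activity (free energy proxy)
    prop += dt * gain * error        # acting changes the world, hence the input

print(f"remaining error-neuron activity: {error:.4f}")  # ~0 once retraction is sampled
```

The point of the sketch is just that nothing 'pushes' the system toward the attractor; the error activity itself drives the action that removes it.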
A full accounting of how a disposition comes to manifest does indeed require a specification of the complementary properties of the organism.
Yes, that's right. If you want an affordance story about agency, you need to have a story about selective sensitivity to affordances, or, in your terminology: selectively manifesting some dispositions rather than others. Say that when I am hungry I behave differently than when I am not. There would be no difference in the light bouncing off of stuff, since my kitchen looks exactly the same. Hence there would be no difference in the information, right? Then what explains this difference in behavior?
Our direction here is that the affordances in my kitchen are there, but the solicitations are not. What solicits me is not so much an environmental story (although what can solicit me might be), but rather a story about how the individual is situated in its environment. Crucial questions that come up here are:
- Can we give a story here that does not presuppose goals and intentions that come out of nowhere?
- How does the selective openness story deal with the relevance problem?
- If not goals, then what drives selective openness, or that which is "aimed at"? Proposals that have come up in the FEP literature are homeostasis and inference of hidden states, and we propose tending towards grip. I can talk a lot more about why this is the best account, but first you need to see the problem.
One thing that strikes me is that in the ecological literature, often information is not specifying in the moment, but only when I move around does information become 1:1 specifying. Now, what drives my moving around? And why in this way rather than another? How does the ecological psychologist explain that? The epistemic part of the FEP story would fit well with requiring actions to disambiguate between affordances, or more properly, moving around so as to perceive the 1:1 specified affordances.
Oh, sorry, I do not seem to be able to do the nice italics to quote your parts, but the first sentence is yours.
Also, I think pretty much the same was said in a Twitter discussion you had with Luke Miller.
Well, I’ll be short about information. I am writing a paper with Tony and Erik on the topic and I should be finishing that rather than replying to your blog ;) You heard the basic story at EWEP and you did not seem to like it very much.
Oh, and yes, I also find this an interesting and useful discussion :)
Our direction here is that the affordances in my kitchen are there, but the solicitations are not. What solicits me is not so much an environmental story (although what can solicit me might be), but rather a story about how the individual is situated in its environment.
We agree then. Affordances are there regardless of whether there's an organism that can effect them, and they create information that specifies them, which is either detected or not. Then the manifestation of the affordance disposition (in my terms) is the act of perceiving and acting, which is indeed a relational thing.
Right now it seems like your account is not keeping all these things separated. Affordances are dispositional properties. My effectivities are complementary dispositional properties. The act of perception and action (using information about the affordance to drive the manifestation of the disposition with my effectivities) is a relational one. So I think you guys just keep calling the wrong part of this 'affordances' (and I think that matters).
So
Say that when I am hungry I behave differently than when I am not. There would be no difference in the light bouncing off of stuff, since my kitchen looks exactly the same. Hence there would be no difference in the information, right? Then what explains this difference in behavior?
As with you, the difference is not in the affordances being offered or the information for these, but in which of them I selectively attend to. So you're right to go looking for ways to drive this selection process! So far, though, 'grip' and the various FEP ideas remain too vague, too functional-level (and not yet obviously perceivable).
So, how do I handle selectivity? I don't fully, yet. My current sketch is that this all happens over time: over developmental time I've learned to be sensitive to some information and not other information, and in the presence of that information to do certain things, supported by the fact that the information specifies an affordance that supports that behaviour. Then, as a task unfolds in real time, some but not other information is available, I'm sensitive to some of it and not the rest, and I assemble myself into a device to complement the perceived affordance.
Part of the 'selection' is therefore done by the task space (there are only some affordances and information variables at any given time), and some is done by my learning history. Then we do need a third component in which internal states like 'being hungry' have consequences that make us selectively more sensitive to food-relevant information. That's a harder story right now. I'll think some more, though, to see if I can firm this up.
One thing that strikes me is that in the ecological literature, often information is not specifying in the moment, but only when I move around does information become 1:1 specifying. Now, what drives my moving around? And why in this way rather than another? How does the ecological psychologist explain that? The epistemic part of the FEP story would fit well with requiring actions to disambiguate between affordances, or more properly, moving around so as to perceive the 1:1 specified affordances.
Learning is going to be the key here again. (I put my hands up at this point to admit we have done poorly by learning.)
Infants start out with a baseline a little above 0, i.e. they move about a bit and they prefer any structure to randomness. It's not structured movement, but it's motion. That's the foot in the door to revealing simple invariants, which then allow a level of control that reveals the next layer of invariants, and so on. This takes bloody ages in humans, by the way.
So over time you access structural features of the optic array and use them to try and control actions, which either works or it doesn't depending on whether the information is about a dynamic world property that can support that action. You accumulate the things that work, which enables better control, and iterate.
You also get better at learning; adults can explore and differentiate new information variables better than children.
So with a couple of trivial biases (some innate motion and a preference for structure over randomness) and enough time (say, 15-20 years) you get a nearly fully trained up adult human.
This story obviously needs more specifics too! But that's the shape of it to me right now.