Notes from Two Scientific Psychologists
A brave attempt to think out loud about theories of psychology until we get some
By Andrew

Do Affordances Select or Control Actions? (2023-11-14)
<p><span style="font-family: arial;">I've been working for a while to connect the perception of affordances to motor abundance methods such as the uncontrolled manifold. <a href="https://cognitioninaction.files.wordpress.com/2018/11/wilson-zhu-bingham-the-ecological-approach-to-throwing-for-distance-and-accuracy.pdf" target="_blank">I proposed the idea in this book chapter</a>, and then I have <a href="https://osf.io/preprints/psyarxiv/u4j2g/" target="_blank">a paper under review</a> that is my first swing at making this work; the details of the hypothesis are in those, and I recommend you read them. The paper in particular has my most recent thinking spelled out as clearly as I can in the Introduction and Discussion sections. </span></p><p><span style="font-family: arial;">As I've been chatting to people about progressing this programme, one thing that keeps coming up is 'why has no-one thought to do this before?'. This link simply seems so obvious to me, and to the people I'm talking to, but no-one has done it. I think I may have figured out why, though, and I thought I'd document the reason because I'm a sucker for these kinds of mysteries, especially when I think I've solved them. </span></p><p><span style="font-family: arial;">I think the reason no-one has thought to connect affordances to UCM is that (almost) no ecological psychologist has been studying affordances as a solution to the control problem! Affordances are studied as solutions to the action selection problem only, and this seems to come from Warren. 
I'll link to some papers and lay out some thoughts for the record here. </span></p><span><span style="font-family: arial;"><a name='more'></a></span></span><p><span style="font-family: arial;">I have always thought that <a href="https://psychsciencenotes.blogspot.com/2016/03/affordance-based-control-fajen-2005-2007.html" target="_blank">Fajen's affordance-based control framework</a> had obvious connections to what I was doing with affordances and UCM. I was re-reading his 2007 paper the other day and came across an interesting section; <i>Information-based control and affordances</i>, pg 390. I'll quote it in full, because it stuck out in a way it never had before, now that this question is on my mind.</span></p><p></p><blockquote><i><span style="font-family: arial;">The incompatibility between information-based control and the theory of affordances is reflected in Warren’s (1988) framework for visual control, in which the affordance problem and the control problem were described as separate components. Warren proposed that actors first choose an affordance to be realized, which establishes a corresponding mode of action. Once the affordance is chosen, the task-relevant parameters on the action side are regulated by optical information according to some law of control. Such laws of control take the specific form that is found in information-based models described earlier; that is, they describe how the task can be performed by moving so as to produce a particular pattern of optic flow. Thus, the control problem (i.e., how actions are guided on the basis of visual information according to some law of control), is thought to follow and be independent of the affordance problem. 
In other words, within this framework, the perception of affordances plays a role, but not in the visual guidance of movement.</span></i></blockquote><p></p><p><span style="font-family: arial;">He then notes that this has been noticed before, and cites Stoffregen (2000):</span></p><p></p><blockquote><i><span style="font-family: arial;">This separation of the control problem and affordance problem is reinforced by the fact that existing empirical research on affordances emphasizes the role of affordance perception in selecting between modes of action. Stoffregen (2000) pointed out that this has led “to the impression that the primary behavioral utility of affordances is for the selection of behaviors rather than the continuous control of action” (p. 23). He leaves open the possibility that affordances are perceived and exploited in the continuous control of action but acknowledges that the theory of affordances must be developed in a way that the concept can be applied to research on the continuous control of action. This is exactly what the theory of affordance-based control, which is introduced in the following section, is intended to do.</span></i></blockquote><p><span style="font-family: arial;">So it seems that, historically, affordances have been treated as solutions to the action selection problem (modes of action, in Warren's terms), but not as solutions to the action control problem, at least until Fajen - and this is why affordance-based control was such a novel move. </span></p><p><span style="font-family: arial;">A few things that this link has clarified for me:</span></p><p></p><ol style="text-align: left;"><li><span style="font-family: arial;">Affordances really have (mostly) just been studied in terms of action selection! I actually said this in the throwing/UCM paper, so I apparently knew this, but I don't think I had realised that this was on purpose, rather than just a reflection of the current state of the art. 
It's relatively easy to do affordance-for-selection studies; do people try to pass through that aperture or not, for example. But it seems that baked into the Warren work that guides a lot of this is a commitment that affordances are only for action (mode) selection, and not for control, so people do these studies because it's the way to do them. This paper (<a href="https://avant.edu.pl/wp-content/uploads/ABFZJSGP-On-possibilities-for-action.pdf" target="_blank">Barsingerhorn et al, 2012</a>) also discusses this fact.</span></li><ol><li><span style="font-family: arial;">In fact, I saw Bill at Progress in Motor Control in Rome recently and he explicitly told me he still thinks this - I didn't quite process it at the time but now it makes sense. So he is still committed to this, and it explains why he thinks Brett is <i>really </i>wrong about affordance-based control. </span></li><li><span style="font-family: arial;">Bill also mentioned he thinks Chemero is right about affordances being relations, so apparently that argument is in the mix too. This makes some sense to me: my connecting affordances to UCM was a natural outcome from my throwing work (specifically <a href="https://cognitioninaction.files.wordpress.com/2016/07/wilson-et-al-in-press-using-task-dynamics-to-quantify-the-affordances-of-throwing-for-long-distance-and-accuracy.pdf" target="_blank">Wilson et al, 2016</a>) which is explicitly derived from and committed to affordances-as-dispositional properties of task dynamics, and the connection would never emerge from affordances-as-relations. I was surprised Bill's on Team Chemero here though, I have to admit.</span></li></ol><li><span style="font-family: arial;">Tom was right, I think, in 2000 that all affordance work was about action selection (although I don't think anyone's looked properly). 
Since then, however, I know of at least one programme that has looked at affordances in terms of action control; Bingham's reach-to-grasp work I've been <a href="https://psychsciencenotes.blogspot.com/search/label/reach-to-grasp" target="_blank">reviewing lately</a>. At least some of these papers have explicitly examined movement kinematics with respect to the affordance properties. While I still need to look properly, I suspect this may be it.</span></li><ol><li><span style="font-family: arial;">This, in part, explains why it never occurred to me not to connect affordances to action control. I've known about Bingham's stuff for years, it was going on in the lab when I was there and Geoff, of course, taught me most of what I know. Geoff is where my affordances-as-properties-of-task-dynamics comes from, as well. What intrigues me now is why Geoff pursued affordances this way but apparently no one else did. </span></li><li><span style="font-family: arial;">This issue is also at the heart of my vague discontent with the affordance literature. I've thought for a long time that we haven't been fully walking our talk about affordances, and I had connected that to the fact we aren't doing enough information-about-affordances research. I now think that all this might be connected to the fact people aren't studying affordances for action control. </span></li></ol><li><span style="font-family: arial;">Fajen's affordance-based control is absolutely the right move. I've known this ever since I finally got my head around it, and I've been linking it to my thinking about affordances and UCM from the start. But I also now see much more clearly the context he was working in, and why it was such a radical shift. Understanding the context is a really interesting part of connecting my work to the rest of ecological psychology, so getting clear on this link is important for my thinking. 
</span></li></ol><div><span style="font-family: arial;">What I've learned is that affordances-for-action-control is not as obvious as I thought it was, and neither was Fajen's affordance-based control. I need to get my head much more deeply into Warren's approach because I need to understand the differences, as well. I think affordances for both selection <i>and</i> control is the way to go, but apparently I will have work to do to convince everyone!</span></div><div><span style="font-family: arial;"><br /></span></div><h3 style="text-align: left;"><span style="font-family: arial;">References</span></h3><div><span style="font-family: arial;"><a href="https://avant.edu.pl/wp-content/uploads/ABFZJSGP-On-possibilities-for-action.pdf" target="_blank">Barsingerhorn, A. D., Zaal, F. T., Smith, J., & Pepping, G. J. (2012). On possibilities for action: The past, present and future of affordance research. <i>Avant, 3(2)</i>, 54-69.</a></span></div><div><span style="font-family: arial;"><br /></span></div><div><div><a href="https://osf.io/preprints/psyarxiv/u4j2g/" target="_blank"><span style="font-family: arial;">Bennett, T., Thomas, L., & Wilson, A. D. (2023, April 3). Affordances for Throwing: An Uncontrolled Manifold Analysis. https://doi.org/10.31234/osf.io/u4j2g</span></a></div></div><p></p><div><a href="https://www.tandfonline.com/doi/pdf/10.1080/10407410701557877" target="_blank"><span style="font-family: arial;">Fajen, B. R. (2007). Affordance-based control of visually guided action. <i>Ecological Psychology, 19(4)</i>, 383-410.</span></a></div><div><span style="font-family: arial;"><br /></span></div><div><a href="https://www.tandfonline.com/doi/pdf/10.1207/S15326969ECO1201_1" target="_blank"><span style="font-family: arial;">Stoffregen, T. A. (2000). Affordances and events. 
<i>Ecological Psychology, 12(1)</i>, 1-28.</span></a></div><div><span style="font-family: arial;"><br /></span></div><div><a href="https://www.sciencedirect.com/science/article/pii/S0166411508625649" target="_blank"><span style="font-family: arial;">Warren, W. H. (1988). Action modes and laws of control for the visual guidance of action. In<i> Advances in psychology (Vol. 50, pp. 339-379)</i>. North-Holland.</span></a></div><div><span style="font-family: arial;"><br /></span></div><div><a href="https://cognitioninaction.files.wordpress.com/2016/07/wilson-et-al-in-press-using-task-dynamics-to-quantify-the-affordances-of-throwing-for-long-distance-and-accuracy.pdf" target="_blank"><span style="font-family: arial;">Wilson, A. D., Weightman, A., Bingham, G. P., & Zhu, Q. (2016). Using task dynamics to quantify the affordances of throwing for long distance and accuracy. <i>Journal of Experimental Psychology: Human Perception and Performance, 42(7),</i> 965-981. </span></a></div><p></p><div><a href="https://cognitioninaction.files.wordpress.com/2018/11/wilson-zhu-bingham-the-ecological-approach-to-throwing-for-distance-and-accuracy.pdf" target="_blank"><span style="font-family: arial;">Wilson, A. D., Zhu, Q., & Bingham, G. P. (2019) Affordances and the ecological approach to throwing for long distances and accuracy. In <i>M. Cappuccio (Ed.), The Handbook of Embodied Cognition and Sport Psychology, pp. 581-600</i>. 
MIT Press.</span></a></div>

Effectivities for Reach-to-Grasp Actions (2023-05-18)
<p><span style="font-family: arial;"><a href="http://psychsciencenotes.blogspot.com/2023/05/affordances-for-spatial-structure-of.html" target="_blank">I just reviewed the affordance properties that produce the spatial structure in reach-to-grasp actions</a>, and there's an unquestioned assumption lurking in that analysis. Luckily, <a href="https://palab.sitehost.iu.edu/Resources/Publications/2010-2019/2011_Mon-Williams_&_Bingham_(2011).pdf" target="_blank">Mon-Williams & Bingham (2011)</a> actually did question it, so I wanted to cover that part of the paper here.</span></p><p><span style="font-family: arial;">The assumption in the analyses I described last time is that the unit of action is the aperture between the finger and thumb, and not the fingers themselves. Bingham refers to this as an <i>opposition vector</i> (<a href="https://palab.sitehost.iu.edu/Resources/Publications/1980-1989/Iberall_Bingham_Arbib1986.pdf" target="_blank">Iberall, Bingham, & Arbib, 1986</a>; <a href="https://www.academia.edu/download/37404381/van_Bergen_et_al__07_-_Eff_f_orientation_on_prehention_MT.pdf" target="_blank">van Bergen et al, 2007</a>). In some ways, this is a weird idea; the action system working to control a space between limbs, and not the limbs! <a href="https://citeseerx.ist.psu.edu/document?repid=rep1&type=pdf&doi=3c2848ddd06061a95c77850f3b33011682c1105f" target="_blank">Smeets & Brenner (1999)</a> proposed that grasping is actually about the two limbs. 
Mon-Williams & Bingham tested these hypotheses and found evidence in favour of the opposition vector.</span></p><p><span style="font-family: arial;">I want to walk through this in a little detail, though, as of course identifying the relevant elemental variables is part of a UCM analysis, and affordance research helps here too. The task analysis that reveals affordance property options also points to effectivity property options (at least it should - these are complementary after all!). But another part of the UCM approach is that it can, in principle, test hypotheses about elemental and performance variables, so I want to lay this out as well.</span></p><span><span style="font-family: arial;"><a name='more'></a></span></span><h3 style="text-align: left;"><span style="font-family: arial;">Opposition</span></h3><div><span style="font-family: arial;">Hands provide us with the usual problem: a huge number of degrees of freedom. What constrains how we use these so that the control problem is manageable? Iberall et al (1986) take a task analysis approach, examining what is required to achieve various goals and using that to constrain the analysis. It's an early version of the affordance analysis Bingham does in the 2011 paper (Bingham was doing a lot of this sort of thing in the 80s: basically doing affordances without using the word, as a way to get the stuff published). </span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">Reaching to grasp and manipulate objects imposes a variety of constraints on hand posture. Iberall et al detail how these constraints are mostly about forces, and specifically the need to balance and cancel out a variety of force vectors. Lifting an object means overcoming gravity, for example; maintaining a grip means generating enough friction, and so on. A stable grip is one where all the forces are in equilibrium; this is generally achieved by <i>opposition</i>. 
If you place two surfaces parallel to one another (within the tolerances afforded by friction), these can each exert a force of equal magnitude in opposite directions along an <i>opposition axis</i>. What forces remain to be wrangled then depends on where that opposition axis is relative to the object centre of mass. </span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">In reach-to-grasp movements, this opposition axis lives in the space between the finger and thumb (the two roughly parallel and opposable surfaces). This axis has a length (the grip aperture) and an orientation, and the goal of a reach-to-grasp movement is to place this axis across an object in a way that enables the various forces to be balanced out. </span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">This analysis is a functional (affordance) analysis of the key constraint on hand degrees of freedom and their resulting kinematics, specifically force balance. This task dynamical affordance analysis then implies an effectivity, the opposition axis. van Bergen et al (2007) adapted the opposition axis idea to an <i>opposition vector</i>, to emphasise that this axis has both a magnitude and an orientation, but this doesn't fundamentally alter the concept. </span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">Mon-Williams & Bingham (2011) explicitly analysed the spatial behaviour of this axis as a function of target affordance properties, because of this functional task analysis of reaching-to-grasp. 
Fingers get placed so as to place the axis somewhere appropriate to the object, and here the axis is the unit of analysis and the proposed unit being controlled.</span></div><h3 style="text-align: left;"><span style="font-family: arial;">Finger Control</span></h3><div><span style="font-family: arial;">There is another way to create the necessary opposition axis, and that is to independently control the fingers and thumb and land them separately. This was proposed by Smeets & Brenner (1999). </span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">As you can tell by the dates, Smeets & Brenner weren't targeting the affordance analysis. Their problem was with the more standard analysis of reaching coming from the classic work of Jeannerod. This work identified the two major components of reach-to-grasp actions (transport and grasp), identified that they were distinct components (transport is about the wrist/hand system, grasping is about the fingers), and identified that each depended on different properties of the object to be grasped (transport depends on extrinsic properties such as object location in space, grasp depends on intrinsic properties such as object size). This comes with another element for Jeannerod, specifically that intrinsic and extrinsic properties are processed via separate neural channels.</span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">Smeets & Brenner first note that because objects are typically not symmetrical, it can readily be shown that this extrinsic/intrinsic transport/grasp distinction does not hold up; intrinsic property changes affect transport and so on. 
Second, the anatomical distinction between transport (of the hand) and grasp (using the fingers) also isn't clean, either in terms of the neuroanatomy or even behaviourally (for example, Alan Wing famously proposed that what is being transported is primarily the thumb, not the wrist). So these two components aren't cleanly separable in terms of perception or action distinctions (defined very internally, I'll note). Finally, they note that (at the time they wrote) there was only one <i>model</i> of grasping, and it was designed to reproduce the empirically observed kinematics, rather than explain how these might emerge. Essentially it had done the standard motor control thing, which was to take the observed kinematics and simply credit all of them to custom-built controllers. These worries are, I think, all broadly warranted.</span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">They then make their move. Wing thinks the thumb is what is being transported and targeted because thumb variability decreases on approach while the wrist variability remains the same. Based on one paper and a personal communication with Wing, Smeets and Brenner propose the same could be true of the finger. So, to quote them, they 'abandon the grip as a variable in our model of grasping'. They take the Iberall functional analysis, and decide that the key to grasping is finding the right positions to grasp the objects, and landing the fingers there. They then argue that all that's required to land on these positions is to control the fingers so as to end up perpendicular to the surfaces. </span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">This is where it all falls apart:</span></div><blockquote><div><span style="font-family: arial;">To plan how to grasp an object, the nervous system begins by determining suitable <i>positions</i> on the object's surfaces. 
How these positions are determined is a problem beyond the scope of this paper. Our approach is to leave out any other information processing (determining the object's size, etc) and regard grasping as nothing more than moving the thumb and fingers to these positions</span></div></blockquote><p><span style="font-family: arial;">This is, frankly, an insane thing to say. First, it does exactly what they accuse the earlier model of doing: simply giving the model a solution to a key part of the problem. As Mon-Williams & Bingham identify, key structure in the behaviour emerges from the system figuring out where to land, so simply presenting the model with a solution is a terrible modelling strategy. Second, to say that grasping is about placing the finger and thumb in good positions for the object, and to then say figuring out how those positions are identified is beyond the scope of this work, is astonishing and frankly I'm amazed it got published. </span></p><h3 style="text-align: left;"><span style="font-family: arial;">So Which Is It?</span></h3><div><span style="font-family: arial;">Smeets & Brenner review a couple of papers that suggest the finger and thumb do get controlled separately sometimes and that their kinematics do separately look like pointing movements. But they never ask the key question, which is do the finger and thumb look like they are working as a synergy? Mon-Williams & Bingham test exactly this, by looking at the variability of the thumb and finger positions at the terminal grip aperture. If Smeets & Brenner are right, these variabilities should be independent of each other; if they are instead working as a synergy, the variability should be negatively correlated (as variation in one is compensated for by variability in the other). The result was clear: the finger and the thumb variability show the clear signature of a synergy (or coordinative structure, to use the term Mon-Williams & Bingham use). 
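The synergy prediction Mon-Williams & Bingham tested can be illustrated numerically. What follows is my own sketch in Python, not their analysis code: it simulates terminal thumb and finger deviations under the two hypotheses, with deviations measured along each digit's own "opening" direction, so the aperture error is their sum. Under independent control the digit errors are uncorrelated; under a synergy they covary negatively and aperture variance collapses.

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials = 200

# Independent-control regime (Smeets & Brenner style): each digit has its
# own endpoint noise, uncorrelated with the other digit.
thumb_ind = rng.normal(0, 2.0, n_trials)    # deviations in mm
finger_ind = rng.normal(0, 2.0, n_trials)

# Synergy regime (opposition vector): a shared aperture goal is stabilised,
# so a deviation in one digit is compensated by the other, plus a small
# residual aperture error.
shared = rng.normal(0, 2.0, n_trials)
thumb_syn = shared
finger_syn = -shared + rng.normal(0, 0.5, n_trials)

for label, t, f in [("independent", thumb_ind, finger_ind),
                    ("synergy", thumb_syn, finger_syn)]:
    r = np.corrcoef(t, f)[0, 1]
    aperture_var = np.var(t + f)             # variance of the aperture error
    summed_var = np.var(t) + np.var(f)       # what independence would predict
    print(f"{label}: corr = {r:+.2f}, "
          f"aperture var / summed var = {aperture_var / summed_var:.2f}")
```

The second printed ratio is the signature: near 1 when the digits are independent, far below 1 when compensation is stabilising the aperture. This is the same compensation logic that UCM-style analyses formalise.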
Another study perturbed the thumb target and found effects on both the thumb and finger trajectories, more evidence they are coupled to each other (Van de Kamp & Zaal, 2007). This data clearly favours the opposition axis analysis.</span></div><h3 style="text-align: left;"><span style="font-family: arial;">Summary</span></h3><div><span style="font-family: arial;">Any affordance hypothesis implies an effectivity hypothesis too, because these are complementary dispositions. The affordance analysis of reaching-to-grasp came with an effectivity analysis of the hand acting to create an opposition axis for the nervous system to control. The data strongly support both the affordance and the effectivity analysis over the alternative independent targeting hypothesis. </span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">The Smeets & Brenner hypothesis suffers from several fairly serious conceptual flaws. The most important is to simply present the target positions to the model as a given, without engaging with the process of perceiving where to land given the task dynamics. This analytically separates perception from action and treats perception as the easy bit. Ecologically, this is <i>the</i> conceptual error; empirically, this analysis missed the fact that the process of perceiving where to land affects reach kinematics (this is the Mon-Williams & Bingham data). All this points to the problem of subtracting out part of a nonlinear process and expecting linear consequences. 
It's genuinely wild to me that this analysis gets any time at all, frankly (cited nearly 600 times!!)</span></div>

Affordances for the Spatial Structure of Reach-To-Grasp (Mon-Williams & Bingham, 2011) (2023-05-16)
<p><span style="font-family: arial;">I have reviewed <a href="http://psychsciencenotes.blogspot.com/2023/05/the-spatial-and-temporal-structure-of.html" target="_blank">the spatial and temporal structure of reach-to-grasp movements</a>, and <a href="http://psychsciencenotes.blogspot.com/2023/05/the-task-dynamics-of-reaching-to-grasp.html" target="_blank">the task dynamical analysis that has connected affordance properties and reach-to-grasp effectivities</a>. Now it's time to work through some papers exploring this task using this analysis.</span></p><p><span style="font-family: arial;">The first paper is <a href="https://palab.sitehost.iu.edu/Resources/Publications/2010-2019/2011_Mon-Williams_&_Bingham_(2011).pdf" target="_blank">Mon-Williams & Bingham (2011)</a>. The goal of this paper was to investigate what target properties shape the spatial structure of the reach-to-grasp movement. This means the behaviour of the hand as it forms a grip aperture and then encloses and contacts the object. Specifically, we want to examine the maximum grip aperture (MGA), which occurs partway through the reach and is larger than the object, and the terminal grip aperture (TGA), which occurs when the hand has stopped moving and the hand encloses the object, but before the fingers are in contact with the object. The question is, what object properties determine how these apertures are scaled? 
</span></p><span style="font-family: arial;"><a name='more'></a></span><h2 style="text-align: left;"><span style="font-family: arial;">Design</span></h2><h3 style="text-align: left;"><span style="font-family: arial;">Independent Variables</span></h3><div><span style="font-family: arial;">Targets were placed at a constant 30cm distance from the hand's start location, directly in front of the participant. Targets were all 9cm high, and 4cm deep, with a dowel inserted through the top to create landing sites. They varied in Width (3 levels) and Grip Area (3 levels); there were therefore 9 different objects, each with a corresponding maximum object extent (MOE). See Figure 1. </span></div><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj7gvQFvQV3Ti-m8l8uwCYPeRSTrcFkjZ9RUQlVOh-G1vzYWtmhOavwLcPCa-U02uq0_67uhBhMfTNza3oBGr01m86mda1f8laLCAwvvedsXKLCWg-4EarmBDa9uhtR7ENpah-wz_olmNd4rmV3jPHogr59BjHqRv4gV1GRDUjxKOUU3ATesCCthDr6/s4400/Targets.png" style="margin-left: auto; margin-right: auto;"><span style="font-family: arial;"><img border="0" data-original-height="3379" data-original-width="4400" height="246" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj7gvQFvQV3Ti-m8l8uwCYPeRSTrcFkjZ9RUQlVOh-G1vzYWtmhOavwLcPCa-U02uq0_67uhBhMfTNza3oBGr01m86mda1f8laLCAwvvedsXKLCWg-4EarmBDa9uhtR7ENpah-wz_olmNd4rmV3jPHogr59BjHqRv4gV1GRDUjxKOUU3ATesCCthDr6/s320/Targets.png" width="320" /></span></a></td></tr><tr><td class="tr-caption" style="text-align: center;"><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td class="tr-caption" style="text-align: center;"><span style="font-family: arial;">Figure 1. 
Maximum Object Extents and Angles for each of the 9 targets</span></td></tr></tbody></table></td></tr></tbody></table><div class="separator" style="clear: both; text-align: left;"><span style="font-family: arial;"><br /></span></div><span style="font-family: arial;">The maximum object extent is the hypotenuse of a right-angled triangle, with Object Width and Dowel Diameter as the other two sides. This has a useful feature: it makes the relationship between MOE and Width non-linear, which will help us tell their effects on grip apertures apart later on. </span><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">One final factor was Reach Speed (Normal, Slower, Faster). These were self-paced and different for each participant, but reliably produced by each participant. This is a typical way to manipulate things when studying affordances: people will all have a different normal/comfortable speed, so setting specific speeds as targets would affect people differently. This way, everyone is reaching in similar ways relative to their abilities/intrinsic dynamics. It is an assumption, though, so worth testing in the future. </span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">Participants made 10 reaches in each condition (3 Widths x 3 Grasp Surface Sizes x 3 Speeds) for a total of 270 reaches. </span></div><div><h3 style="text-align: left;"><span style="font-family: arial;">Dependent Variables</span></h3></div><div><span style="font-family: arial;">Three markers were placed on the participant's hand; one on the wrist, one on the thumb, one on the index finger. From these position time series we can compute all the relevant spatial and temporal features of the reach-to-grasp action. 
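As a concrete illustration of these dependent variables, here is a sketch of how the grip measures could be computed from the three marker time series. This is hypothetical code, assuming 3D marker positions sampled at a fixed rate and using the paper's 5 cm/s wrist-speed criterion for movement start and end; the function and variable names are mine, not the authors'.

```python
import numpy as np

def grip_metrics(wrist, thumb, finger, hz=100.0, stop_thresh=5.0):
    """Spatial grip measures from (n_samples, 3) marker arrays in cm.

    Illustrative sketch only: marker layout, sample rate, and axis
    conventions are assumptions, not the paper's processing code.
    """
    # Grip aperture: distance between thumb and index finger markers, per frame.
    aperture = np.linalg.norm(finger - thumb, axis=1)

    # Wrist speed (cm/s) from the position time series.
    speed = np.linalg.norm(np.gradient(wrist, 1.0 / hz, axis=0), axis=1)

    # Movement start/end: first frame above, then first frame back below, threshold.
    moving = speed > stop_thresh
    onset = int(np.argmax(moving))
    offset = onset + int(np.argmax(~moving[onset:]))

    mga = aperture[onset:offset + 1].max()   # maximum grip aperture during the reach
    tga = aperture[offset]                   # terminal grip aperture when the wrist stops

    # Orientation of the aperture vector in the roll plane (degrees).
    v = finger[offset] - thumb[offset]
    angle = np.degrees(np.arctan2(v[2], v[1]))
    return mga, tga, angle

# Toy demonstration: wrist translates then stops; aperture opens then closes.
t = np.linspace(0.0, 1.0, 101)
wrist = np.column_stack([30.0 * np.minimum(t, 0.6), np.zeros_like(t), np.zeros_like(t)])
ap = np.concatenate([np.linspace(2.0, 10.0, 51),       # opening phase
                     np.linspace(10.0, 4.0, 31)[1:],   # closing phase
                     np.full(20, 4.0)])                # hold
thumb = np.zeros((101, 3))
finger = np.column_stack([np.zeros(101), ap, np.zeros(101)])

mga, tga, angle = grip_metrics(wrist, thumb, finger)
print(f"MGA = {mga:.1f} cm, TGA = {tga:.1f} cm, angle = {angle:.1f} deg")
```

Note that TGA is read off at the wrist-stop frame, partway down the closing phase, which is exactly why it can differ from both the MGA and the final contact aperture.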
Hand movement beginning and end were set as the times when the wrist marker speed first rose above and then first fell below 5cm/s (a threshold that copes with measurement error and small movements in an essentially still wrist). There was also a marker on the target, used to confirm it was not moved.</span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">The grip aperture is the distance between the thumb and finger markers. This reaches a maximum (the MGA) and also has a value when the wrist stops (the TGA). This aperture, if treated as a vector, also has an orientation in the roll axis; this is the Angle. </span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">All variables were averaged across the 10 repetitions.</span></div><h3 style="text-align: left;"><span style="font-family: arial;">Analysis</span></h3><div><span style="font-family: arial;">The primary analysis used was multiple regression, trying to predict average Maximum Grip Aperture, Terminal Grip Aperture, and TGA Orientation. Factors included were Grasp Surface Size, Width, Maximum Object Extent, Angle, and Speed (coded as -1, 0, +1). Two regression models were applied:</span></div><div><ol style="text-align: left;"><li><span style="font-family: arial;">Affordance properties: MOE, Angle, Speed, MOE x Speed, Angle x Speed, MOE x Angle</span></li><li><span style="font-family: arial;">Basic Properties: Grasp Surface Area, Width, Speed, Grasp x Speed, Width x Speed, Grasp x Width</span></li></ol><div><span style="font-family: arial;">In each analysis, non-significant factors were removed until only significant factors remained. The goal is to see which object properties are required to explain variability in the spatial structure of the grip. 
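The regression-with-pruning strategy described above can be sketched in code. This is a hypothetical reconstruction with simulated data, not the paper's actual analysis: it fits an ordinary least squares model, drops the least significant predictor while any exceed the criterion, and refits. P-values use a normal approximation to the t distribution (reasonable at n = 270), and the simulated 'MGA' is built from MOE, Speed, and their interaction, so an irrelevant Angle predictor should be eliminated.

```python
import numpy as np
from statistics import NormalDist

def backward_eliminate(X, y, names, alpha=0.05):
    """OLS with backward elimination: drop the least significant predictor
    until all survivors are significant. Returns {name: coefficient}."""
    X = X.copy()
    names = list(names)
    while True:
        A = np.column_stack([np.ones(len(y)), X])        # intercept + predictors
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ beta
        sigma2 = resid @ resid / (len(y) - A.shape[1])   # residual variance
        se = np.sqrt(np.diag(sigma2 * np.linalg.inv(A.T @ A)))
        # Two-tailed p-values via a normal approximation to the t distribution.
        p = 2 * np.array([NormalDist().cdf(-abs(b / s)) for b, s in zip(beta, se)])
        p_pred = p[1:]                                   # never drop the intercept
        worst = int(np.argmax(p_pred))
        if p_pred[worst] <= alpha:
            return dict(zip(names, beta[1:]))
        X = np.delete(X, worst, axis=1)
        del names[worst]

# Simulated data in the spirit of the design: 270 reaches, 'MGA' driven by
# MOE, Speed, and MOE x Speed, with Angle included but irrelevant.
rng = np.random.default_rng(42)
n = 270
moe = rng.uniform(4.0, 9.0, n)            # maximum object extent, cm
angle = rng.uniform(20.0, 70.0, n)        # grip angle, deg (no real effect here)
speed = rng.choice([-1.0, 0.0, 1.0], n)   # Slower / Normal / Faster coding
mga = 5.0 + 1.1 * moe + 0.8 * speed + 0.2 * moe * speed + rng.normal(0.0, 0.3, n)

X = np.column_stack([moe, angle, speed, moe * speed])
surviving = backward_eliminate(X, mga, ["MOE", "Angle", "Speed", "MOE x Speed"])
print("surviving terms:", sorted(surviving))
```

Running both competing factor sets this way, and seeing which one retains its terms and explains the variance, is the logic behind comparing the affordance-property and basic-property models.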
</span></div></div><h2 style="text-align: left;"><span style="font-family: arial;">Experiment 1 - Reach-to-Grasp Upright Objects Without Moving Them</span></h2><h3 style="text-align: left;"><span style="font-family: arial;">TGA Grip Orientation</span></h3><div><span style="font-family: arial;">The only significant factor was maximum object extent. As MOE increased, Angle decreased, and there was structure in the variability: standard deviations of the orientation were larger for larger angles. See Figure 2 in the paper. </span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">This result doesn't quite make sense to me; the available Angle range doesn't scale linearly with MOE and so it's strange to me that the grip aperture orientation is scaling with MOE and not Angle. I suspect it will be worth looking into this aperture orientation in finer detail throughout the reach. However, one key result is that there is plenty of variability in the aperture orientation, which confirms it is a degree of freedom that needs to be controlled and that the maximum object extent really is likely to be an important object property.</span></div><h3 style="text-align: left;"><span style="font-family: arial;">Maximum Grip Aperture Size</span></h3><div><span style="font-family: arial;">The primary finding was that the MGA was scaled as a function of the Maximum Object Extent and of Speed (bigger MGA for larger MOE and higher Speed). In the regression, the significant terms were MOE, Speed, and MOE x Speed. See Figure 3 in the paper. MGA is a simple linear function of the MOE, with a Speed-specific safety margin added in (each condition added about 11% to the margin, a nice result suggesting the speed manipulation was working as intended). 
</span></div><h3 style="text-align: left;"><span style="font-family: arial;">Terminal Grip Aperture</span></h3><div><span style="font-family: arial;">The primary finding was that the TGA was scaled as a function of Object Width (bigger TGA for wider objects) and Speed (smaller TGA for higher Speeds). In the regression, the significant terms were Width and Speed x Grasp Surface Area. See Figure 3 in the paper (note that it's a busy figure - a Bingham classic. Just pay careful attention to the legend and caption). TGA is a simple linear function of Object Width, with a decreasing safety margin as Speed increased. </span></div><h2 style="text-align: left;"><span style="font-family: arial;">Experiment 2: Reach-to-Grasp and Lift</span></h2><div><span style="font-family: arial;">This replicated the design of Expt 1, but with the added instruction that the objects were to be picked up. There are two solutions to this version: simply add a lift to the patterns seen in Expt 1, or pick the objects up without slowing to a stop. Participants produced both solutions; fly-throughs were more common for large Angles (an easier grasp task) and for higher speeds (saves slowing down). Only grasp dynamics mattered for this decision, it seemed.</span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">Trials were split as a function of whether people Stopped or Flew Through. Stop trials are the reach solution seen in Expt 1, so these were checked to see if they replicated that pattern. All the results from above replicated in these trials. </span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">Fly Through trials don't have a TGA (the wrist never stops moving) so they compared the MGA orientation and size behaviour. They could only use the Medium speed condition, as there were too few fly-throughs at low speeds and too few stops at high speed. 
In both types of reaches, though, the results were the same. MGA orientation varied only with MOE. MGA size varied with MOE and Angle, although the latter only explained 0.7% of the variance. </span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">Fly Through and Stop trials therefore showed basically the same structure, with the former 15% faster. The spatial structure of both reach types is responding to the same task dynamics.</span></div><h2 style="text-align: left;"><span style="font-family: arial;">Experiment 3: Slam</span></h2><div><span style="font-family: arial;">One of the things that had inspired this study was the fact that reach-to-grasp is often studied with objects lying down on the table. This removes the collision avoidance dynamics, as the object can't be knocked over, and the MOE only emerges as a relevant affordance property from those dynamics. In addition, the hand can simply be stopped by the table, so the whole task structure changes. A third experiment looked to see what this set-up does to the spatial structure of reaches.</span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">In this study, objects of 5 different lengths were placed flat on the table and participants had to grasp them (no lifting) along that length, at one of three speeds. (It's unclear why the objects and grasp task were altered in this way.) </span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">Under these conditions, MGA increased with Object Size (not MOE) and with Speed. TGA again increased with Object Size and decreased with Speed. In addition, the safety margins were reduced and (for larger objects) essentially zero. 
Collision-avoidance is not a dynamic for reaches under these conditions; in fact, participants simply took advantage of being able to use the table to stop their fingers, rather than slowing the wrist and placing the fingers. The change in task dynamic produced qualitatively different reaches. </span></div><h2 style="text-align: left;"><span style="font-family: arial;">Summary</span></h2><div><span style="font-family: arial;">There's a ton going on in this paper and I still haven't covered it all. The goal here was just to go over the basic methods, analyses, and design issues. </span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">The first key move was to test the (actually typical) case of reaching to grasp an object you could knock over (following Rosenbaum et al, 1999). The second was to split out the various task dynamics and relate them to different parts of the reaching behaviour. The net result: the maximum object extent is a key property picked out by the collision-avoidance dynamics, and it's what the maximum grip aperture is scaled by. Remove those dynamics (Expt 3) and the only relevant property is object size; MGA is then scaled by that. Adding a lifting component (Expt 2) creates the possibility of two different responses (Stop vs Fly Through) but it doesn't seem to affect the transport then grasp while avoiding collision dynamics (both kinds of reach behave the same up until contact with the object). </span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">I'm still trying to figure out why the grip orientation was affected by MOE the way that it was. It seemed obvious that it should be constrained by the Angle defined by the MOEs (see Figure 1), but it wasn't. </span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">The next trick is to figure out the form of a UCM/TNC analysis. 
</span></div>Andrewhttp://www.blogger.com/profile/16732977871048876430noreply@blogger.com0tag:blogger.com,1999:blog-9192597712746432631.post-41703621717036600372023-05-11T11:30:00.001+01:002023-05-15T10:20:38.309+01:00The Task Dynamics of Reaching-to-Grasp<p><span style="font-family: arial;">In the <a href="https://psychsciencenotes.blogspot.com/2023/05/the-spatial-and-temporal-structure-of.html" target="_blank">last post</a>, I reviewed the basic form of the reach-to-grasp task and the basic spatial and temporal structure of the resulting reach-to-grasp action. I'm shortly going to review three papers by Bingham about where all this structure comes from, but first I wanted to sketch out the task analysis those papers will rely on. </span></p><p><span style="font-family: arial;">The question at hand is, in the context of reaching-to-grasp an object, what are the relevant object affordances? What follows is derived from <a href="https://palab.sitehost.iu.edu/Resources/Publications/2010-2019/2011_Mon-Williams_&_Bingham_(2011).pdf" target="_blank">Mon-Williams & Bingham (2011)</a>, which I will review fully in the next post. I've tried to fully flesh it out, though, to be as complete as possible. The goal is to lay out the likely relevant task dynamics; this leads to specific predictions about which manipulations should affect which parts of the reach-to-grasp action.</span></p><span style="font-family: arial;"><a name='more'></a></span><div style="text-align: left;"><span style="font-family: arial;">Overall, reaching-to-grasp an object is a task dynamic made up of three separable component task dynamics: <i>transport, then grasp, while avoiding collision</i>. Each of these three dynamics picks out a different set of object properties (affordances) and organism properties (effectivities) that must be made to complement each other. 
When the whole set is brought together, a reach-to-grasp happens.</span></div><div style="text-align: left;"><span style="font-family: arial;"><br /></span></div><h3 style="text-align: left;"><span style="font-family: arial;">Reaching as Two Targeting Tasks</span></h3><div><span style="font-family: arial;">The first feature of a reach-to-grasp movement is that it is two targeting tasks. The hand must be moved so as to intercept a target (<i>transport</i>), and the fingers & thumb must be targeted to land in a specific location on the object (<i>grasp</i>). </span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">There are a few reasons to think these are separable task dynamics. First, you can transport without grasping, and grasp without transporting. Second, they pick out different object properties (different affordance properties). Third, these properties affect different measurable parts of the reach-to-grasp action. Fourth, there is a distinct break in the movement; the smooth reaching movement gets the hand to the target, the wrist comes to a stop, and then the fingers land. </span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">To successfully transport the hand to the target, the system needs to perceive where the object is (heading in 3D space, and (with no obstacles) straight line distance, in an egocentric frame of reference). Evidence that targeting task dynamics matter includes the fact that reach kinematics are scaled by distance (peak speed increases with required distance, for example). </span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">To successfully land the fingers and thumb and grasp, the system needs to perceive the landing area; where it is (but that's part of targeting the whole hand) and how big it is (a smaller landing area requires more precise targeting). 
Evidence that these targeting dynamics matter includes movement time increasing for smaller objects (longer deceleration phase; <a href="https://www.researchgate.net/profile/Reinoud-Bootsma/publication/15143321_The_speed-accuracy_trade-off_in_manual_prehension_Effects_of_movement_amplitude_object_size_and_object_width_on_kinematic_characteristics/links/02bfe50d2ae940e261000000/The-speed-accuracy-trade-off-in-manual-prehension-Effects-of-movement-amplitude-object-size-and-object-width-on-kinematic-characteristics.pdf" target="_blank">Bootsma et al, 1994</a>), and the existence of two distinct phases in the reach movement (hand speeds up then comes to zero, and <i>then </i>the fingers are landed). </span></div><h3 style="text-align: left;"><span style="font-family: arial;">Reaching as Obstacle Avoidance</span></h3><div><span style="font-family: arial;">This is somewhat counter-intuitive, but part of successfully reaching-to-grasp is not initially hitting that target! If you reach to grasp an object and don't form a wide enough aperture, your fingers will hit the object and maybe knock it over, or damage it, or otherwise interfere with successfully grasping it <a href="https://link.springer.com/content/pdf/10.1007/s002210050823.pdf" target="_blank">(Rosenbaum et al, 1999)</a>. This is a distinct constraint from targeting, and different dynamical properties of the object matter. To figure these out, we need to consider the degrees of freedom in the hand's motion that can lead to collision or no collision. </span></div><div style="text-align: left;"><span style="font-family: arial;"><br /></span></div><div style="text-align: left;"><span style="font-family: arial;">First, the size of the object clearly matters for the obstacle avoidance part, you have to be able to fit the object into the gap created by the hand. But it's not simply the width of the object! 
Reaching entails targeting an aperture formed by a finger and thumb, and that aperture has a length but also an orientation in 3D space (<a href="https://link.springer.com/article/10.1007/s00221-006-0722-1" target="_blank">van Bergen et al, 2007</a>). The rigid wrist-thumb-forefinger system can pitch, roll, and yaw and therefore so can the aperture.</span></div><div style="text-align: left;"><span style="font-family: arial;"><br /></span></div><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEisUVSw9H_PeEUltcXwh5K8zBchXm3vfW7ZcpbxWTgl0IaU4aqSZNFKolDjQdsdKqTE9BF2Br4mIKFJN1_ATrXpp-p6sVrJEW3wayLw8a9AR1_M-i_76Y59sl05hWTh-x9Vrtz3jV2kbzRslOKW6uwElKB6bAW-PUfFCR0daXjbPhfikSYOKc3lekU5/s850/a-Pitch-yaw-and-roll-angles-of-an-aircraft-with-body-orientation-O-u-v-original.ppm" style="margin-left: auto; margin-right: auto;"><span style="font-family: arial;"><img border="0" data-original-height="603" data-original-width="850" height="227" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEisUVSw9H_PeEUltcXwh5K8zBchXm3vfW7ZcpbxWTgl0IaU4aqSZNFKolDjQdsdKqTE9BF2Br4mIKFJN1_ATrXpp-p6sVrJEW3wayLw8a9AR1_M-i_76Y59sl05hWTh-x9Vrtz3jV2kbzRslOKW6uwElKB6bAW-PUfFCR0daXjbPhfikSYOKc3lekU5/w320-h227/a-Pitch-yaw-and-roll-angles-of-an-aircraft-with-body-orientation-O-u-v-original.ppm" title="Figure 1. Pitch, roll, yaw" width="320" /></span></a></td></tr><tr><td class="tr-caption" style="text-align: center;"><span style="font-family: arial;">Figure 1. Pitch, roll, yaw</span></td></tr></tbody></table><span style="font-family: arial;"><br /></span><div><span style="font-family: arial;">A reach involves placing a finger and a thumb on either side of an object; it effectively only interacts with two dimensions at a time. 
For a reach-to-grasp using the vertical sides, pitch variation has no collision implications (it would matter for a grasp using the top and bottom, or if you are an airplane). </span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">Using the vertical sides, yaw matters, because it reduces the effective length of the grasp aperture (foreshortening) and may also move the finger or thumb to within the bounds of the object. You can eliminate the concern about yaw by forcing people to grip the object on points laid out in the fronto-parallel plane, or you can investigate its effects by varying the landing locations allowed (that tends to get studied in the context of shape perception, e.g. <a href="https://palab.sitehost.iu.edu/Resources/Publications/2000-2009/Lee_etal2008.pdf" target="_blank">Lee et al, 2008</a>). </span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">For the standard lab reach-to-grasp task, the big issue is roll. If your hand is tilted, the functional width of the object is the length of the diagonal (effectively, the hypotenuse of a right-angled triangle; see Figure 2). A grip aperture scaled to the object's width will be too small unless your tilt is zero. Given that reaching involves transport but also targeting for grasp constrained by the need to avoid a collision, this diagonal (which Mon-Williams & Bingham call the Maximum Object Extent, MOE) is the relevant size property. </span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">Roll creates a related issue about size, this time for the grasp targeting component. Roll must not be allowed to exceed the angle defined by the target landing areas, or else the fingers will miss. 
So it's again not object size per se that matters, but this angle; this should constrain the orientation of the hand and it defines the MOE.</span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">To summarise: </span></div><div><ol style="text-align: left;"><li><span style="font-family: arial;"><i style="font-weight: bold;">Targeted Transport of the Hand:</i> the relevant object property is its location (distance and heading in 3D egocentric space). With no obstacles present, this distance is the straight-line distance from the hand start location to the target. You could simply pick a speed and only ever use that, but that's inefficient. What you want is to accelerate to the highest controllable speed. Control is affected by both the transport dynamics (hand inertia, and the resulting requirements to accelerate but then also decelerate the hand in the given space) and also the targeting dynamics (speed-accuracy issues). Target distance should affect the timing and magnitude of peak speed, while target size should affect the magnitude of peak speed. </span></li><li><span style="font-family: arial;"><i style="font-weight: bold;">Targeted Grasping of the Object:</i> this occurs after the hand has come to rest; the fingers hover momentarily with a Terminal Grip Aperture (TGA) slightly larger than the object before being placed. By this time, the finger-thumb axis is aligned with the target, so the relevant object property is the object width (and not the maximum object extent). TGA should scale with object width.</span></li><li><span style="font-family: arial;"><b><i>Collision Avoidance:</i></b> a collision happens when the hand hits the target in any way other than a targeted grasp. A reaching hand has formed an aperture, which is a gap; so that gap must be big enough to go around the object safely, under the worst case scenario. 
The worst case scenario that still produces a successful grasp is when the hand has rolled such that the fingers land on the most extreme parts of the landing site; the effective object size becomes the hypotenuse of a triangle, with the width along one edge and the height of the landing area the other edge. Two properties of the target are relevant now; the effective size (the Maximum Object Extent) and the angle at which that MOE lies (actually twice that angle, because roll can go left or right). The Maximum Grip Aperture (MGA) should scale not with object size, but with MOE; that scaling will include a safety margin; and the orientation of the grip aperture should be constrained by the MOE angle (no variability allowed beyond that range). </span></li></ol><div><span style="font-family: arial;">To enable them to study all these elements, Mon-Williams & Bingham developed these objects (see Figure 2)</span></div><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhuf-VWOjM0lXvdhw-wxbW34ehjfwZGO-e-KYU0_bumv98Jha8CYum0lRkO4M-HiQqZ_7UU4kI6jOlh44Br9yZRrUUWIbKjg5oIdzjKu2-Nbit1H1D5Xv7pKFpzXTfPP0sger3u13RIHw8yDeHbC0ixJ_0OtJPr581ZeSIngcuZ7sMs2yqglp6tYjlH/s862/ObjectToGrasp.jpg" style="margin-left: auto; margin-right: auto;"><span style="font-family: arial;"><img border="0" data-original-height="720" data-original-width="862" height="267" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhuf-VWOjM0lXvdhw-wxbW34ehjfwZGO-e-KYU0_bumv98Jha8CYum0lRkO4M-HiQqZ_7UU4kI6jOlh44Br9yZRrUUWIbKjg5oIdzjKu2-Nbit1H1D5Xv7pKFpzXTfPP0sger3u13RIHw8yDeHbC0ixJ_0OtJPr581ZeSIngcuZ7sMs2yqglp6tYjlH/s320/ObjectToGrasp.jpg" width="320" /></span></a></td></tr><tr><td class="tr-caption" style="text-align: center;"><span style="font-family: arial;">Figure 2. 
Objects from Mon-Williams & Bingham (2011)<br /></span></td></tr></tbody></table><span style="font-family: arial;"><br /></span><div><span style="font-family: arial;">The task is to reach-to-grasp these objects by placing a finger and thumb on each of the two knobs. These can be varied in area and width, to alter the accuracy requirements and alter the maximum object extent and angle. The object itself can also vary in its width. </span></div></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">We now have a range of things to measure (see <a href="https://psychsciencenotes.blogspot.com/2023/05/the-spatial-and-temporal-structure-of.html" target="_blank">the last post</a>) and a range of properties we have reason to manipulate (from the task dynamical affordance analysis). Over the years there have been about a million experiments in this task, but in the next post I will begin by reviewing experiments by Bingham explicitly motivated by the affordance analysis. </span></div>Andrewhttp://www.blogger.com/profile/16732977871048876430noreply@blogger.com0tag:blogger.com,1999:blog-9192597712746432631.post-23116158183274955992023-05-09T11:10:00.002+01:002023-05-11T11:16:30.881+01:00The Spatial and Temporal Structure of Reach-to-Grasp Movements<p><span style="font-family: arial;">This post is part of my current series where I am developing a research programme to formally connect affordances and motor abundance analyses (see the intro post <a href="http://psychsciencenotes.blogspot.com/2023/05/motor-abundance-affordances-for.html" target="_blank">here</a>). The first thing I need is a task that has a formal affordance analysis; my chosen task is reaching-to-grasp (sometimes called prehension). I'm pivoting to this task from <a href="https://psyarxiv.com/u4j2g/" target="_blank">throwing</a> for a few reasons. First, I need a simpler task. 
The logistics of data collection and analysis for throwing tasks are substantial and I just don't have the resources right now; I do have the kit for these studies. Second, I want to expand my own skill set to a new domain, and reach-to-grasp is another venerable topic. Finally, it means I get to spend time really grappling with other work by Geoff Bingham, who was my PhD supervisor and who remains the person whose work I admire the most. </span></p><p><span style="font-family: arial;">This post is about the basic form of the reach-to-grasp movements studied in the lab, what some of the key measures are, and the fact that these are sensitive to task demands (suggesting they are emerging from an active perception-action loop). In the next post I will work through the initial task analysis that leads us to candidate affordance properties. <br /></span></p><h4></h4><p></p><span style="font-family: arial;"><a name='more'></a></span><h3 style="text-align: left;"><span style="font-family: arial;">Reach-to-Grasp Movements</span></h3><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjWr5bct5PcMrCMr7Vz_pFSsRqkUb8L6QdQiJ4VIZAz0nSRUjC4gXqKAXsRcXXbA5QaHjboGizVhPxjp6nJiJNiOkv3tNOItFORnMX6ESjYAQ_vTELJcqOmKfY9uE_-FTMauB68cdGHI6H1BJmCRn4Xu5QRpQH4E_g9efZqenZNss6lcsSNcC9DBrfr/s4032/reach-to-grasp.jpg" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><span style="font-family: arial;"><img border="0" data-original-height="3024" data-original-width="4032" height="240" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjWr5bct5PcMrCMr7Vz_pFSsRqkUb8L6QdQiJ4VIZAz0nSRUjC4gXqKAXsRcXXbA5QaHjboGizVhPxjp6nJiJNiOkv3tNOItFORnMX6ESjYAQ_vTELJcqOmKfY9uE_-FTMauB68cdGHI6H1BJmCRn4Xu5QRpQH4E_g9efZqenZNss6lcsSNcC9DBrfr/s320/reach-to-grasp.jpg" width="320" /></span></a></div><span style="font-family: arial;">We have hands with opposable thumbs mounted on extendable arms, so one of the things we do 
a lot is reach out and grasp objects. There are of course many ways to do this, but the basic form of the movement is captured by the sort of task in the picture. </span><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">There is an object, with a certain size and location relative to us. We begin with our hand near us, with fingers and thumb together. This forms a pincer between the fingers and thumb (this is what 'opposable' means). We have to size this gap (called the <i>aperture</i>) to suit the object and then steer that shaped hand to enclose the object. We then need to close the gap to contact the object. We then usually pick it up; a stable lift needs the force vector between the fingers and thumb to be placed across the object, so where we land our fingers matters.</span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">You could just open your hands as wide as they'll go and move to enclose the object, but of course we don't work like this. Well-functioning reach-to-grasp movements have a consistent spatial and temporal structure that reflects the online control of the action in the context. </span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">The hand begins with the fingers and thumb together at some distance from a target with a specific set of properties. The hand then begins to move, and the finger and thumb start to separate and form the grip aperture. That aperture reaches a maximum, as does the speed; then the hand is decelerated to come in for a landing. The hand stops, with the grip aperture smaller than its peak but still bigger than the object. Then the fingers are landed, and the object has been grasped. 
So, in order, the things that get measured:</span></div><div><ul style="text-align: left;"><li><span style="font-family: arial;">The acceleration profile (the way in which speed changes over the reach). This typically is a smooth bell-shaped profile, where speed increases from zero to a maximum and then back to zero. We measure the magnitude of Peak Speed, and the timing; when in the reach does it happen? </span></li><li><span style="font-family: arial;">The grip aperture size. This aperture goes from zero to a Maximum Grip Aperture (MGA) then down to a Terminal Grip Aperture (TGA; the size when the hand stops moving) and then a Final Grip Aperture (FGA; when the object is grasped). The timing of the MGA can also be noted.</span></li><li><span style="font-family: arial;">The grip aperture orientation (Pitch, Roll, and Yaw angles).</span></li><li><span style="font-family: arial;">Total Movement Time (MT)</span></li><li><span style="font-family: arial;">Path Length (distance travelled, compared to the straight line distance; usually only interesting if there are obstacles and therefore path options). </span></li></ul></div><div><span style="font-family: arial;">Of course, the exact details vary from trial to trial. This is due to <a href="http://psychsciencenotes.blogspot.com/2023/05/motor-abundance-affordances-for.html" target="_blank">redundancy</a>, in part, but also, importantly, task details. Like all movements, reach-to-grasp movements are exquisitely scaled to the current demands, which means they are being created online as a function of information about those demands. </span><span style="font-family: arial;">These movements therefore have a distinct but task sensitive spatiotemporal structure, and we can use variation in that structure as a function of task changes to probe what changes matter and why. 
The next post will detail the task dynamical analysis that shapes those experiments.</span></div>Andrewhttp://www.blogger.com/profile/16732977871048876430noreply@blogger.com0tag:blogger.com,1999:blog-9192597712746432631.post-85499681269516116482023-05-04T10:24:00.002+01:002023-05-04T10:24:51.023+01:00Motor Abundance & the Affordances for Reaching-to-Grasp<p><span style="font-family: arial;">Movements are never the same twice, even when you are trying to do that same thing over and over. Variability is an inescapable fact of trying to organise and run a complex system such as a human body. But there is more than one source of variability in movement; there's noise, and then there's redundancy, and these are not the same thing. </span></p><p><span style="font-family: arial;">Our movement systems are <b>redundant</b>; specifically, they always have more degrees of freedom available than are ever required to perform a given task. This means that there is always more than one way to perform any given task, and this can range from slight variations to complete reorganisations. </span></p><p><span style="font-family: arial;">Redundancy is a feature, not a bug. It means that we can reliably achieve a task goal in the face of perturbations that range from trial-to-trial fluctuations in execution up to surprises like tripping or the sudden appearance of an obstacle. However, it poses two related control problems. First, a problem of <i>action selection</i>: given that there are many functional organisations of degrees of freedom that could solve that task, which do we choose, and why? Second, a problem of <i>action control</i>: once we have our degrees of freedom organised, we still have some left over that need to be actively controlled; how do we do this, and why do we control them the way we do?<span></span></span></p><a name='more'></a><p></p><p><span style="font-family: arial;">Part of the answer to these questions is the formation of a <b>synergy</b>. 
A synergy is a particular way of organising system degrees of freedom, such that they work together to compensate for variability. A simple example is pressing down with two fingers (2 degrees of freedom) to produce a single total force (e.g. 20N; a 1 degree of freedom task). The system is redundant (2DOF working to control 1DOF) and so there are many ways to achieve the task (one finger could do 5N, the other 15N; or they could both do 10N, and so on). Trying to maintain a single force output is tricky though, and there will be variability. A synergy is present if, when one degree of freedom slips up, the other one automatically changes its behaviour in a way that preserves the outcome goal. When a synergy is present, therefore, most of the variability will be along the line that describes where the successful outcomes live in the space of possible outcomes. In addition, no control needs to be exerted to control this variability away, because it's just the synergy doing what it was built to do (hence this particular subset of possible outcomes is often referred to as the <i>uncontrolled manifold</i>). </span></p><p><span style="font-family: arial;">So, which functional organisation of system degrees of freedom should we select? <i>The one that operates as the synergy that compensates for variability in a way that preserves the task goal.</i> If we select that synergy, it makes the control problem straightforward (or at least, manageably low-dimensional). </span></p><p><span style="font-family: arial;">There are a variety of analysis methods that work to decompose variability in the performance of a task to look for evidence of a synergy at work; if most of the variability is constrained in a way that reflects the task goal, then a synergy is active. 
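The two-finger force example above can be made concrete. Below is a minimal, illustrative variance decomposition in the spirit of UCM; the forces are made up, and the real method generalises this geometry via the Jacobian of the performance variable:

```python
import math

# Made-up finger forces (N) from five trials of the two-finger 20 N task.
trials = [(9.5, 10.4), (11.2, 8.9), (10.1, 9.8), (8.7, 11.4), (10.6, 9.3)]

n = len(trials)
mean_f1 = sum(f1 for f1, _ in trials) / n
mean_f2 = sum(f2 for _, f2 in trials) / n

# Unit vectors in (F1, F2) space: along the solution line F1 + F2 = const
# (the uncontrolled manifold) and orthogonal to it.
ucm = (1 / math.sqrt(2), -1 / math.sqrt(2))
orth = (1 / math.sqrt(2), 1 / math.sqrt(2))

v_ucm = v_orth = 0.0
for f1, f2 in trials:
    d1, d2 = f1 - mean_f1, f2 - mean_f2   # deviation from the mean
    v_ucm += (d1 * ucm[0] + d2 * ucm[1]) ** 2
    v_orth += (d1 * orth[0] + d2 * orth[1]) ** 2
v_ucm /= n    # variability that leaves total force unchanged ("good")
v_orth /= n   # variability that changes total force ("bad")

# A positive index means most variance lies along the manifold: a synergy.
synergy_index = (v_ucm - v_orth) / (v_ucm + v_orth)
print(f"V_ucm={v_ucm:.3f}  V_orth={v_orth:.3f}  index={synergy_index:.2f}")
```

In this toy data set the fingers trade off against each other (when one presses harder, the other eases up), so almost all the variance lies along the F1 + F2 = 20N line and the index is close to its maximum of 1.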
These methods include the <a href="http://psychsciencenotes.blogspot.co.uk/2016/09/uncontrolled-manifold-analysis.html" target="_blank">uncontrolled manifold analysis</a>, <a href="http://psychsciencenotes.blogspot.co.uk/2016/10/optimal-feedback-control-and-its.html" target="_blank">optimal feedback control</a>, <a href="http://psychsciencenotes.blogspot.co.uk/2016/10/nonlinear-covariation-analysis-muller.html" target="_blank">nonlinear covariation analysis</a> and <a href="http://psychsciencenotes.blogspot.com/2018/09/tolerance-noise-and-covariation-in.html" target="_blank">tolerance-noise-covariation</a> analysis. </span></p><p><span style="font-family: arial;">I have recently been working with the <a href="https://psyarxiv.com/u4j2g/" target="_blank">uncontrolled manifold method in the context of targeted throwing</a>, and using it to develop a detailed analysis of the task as well as trying to connect it to the perception of affordances. This work has clarified something very important for me: I really like the underlying motivation for the motor abundance methods, and I think they are all trying to meet the problem of action control head on. However, they are all, in general, completely underconstrained methods right now, because none of them come with a theory of what determines how the perception-action system identifies the synergy to enact. They can all spot one in the data if it's there, but they don't explain how it came to be. So, <a href="http://psychsciencenotes.blogspot.com/2011/05/perception-action-dynamical-systems.html" target="_blank">like dynamical systems</a>, what we have is a good toolkit in search of a good theory, and, as usual, the solution is the ecological approach; specifically, the perception of affordances. 
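The two-finger force example above can be sketched numerically. The following is a toy illustration of my own in Python (not code from any of the linked methods papers, and the variable names and simulated data are invented): simulate trials where the fingers trade force to keep the sum near 20N, then split the trial-by-trial variability into the component along the solution manifold (f1 + f2 = 20) and the component orthogonal to it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate 200 trials of a two-finger force task (goal: f1 + f2 = 20 N).
# The fingers compensate for each other (large shared wobble, small error
# in the total), which is what a synergy looks like in data.
n = 200
wobble = rng.normal(0, 2.0, n)       # force traded between the fingers
error = rng.normal(0, 0.3, n)        # leaks into the total force
f1 = 10 + wobble + error / 2
f2 = 10 - wobble + error / 2
forces = np.column_stack([f1, f2])

# Performance variable F = f1 + f2, so the solution manifold is the line
# f1 + f2 = 20: moving along direction (1, -1) changes the fingers but
# not the total; moving along (1, 1) changes the total.
ucm_dir = np.array([1, -1]) / np.sqrt(2)
ort_dir = np.array([1, 1]) / np.sqrt(2)

dev = forces - forces.mean(axis=0)   # trial-by-trial deviations
v_ucm = np.var(dev @ ucm_dir)        # variability that spares the goal
v_ort = np.var(dev @ ort_dir)        # variability that hurts the goal

print(v_ucm > v_ort)  # True: most variability lives on the manifold
```

If the compensation were removed (independent noise on each finger instead of a shared, traded wobble), the two variance components would come out roughly equal and there would be no evidence of a synergy.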
</span></p><h4 style="text-align: left;"><span style="font-family: arial;">Affordances Define the Task Goal</span></h4><div><span style="font-family: arial;">Recall that all these methods decompose variability with respect to a task goal. Evidence for a synergy being present comes when this process shows most of the variability has been constrained to live along the solution manifold for that task goal; the subspace in the space of all the things you could be doing right now that allows you to complete the task. </span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">These methods currently come with researcher degrees of freedom in two places. First, the researcher gets to identify the synergy they think is currently at play (and based on my reading, researchers rarely explicitly justify this beyond some simple biomechanical considerations, if they even go that far). In UCM, these are called the elemental variables. Second, the researcher gets to identify what it is they think this synergy is designed to control; what variable(s) the degrees of freedom are automatically compensating to maintain. In UCM, these are called the performance variables. The analysis methods provide no systematic constraints on these; in principle this is useful, because you can use UCM on a variety of candidate combinations of elemental and performance variables to empirically figure out what a given task requires. In practice, though, people tend to just run and report one or two, without ever justifying the selection beyond a simple task analysis. </span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">Ecologically, a task analysis is about identifying the affordances of a task. This is a usefully constrained way of analysing a task - it's about identifying the unavoidable dynamical facts of the task space, and seeing what they offer. 
</span></div><h4 style="text-align: left;"><span style="font-family: arial;">The Research Proposal</span></h4><div><span style="font-family: arial;"><a href="https://cognitioninaction.files.wordpress.com/2018/11/wilson-zhu-bingham-the-ecological-approach-to-throwing-for-distance-and-accuracy.pdf" target="_blank">In a chapter about throwing affordance research</a>, I lay out the plan: use a task dynamical affordance analysis to identify the task goal and thus meaningfully constrain the search for synergies in movement. <a href="https://psyarxiv.com/u4j2g/" target="_blank">In my recent first attempt</a>, I used throwing because I had the task dynamical analysis (<a href="https://cognitioninaction.files.wordpress.com/2016/07/wilson-et-al-in-press-using-task-dynamics-to-quantify-the-affordances-of-throwing-for-long-distance-and-accuracy.pdf" target="_blank">from this paper</a>) and in 2017 I had the chance to collect the data with colleagues at the Carnegie sports science group here. I like this paper, but it is not quite fully fit for purpose, and the task is, I think, on the ambitious side for a first swing. In addition, the data collection for this kind of study takes a lot of time and specialised equipment and lab space, which is not impossible but makes the study harder to run. </span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">So, coming off writing the throwing paper and now having a clear idea of what I need, I am developing a research programme that will use a simpler task I can more easily collect data for, one that will still allow me to do what I want, namely connect motor abundance measures to an affordance analysis and show they work well together. I also see <a href="https://www.tandfonline.com/doi/full/10.1080/10407410701557877" target="_blank">Fajen's affordance-based control</a> having a key role here.
</span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">I have selected reach-to-grasp as my task, because I have the kit available to run the studies and it also has an existing task dynamical affordance analysis in the literature. Bingham has been using this task for decades, and there are three papers in particular that develop the affordance analysis, develop the complementary effectivity analysis, contain empirical data from relevant experiments, and model the data from an affordance perspective (<a href="https://palab.sitehost.iu.edu/Resources/Publications/2010-2019/2011_Mon-Williams_&_Bingham_(2011).pdf" target="_blank">Mon-Williams & Bingham, 2011</a>; <a href="https://palab.sitehost.iu.edu/Resources/Publications/2010-2019/2014%20Bingham%20et%20al%20b.pdf" target="_blank">Bingham et al, 2014</a>; <a href="https://palab.sitehost.iu.edu/Resources/Publications/2010-2019/2019%20WangBingham_2019.pdf" target="_blank">Wang & Bingham, 2019</a>). My first job is to get fully up to speed on the full details of this programme, so I will blog the papers over my next few posts to develop the story. I have already encountered some other relevant papers. In particular, this one from <a href="https://journals.humankinetics.com/view/journals/mcj/3/3/article-p237.xml" target="_blank">Smeets & Brenner (1999</a>) that proposes a very different effectivity story. Bingham has data against it, but UCM etc are supposed to be able to test hypothesised control architectures, and having clear alternative hypotheses to examine will be a good part of this programme. </span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">So please keep an eye on the blog and my Twitter feed over the next few weeks. I'm going to be developing questions I'll need help answering, and I definitely need people to work with here, particularly people with expertise in the various motor abundance methods.
</span></div>Andrewhttp://www.blogger.com/profile/16732977871048876430noreply@blogger.com0tag:blogger.com,1999:blog-9192597712746432631.post-15968901306630286042023-03-07T17:40:00.001+00:002023-03-07T17:40:13.878+00:00What Science Has to Offer the World of Coaching<p><span style="font-family: arial;">It started, as these things always do, on Twitter. Someone posted a training drill they were excited about (kids kicking a ball against a wall) and I made a comment to suggest I didn't think this was the most football-useful activity I had ever seen. That's all I intended to say, but enough people (coaches, mainly) got mad at me that more was said, and it quickly devolved into the standard entrenched lines this argument lives between.</span></p><p><span style="font-family: arial;">There were two related themes to the replies. The first was essentially 'how dare you'; apparently questioning a coach's practice crosses a line (I admit I had been a little snarky, but only a little, I promise!). I find this response quite hard to understand: if you share your practice then it seems fair to expect not everyone will love it, and some may have sound reasons to think it's a bad idea, and I think that should be ok. </span></p><p><span style="font-family: arial;">The second theme was then 'what makes you think you can comment?'. This one mostly showed up in the form of demanding to know how much coaching I had done, a question designed to get me to admit 'none' and allow them to simply dismiss my view as that of an ill-informed outsider. This one annoyed me more than the rest, so I wanted to explain how and why I, an academic non-coach, get to be a part of the conversation around training activities. </span></p><p><span style="font-family: arial;">First, let's try to set a positive tone. I value the experience and understanding that coaches have by virtue of their practice. Of course this expertise places you in a key position in the conversation.
All I want to develop here is some reasons why other people, including people like me, also get to take part in that conversation, because we have relevant expertise and things to contribute. So let's talk about what science and scientists have to offer to the discussion about training environments.<span><br /><br /></span><span></span></span></p><a name='more'></a><p></p><h4 style="text-align: left;"><span style="font-family: arial;">1. What is my expertise?</span></h4><div><span style="font-family: arial;">I am not a coach, so what <i>do</i> I (and the many people like me) know? Well, personally, I have been doing lab research on <i>skilled action</i>, <i>learning</i>, and <i>transfer of learning</i> for 20 years or so (I got my PhD in 2005). I have spent those years studying the perception-action mechanisms of skilled action; I've developed methods and measures for probing those systems and uncovering what they are made of, and I have many peer-reviewed papers about those experiments and what they teach us about those mechanisms. In addition, I have spent a huge amount of time becoming an expert in an underlying <i>theory</i> of these perception-action mechanisms, specifically the ecological approach. I have written extensively about this theory, extending it to new topics, and applying it as rigorously as I can to the empirical study of these mechanisms. Academia is filled with people with similar journeys and expertise.</span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">Why is any of this relevant? Because sports coaching is an attempt to shape and guide the <i>learning</i> of physical skills (to make that process work as well as possible) and also, centrally, an attempt to shape the <i>transfer</i> of this learning, from the training environment to the game environment.
No one trains people by simply placing them in the full game environment, so coaching is about designing contexts in which people can focus on some <i>fundamental</i> aspect of the game, and a good training session is therefore one that promotes the transfer of that more restricted activity to the full scale game. The kind of work I do is about exactly this question, so it seems like there's an obvious contribution here; let's talk about some of that now. </span></div><h4 style="text-align: left;"><span style="font-family: arial;">2. The Question of Transfer</span></h4><div><span style="font-family: arial;">Everyone engaged with the question of learning and transfer (coaches and researchers alike) agrees on one thing: <b>learning is predicted to transfer to the extent that the learning context and the transfer context overlap in some meaningful way.</b> Remember that a training context is a reduced form of the game day context, so for learning in the former to transfer to the latter, the training context has to contain at least one of the <b>real parts</b> of the game day context; this is how they need to overlap. In order to design a sensible training context, you therefore need some reason to decompose the game day context into a particular set of parts - you need a <i>theory</i> of what parts make up the game day context. </span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">Coaches often rebel at the word theory, but like it or not they all have one about what counts as a real part of the game day context. If you prefer, another way to think about theory here is just to point out all coaches have a set of organising principles that make them think one way of doing things is better than another; they have reasons for training <i>this</i> rather than <i>that</i>. You are not, I am assuming, simply doing things at random.
Instead, you are targeting 'the fundamentals' - what your theory/organising principles tell you are the real parts of game day performance. </span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">The science of what skills are, how they are learned, and how they transfer is immediately relevant to the design of good sessions, because it is all trying to provide an evidence base about what a 'fundamental aspect of a task' actually is; about what counts as 'meaningful overlap'. So what does the science say?</span></div><h4 style="text-align: left;"><span style="font-family: arial;">3. The Bad News</span></h4><div><span style="font-family: arial;">The main thing science has learned is what we don't know. Scientists have been decomposing tasks into parts and investigating transfer since Thorndike & Woodworth (1901), and as a general rule the results have been uninspiring, to say the least. This was reviewed in a <a href="https://apps.dtic.mil/sti/pdfs/ADA167912.pdf" target="_blank">paper for the Army by Schmidt & Young (1986)</a> who found that a) it's very hard to identify what a task is and when it has changed enough to become a different task, and b) that most research shows that even tasks that seem very similar show little transfer between them, and performance doesn't even correlate much across them, suggesting they aren't, in fact, very similar. </span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">The key here is when I said 'tasks that <i>seem</i> very similar'. It turns out that since 1901 researchers have been conceptualising tasks according to common sense intuitions, rather than any key set of principles. Two tasks are considered similar because it seems to make common sense that they would both require the same process in order to work.
But time and again this basic idea has been shown not to work; since 1901 there has been a chain of 'boom-then-bust' cycles as people try to characterise tasks, test their particular common sense approach, find it fails, give up, only for it all to repeat 20 years later. A simple example is the idea that balancing on two different surfaces (say, a beam vs a slack line) surely both entail 'balance'; but the data very clearly says otherwise (e.g. <a href="https://link.springer.com/article/10.1007/s00221-017-5072-7">Serrien et al, 2017</a>). There is no transfer of learning between such tasks, which suggests that 'balance' might not be a single piece of the various skills; it might not be a 'fundamental'. Even simple changes (like increasing the RPM in a pursuit rotor task) can stop learning at one speed from transferring to the other; the transfer and savings may not be zero, but they are never as good as they 'should' be, given our intuitions about task similarity.</span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">This all matters because how coaches decompose tasks so as to target training is all informed by the same intuitive understanding of what those tasks are made of (even if a coach is reading the research, remember that research is based on the intuitions too). You see it in the way they talk; kicking a ball against a wall has value because 'kicking' is <i>obviously</i> a fundamental part of football; the 'obviously' is always there in the discussion. <b>But the point is this</b>: the data so far actually shows that we don't have a firm footing for that 'obviously' yet, not even firm enough to agree 'kicking is fundamental to football'.
</span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">This seems like a strike against science; remember, I was saying that one of our key contributions is to provide an evidence base about what tasks like full games are made of. A coach might therefore say 'I have been drilling fundamental skills for 20 years and my players have improved', or 'if kicking against a wall was good enough for Dennis Bergkamp it's good enough for me'. Maybe coaching practice is revealing the true story and science is just failing because we're locked away studying dinky lab tasks. </span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">The reply is actually simple: the science is still telling us something important, even though it hasn't solved the problem yet. Science has carefully tested these questions under more controlled circumstances and shown that the 'obvious fundamentals' identified by our intuitions aren't generally the right ones. That's an important thing to know, because it suggests that whatever it was that helped your players improve, <i>it probably wasn't the isolated drills targeting things you think are obviously fundamentals</i>. But sports coaching entails a lot of other activities besides isolated drills and coaches point to these as well. Scientifically, the failure to find those 'obvious fundamentals' yet just means it's time to look at all the other things you can do in a training session, and it's important to do so carefully so you don't invoke mere anecdotes riddled with survivor bias as evidence for your practice (science helps not just with the data but also brings rigour to the data collection process). </span></div><h4 style="text-align: left;"><span style="font-family: arial;">4.
The Better News</span></h4><div><span style="font-family: arial;">So far we've ended up in a fairly grumpy place that's going to bug a lot of people, so let me try to reformulate this a bit more optimistically:</span></div><div><ul style="text-align: left;"><li><span style="font-family: arial;">Most people who go through coaching come out better players</span></li><li><span style="font-family: arial;">A few of these people come out as great players</span></li></ul><div><span style="font-family: arial;">So what happened during that coaching has clearly done <i>something</i> useful. The question is what - which bits were the most useful bits?</span></div></div><div><ul style="text-align: left;"><li><span style="font-family: arial;">Science has been trying to figure this out for 120 years and has learned that most of the ideas about which bits are the important bits don't work, at least not well</span></li><li><span style="font-family: arial;">Most coaching is guided by the same ideas tested by science, either because the coach shares the intuitions or because they got them from the research, so whatever it is they are doing that's helping, the evidence suggests it's not what they think they're doing</span></li></ul><div><span style="font-family: arial;">Here's the optimistic bit: what we have here is not a disaster, but a <i>challenge</i>. Coaches don't like being challenged (or so I've learned on Twitter - this drove the 'how dare you' type replies), but scientists love it - a puzzle yet to be solved is what we live for. So this is one key thing I think science has to offer sports coaching: let us rise to the challenge for you! If you have something you think is 'obviously a fundamental', let us actually test it for you, see what happens! Work with us to provide us with the questions we can develop the evidence for and against, so that when you say 'well obviously X is fundamental' you will actually have clear reasons you can point to to back yourself up.
All we ask is that you listen to the answers, even (or perhaps, especially) when they don't back you up. </span></div></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">We also have a lot of new ideas about what counts as a fundamental. We have 120 years of data showing us <i>this</i> way and <i>that </i>way of carving tasks up doesn't work, and one response to these kinds of failures has been to work to find <i>other </i>ways of carving up tasks and testing those. Currently, the main other way is based in ecological psychology. The unit of analysis here is not the organism, but the organism-environment system, and so (we propose) whatever the fundamentals of a task are, they will live at that scale, and not at the scale of the individual player; not 'balance', but 'balancing-on-a-beam', that sort of thing. This implies that training should happen in representative organism-environment systems, hence the worries about kicking balls at walls, etc. This will either work or it won't, but I'll defend this sort of thing elsewhere; at this point I'm just trying to argue that science in general has a lot to contribute and part of that is candidate new ways to carve tasks up, to replace the ways we know don't work well.</span></div><h4 style="text-align: left;"><span style="font-family: arial;">Summary</span></h4><div><span style="font-family: arial;">We're all just trying to engage with the process of learning and transfer in good faith. Scientists, because we want to know how the damn thing actually works; coaches, because you want to do right by your players and guide them well. We are all in this together, and we all have a role to play, so let's start the discussion by accepting that. </span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">So what is the role of science? 
<b>To ask good questions carefully, and generate data that speaks to that actual question.</b> Doing this has led to a frustrating place: we know a lot about what doesn't work. This counts as useful information, though! Doing this has also led to a very different place: the whole point of ecological dynamics, for example, is that it summarises a more promising way to conceptualise what tasks are and how to train real parts of them. But however this discussion pans out, the point of this post is to advocate for science having a meaningful role to play in the discussion of what counts as good coaching. </span></div>Andrewhttp://www.blogger.com/profile/16732977871048876430noreply@blogger.com1tag:blogger.com,1999:blog-9192597712746432631.post-17861568876934238922023-01-06T12:49:00.004+00:002023-01-06T12:49:39.210+00:00Lecture 9: The Space Enigmas II: Kant, the Nature of Geometry, and the Geometry of Nature (Turvey, 2019, Lectures on Perception)<p><span style="font-family: arial;"><a href="https://psychsciencenotes.blogspot.com/2022/06/lecture-8-space-enigmas-i-berkeley.html" target="_blank">The first space enigma</a> was the fact that vision lives in the two dimensions of Flatland, but produces an experience of three dimensional Spaceland. You can't logic or experience your way from Flatland to Spaceland (as described in the <a href="https://en.wikipedia.org/wiki/Flatland" target="_blank">famous book</a>). Berkeley tried to solve this problem by providing a guide, in the form of the Spaceland-dwelling body, but this fell apart and the only remaining suggestion was an unrepayable loan of intelligence from God. </span></p><p><span style="font-family: arial;">Another way to consider this problem that leads to another proposal is what Turvey calls 'the outness problem'. This is the annoying fact that sensations on the retina are experienced as things out there, in the world. 
This makes space a necessary precursor to perceptual experience: however the outness problem is solved, a notion of space is required to drive the search for a solution. Kant is the main person who worked to establish how space might be baked into perception; 'Space, therefore, is not an object of perception...but something very different, namely, a mode of perceiving objects' (Turvey, 2019, pg 124). Spoiler alert: it doesn't work, for interesting reasons that feed into the development of the ecological analysis. </span></p><span><span style="font-family: arial;"><a name='more'></a></span></span><p><span style="font-family: arial;">How can we come to know what is true or false? There are two basic ways. An <i>analytic </i>claim is one that can be evaluated as <i>necessarily </i>true or false <i>a priori</i>, without any sort of empirical investigation, just the application of logic (e.g. 'an orange is a fruit'). A <i>synthetic </i>claim brings two or more pieces of information together and must be empirically evaluated to see if it is <i>contingently </i>true or false, so <i>a posteriori</i>. The notion of space as a necessary precursor to perception requires something different, specifically a <i>synthetic a priori</i> analysis, and this was where Kant came in. 
</span></p><p><span style="font-family: arial;">He begins with <a href="https://plato.stanford.edu/entries/kant-spacetime/#MetaExpo" target="_blank">four metaphysical claims </a>about the nature of space:</span></p><p></p><ol style="text-align: left;"><li><span style="font-family: arial;">Space is presupposed by sensory experience, not derived from it (spatial terms like 'next to' don't emerge from the experience of two things next to each other, that experience is the way it is because of a notion of space)</span></li><li><span style="font-family: arial;">Space is a necessary precondition to perception of objects (because you can imagine no objects in space but you cannot imagine no space)</span></li><li><span style="font-family: arial;">Space is a single thing; it's not made up of 'the space here + the space over there'</span></li><li><span style="font-family: arial;">Space is not a concept; we consider it as an infinite thing, and in order for it to be a concept we would have to be able to conceptualise that infinity. </span></li></ol><div><span style="font-family: arial;">He concludes with a <a href="https://plato.stanford.edu/entries/kant-spacetime/#TranExpoKantConc" target="_blank">transcendental argument</a> proposing a synthetic a priori claim: <i>perception of space is necessarily Euclidean</i>. The idea here is that geometry is the set of formal propositions about space, and mathematical propositions are always a priori type claims (they entail a necessity that isn't derived from experience). However, geometry is also synthetic: it brings together a set of information (axioms) in order to be a complete claim. At the time Kant wrote, geometry meant <a href="https://en.wikipedia.org/wiki/Euclidean_geometry" target="_blank">Euclidean geometry</a>, a set of five axioms from which you can derive everything you need to characterise space. 
</span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">The practical upshot is that Euclidean geometry is the kind of claim that can be the basis for a notion of space that fulfils the metaphysical requirements. Space is not a thing, it simply is Euclidean geometry, and given that this geometry had been the only coherent geometry for 2000 years, it seemed like we had a pure-reason solution for what space is and how it forms the basis of experience. </span></div><h3 style="text-align: left;"><span style="font-family: arial;">Non-Euclidean Geometry</span></h3><div><span style="font-family: arial;">This entire argument falls apart, however, because it turns out there is more than one internally coherent and complete geometry (set of axioms about space). The fifth axiom (the parallel postulate) asserts that, through a point not on a given line, there is exactly one line parallel to that line, but it turns out that if you relax this claim in different ways you get different geometries that are inconsistent with Euclidean geometry but are still internally coherent. Geometry is no longer an a priori fact, and which geometry best describes a given space is an empirical (a posteriori) question. </span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">As soon as geometry became an empirical game, the key purity of it fell away. The various axioms are about abstract, idealised forms such as points, lines and planes, none of which exist in the real world, so there were now limits on the axioms imposed by the details of the world and the resolution of the measuring system. Then people started trying to empirically establish how many dimensions space has and found that fractional dimensionality was not just possible but everywhere: fractal geometry had arrived.
Sizes, shapes, and distances were suddenly not necessarily definite things, but functions of how they were measured, and perception is an act of measurement using wide varieties of embodied systems. </span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">The practical upshot is that it no longer makes any sense to talk of 'space' as if it is a single, coherent thing, because there are <i>multiple </i>coherent ways of describing it and which one applies to a given question is an empirical question. Finding which one applies to the question of the perception of space is also an empirical question, and what space is is now a negotiable thing for an organism. </span></div><h3 style="text-align: left;"><span style="font-family: arial;">Reflection</span></h3><div><span style="font-family: arial;">One of the key themes emerging over these chapters is that the history of the study of perception is a story of attempting to derive experience from theories of physics and geometry that simply are not up to the task, and instead of noticing that the theories are inadequate, trying to find solutions that still live within the bounds of that theory (often because the theory was overly revered). The recent (last 150 years or so) history of science has involved a lot of theory proliferation: physics and geometry have both produced a large number of more complex theories that are more up to the challenge of explaining the facts at hand (by facing up to what the data was telling them instead of treating the inadequate theory as some magnificent truth). Psychology has access to these more interesting theories, but has yet to fully embrace any of them, and as a result is still flailing around making the same mistakes. The promise of the ecological approach is that it is going to be built on the basis of these more interesting theories, and that is why we're going to succeed.
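The non-Euclidean point above has a classic worked example: on a sphere, a triangle's angles sum to more than 180 degrees, by an amount fixed by its area (Girard's theorem). The numbers below are my own sketch of that standard result, not anything from Turvey's text.

```python
import math

# A triangle covering one octant of a sphere (the equator plus two
# meridians 90 degrees apart) has three right angles. Girard's theorem:
# the angle sum exceeds 180 degrees by the spherical excess, area / R^2.
R = 1.0
area = (4 * math.pi * R**2) / 8          # one eighth of the sphere
excess = area / R**2                     # pi/2 radians
angle_sum = 180 + math.degrees(excess)   # ~270 degrees, not Euclid's 180

print(round(angle_sum))  # 270
```

Whether your space behaves like the plane (sum = 180) or the sphere (sum > 180) cannot be settled by pure reason; you have to go and measure a triangle, which is exactly the sense in which geometry became empirical.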
</span></div><p></p>Andrewhttp://www.blogger.com/profile/16732977871048876430noreply@blogger.com0tag:blogger.com,1999:blog-9192597712746432631.post-68663856948633272112022-12-13T12:55:00.002+00:002022-12-13T12:55:16.210+00:00Trip Report from the Uncontrolled Manifold<p><span style="font-family: arial;">I've spent the past few months getting a new paper to the 'complete first draft' stage (<a href="https://docs.google.com/document/d/1iE8KoALnuhLmrl2j-E4YjXG-5X9aBrWy/edit?usp=sharing&ouid=116823180446205604211&rtpof=true&sd=true" target="_blank">you can find a copy here in the meantime</a>; it's still got some work to do though). It's about affordances, using targeted long-distance throwing as the task, and it's my first dip into the world of the uncontrolled manifold. I collected this data over five years ago, and it's been deeply satisfying to actually use it after all this time.</span></p><p><span style="font-family: arial;">Part of what's taken so long is that I've had to learn the details of the uncontrolled manifold analysis. I blogged some about it <a href="http://psychsciencenotes.blogspot.com/2016/09/uncontrolled-manifold-analysis.html" target="_blank">here</a> and <a href="http://psychsciencenotes.blogspot.com/2016/10/what-can-you-do-with-uncontrolled.html" target="_blank">here</a> but this was the year I finally had the time and data to actually get into the maths. I still really like it as an approach to analysing human movement, but learning the details and trying to figure out how to get affordances into it has raised a lot of interesting questions about how it gets used right now and what this all implies for how we think movement is controlled. 
I'm raising a bunch of these issues in the paper but I wanted to sketch some out here for comment.</span></p><p><span style="font-family: arial;">Broadly (and this shouldn't be a surprise to anyone really) I've realised that UCM is only a method, not a theory, and it's therefore not able to serve as a 'guide to discovery' about movement control. However, it's being used as if it can, and to be honest I was quite shocked at how carelessly it's being used in the literature. <span><br /></span></span></p><span><span style="font-family: arial;"><a name='more'></a></span></span><h3 style="text-align: left;"><span style="font-family: arial;">A Quick UCM Primer</span></h3><div><span style="font-family: arial;">Here's the basic idea. Movement systems are redundant: there are always more biomechanical degrees of freedom available than are required to solve any given movement task. This redundancy is a feature, not a bug: it means we always have a <i>set</i> of movement solutions available which provides critical flexibility in a busy world. This fact comes with an implication: some movement variability will keep you within the set of actual solutions (a subspace called <i>the uncontrolled manifold</i>), while some will take you out of that set (the subspace <i>orthogonal </i>to the uncontrolled manifold). UCM is a mathematical analysis of movement variability into these two components, V(UCM) and V(ORT). </span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">The analysis therefore needs two things fed into it: variable movement data from the movement system you are currently studying (called the <i>elemental variables</i>) and a set of variables representing what that movement system is trying to produce (called the <i>performance variables</i>). 
You decompose variability in the elemental variables with respect to the performance variables, and if most of the variability lives in the uncontrolled manifold subspace, this is evidence that the elemental variables are being controlled as a <i>synergy</i> whose goal is the production of that performance variable. </span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">This analysis therefore lets you test hypotheses about how a given action is controlled. You identify a candidate synergy of elemental variables, decompose the variability in the operation of that system with respect to a candidate performance variable, and if the <i>index of synergy</i> - (V(UCM)-V(ORT))/V(TOT) - is >0 you have evidence that you have found the right synergy and the right performance variables. </span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">I like this analysis because it was specifically designed to embrace the complexity of the operation of a redundant synergy; in other words, it is trying very hard to be a valid way of analysing the movement system and I fully approve. Here's my concern, however: the entire process is completely underconstrained unless this method is used to test hypotheses from theory, and to a disturbing extent it is not used this way. All of the various decision points are researcher degrees of freedom, and there are few if any constraints on how these decisions are made. Worse, the reporting in many papers never explicitly justifies any of the decisions. This is a recipe for a replication crisis, at the very least.</span></div><h3 style="text-align: left;"><span style="font-family: arial;">The Problem of Selecting Elemental Variables</span></h3><div><span style="font-family: arial;">UCM analysis embodies the hypothesis that redundant movement systems become controllable by being softly assembled into task-specific synergies (Latash, 2008). 
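To make the decomposition in the primer above concrete, here is a minimal sketch in Python. This is my own illustration, not code from any of the papers discussed: it assumes the mapping from elemental to performance variables has already been linearized as a Jacobian matrix at the mean configuration, and it uses the usual per-degree-of-freedom normalization of the two variance components.

```python
import numpy as np

def ucm_decomposition(theta, jacobian):
    """Partition across-trial variance of elemental variables into a
    component within the UCM (the null space of the Jacobian, where the
    performance variable is locally unchanged) and a component
    orthogonal to it.

    theta: (n_trials, n) elemental variables (e.g. joint angles).
    jacobian: (d, n) linearized map from elemental to performance variables.
    Returns per-degree-of-freedom V(UCM), V(ORT), and the index of synergy.
    """
    n_trials, n = theta.shape
    dev = theta - theta.mean(axis=0)   # deviations from the mean configuration

    # Orthonormal basis for the Jacobian's null space: the directions of
    # joint-space variability that leave the performance variable alone.
    _, s, vt = np.linalg.svd(jacobian)
    rank = int(np.sum(s > 1e-10))
    ucm_basis = vt[rank:].T            # shape (n, n - rank)

    proj_ucm = dev @ ucm_basis         # components within the manifold
    ort_sq = np.sum(dev ** 2) - np.sum(proj_ucm ** 2)  # the rest is orthogonal

    v_ucm = np.sum(proj_ucm ** 2) / ((n - rank) * n_trials)
    v_ort = ort_sq / (rank * n_trials)
    v_tot = np.sum(dev ** 2) / (n * n_trials)
    return v_ucm, v_ort, (v_ucm - v_ort) / v_tot
```

One thing the per-degree-of-freedom normalization implies (and it matters later, when thresholds come up): the index is bounded between -n/d(ORT) and +n/d(UCM), so for three joints and a one-dimensional performance variable it runs from -3 to +1.5, not from -1 to +1.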
A synergy is a particular kind of arrangement of dynamical properties (elemental variables) in which they are coupled together so they can be driven as a single device. The goal of skill acquisition is to learn how to softly assemble a synergy whose behaviour lives within the uncontrolled manifold for a given task. You have evidence that such a synergy is operating if most of the variability in the operation of that synergy is V(UCM) (see the index of synergy). </span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">For my throwing study, we measured the joint angles of the shoulder-elbow-wrist system, with the angles expressed in an internal frame of reference. We did this basically because that's what everyone else does, and because the biomechanics of the throwing action provides some useful constraints here, but this decision is not fully locked down and comes with a lot of implications.</span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">First, the frame of reference. <a href="https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1000751" target="_blank">Sternad, Park, Müller & Hogan (2010)</a> used UCM to analyse joint angle data from the same set of elemental variables expressed in either an internal (body centred) or external frame of reference. The resulting decompositions were completely different! They used this result to advocate for their Tolerance-Noise-Covariation (TNC) analysis, which constrains the analysis using simulations of task dynamics (I like this analysis too and I am working to implement it for my throwing task; more on that later). </span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">Second, which joint angles to use? 
The goal is to identify the synergy you think the movement system has softly assembled itself into in order to solve the task demands (<a href="https://palab.sitehost.iu.edu/Resources/Publications/1980-1989/Bingham1988.pdf" target="_blank">Bingham, 1988</a>), which is why I have called these <a href="http://psychsciencenotes.blogspot.co.uk/2011/06/task-specific-devices-and-perceptual.html" target="_blank">task-specific devices</a> in the past. Biomechanists studying throwing tend to focus on the shoulder-elbow-wrist system because it is patently the case that it's required for a throw; but sometimes they include the fingers, and sometimes they include lower limb joint angles too because arms don't just hang in space. No one ever justifies their selection. Of course, you could use UCM analysis to empirically identify the synergy by defining several different ones and seeing which one works best, although this will depend on having the right performance variables and that selection process is less constrained than this one. </span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">To summarise: biomechanics helps here, but the selection of elemental variables (which at this point I will note are the same as <i>effectivities</i>) is not sufficiently constrained by the UCM method and this shows up in practice. </span></div><h3 style="text-align: left;"><span style="font-family: arial;">The Problem of Selecting Performance Variables</span></h3><div><span style="font-family: arial;">How appropriate a synergy of elemental variables is depends on whether its operation produces performance variables that solve the task. Selecting good performance variables means you need to know what the task is, and what specifically the action system needs to be doing in order to solve it. 
Defining your task is, surprisingly, a very difficult thing to do with no clear solution (all my work is about developing the ecological task-dynamical analysis as a solution; I have a detailed review of the problem and solution in <a href="https://link.springer.com/content/pdf/10.1007/s00426-020-01308-1.pdf" target="_blank">Leach, Kolokotroni & Leach, 2020</a>). </span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">Given this basic problem with task definitions, it's not actually surprising to see huge variability in which performance variables people choose. Throwing work has used shoulder position, hand position, hand orientation, finger position, finger orientation, and hand velocity; different papers use different ones, and measure them in both internal and external frames of reference (note: this leads to a wide variety of results, none of which build on or develop each other). What worries me, however, is that the selection is rarely justified at all; at best, there will be a sentence along the lines of 'well it seems like throwing probably needs this variable to go well', which is precisely the sort of intuition-based task analysis that led to the problem analysed in Leach et al. </span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">Ecologically, a task is defined by <i>the perception of affordances</i>. The point of my paper is to see if my task-dynamical affordance analysis from <a href="https://cognitioninaction.files.wordpress.com/2016/07/wilson-et-al-in-press-using-task-dynamics-to-quantify-the-affordances-of-throwing-for-long-distance-and-accuracy.pdf" target="_blank">Wilson, Weightman, Bingham & Zhu (2016)</a> could help constrain this process. 
That analysis defined the affordance of the target to be hit as a set of release parameters (release angle, speed, and height, all defined in an external frame of reference), and I used the mean release parameters for a given target distance as three of the performance variables we tested; this was my first attempt to use task-dynamical affordances as a constraint on UCM analysis. Spoiler: it sort of works (although it is, of course, complicated)! We compared these to hand position, orientation, and velocity defined in either an internal or external frame of reference. </span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">Here's a bunch of questions I came up with as I worked through these analyses:</span></div><div><ol style="text-align: left;"><li><span style="font-family: arial;">We had nine performance variables that were intuitively plausible options. The UCM analysis results helped us rule some in and out, but it's wild that there are so many options and that we could have simply written a paper where we tested and reported a couple we chose for no clear reason. This researcher degree of freedom issue in practice and reporting is a recipe for a replication crisis. </span></li><li><span style="font-family: arial;">It was almost impossible to identify what counted as a meaningful index of synergy. Across the trial, almost all the variables produced indexes that were >0, but some were clearly higher than others and some oscillated close to 0. In order to say I have evidence that my synergy of elemental variables is organised so as to produce that performance variable, is >0 enough or do I need some notion of 'significantly >0'? 
I definitely think the latter, and I imposed a threshold of 0.5 in my analysis, but I could find no way to justify that decision beyond it seeming about right based on the graphs (<a href="https://www.tandfonline.com/doi/full/10.1080/02640414.2022.2150419" target="_blank">Singh et al, 2022</a>, set a threshold of 0.15 on the z-transformed index, which made sense but again had no justification). No one else even mentioned it.</span></li><li><span style="font-family: arial;">Relatedly, most papers just look to see if the index of synergy varies either between conditions or between performance variables. The former assumes you have the right elemental and performance variables and is looking to see how the task manipulation affects the synergy; the latter assumes you have the right elemental variables and is testing to see which performance variable it seems to be working to produce. These are sensible approaches, but it reveals another thing I learned: the same synergy can produce indexes of synergy >0 for multiple performance variables! Some will be higher than others, but again, without a notion of 'significantly >0' interpreting this gets complicated, plus what does it mean for a given synergy to be working to control multiple performance variables? </span></li></ol><div><span style="font-family: arial;">I also identified another issue working with these variables that is less about analysis degrees of freedom and more about unspoken assumptions about the movement control process. </span></div></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">We performed the UCM analysis at each timestep of the normalised time series of joint angle data to produce a time series of V(UCM), V(ORT) and therefore of the index of synergy. For the hand position, orientation, and velocity performance variables we fed the UCM analysis the values of both the elemental and performance variables at time t. 
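The per-timestep approach just described can be sketched as a loop over normalized time. This is a hypothetical illustration with my own variable names, assuming the joint-angle data is a time × trials × joints array and a linearized Jacobian is supplied for each timestep; the hypothesis about timing lives entirely in which Jacobian you feed in at each step (built from performance values at time t, or from end-of-movement values).

```python
import numpy as np

def delta_v_timeseries(thetas, jacobians):
    """Index of synergy at each timestep of a normalised trial.

    thetas: (T, n_trials, n) elemental variables across normalized time.
    jacobians: (T, d, n) linearized maps from elemental to performance
    variables, one per timestep.
    """
    T, n_trials, n = thetas.shape
    dv = np.empty(T)
    for t in range(T):
        dev = thetas[t] - thetas[t].mean(axis=0)   # across-trial deviations
        _, s, vt = np.linalg.svd(jacobians[t])
        rank = int(np.sum(s > 1e-10))
        basis = vt[rank:].T                        # null-space (UCM) directions
        proj = dev @ basis                         # within-manifold components
        v_ucm = np.sum(proj ** 2) / ((n - rank) * n_trials)
        v_ort = (np.sum(dev ** 2) - np.sum(proj ** 2)) / (rank * n_trials)
        v_tot = np.sum(dev ** 2) / (n * n_trials)
        dv[t] = (v_ucm - v_ort) / v_tot
    return dv
```

Nothing in the machinery itself distinguishes the two timing hypotheses; that choice is made entirely by the analyst when constructing `jacobians`, which is part of why the assumption usually goes unstated.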
For the release parameter variables, we had to feed the UCM analysis the values of the elemental variables at time t but the values of the performance variables at time <i>end</i>. This is ok at this stage, but it made me realise that the way most people use UCM (the first way) embodies the hypothesis that variability in the elemental variables at time t is organised with respect to producing the values of the performance variables at time t. This may be true, but to be honest I don't think it is and I don't think it's very compatible with the equilibrium point hypothesis that also features prominently in this part of the literature. That hypothesis explicitly states that we aren't exerting control at all times in the movement, but that instead we set a new equilibrium point for the muscles around a joint and let the local limb dynamics take care of moving there. This is why my affordance-based analysis using the release parameters makes good sense to me; the affordance defines what the synergy must produce and a good synergy is one that produces those via equilibrium point control. I need to think this one through more, but I think it's important. </span></div><h3 style="text-align: left;"><span style="font-family: arial;">Summary</span></h3><div><span style="font-family: arial;">I like UCM analysis, and I think that all these researcher degrees of freedom are actually features, not bugs - it means we can use UCM to test all kinds of hypotheses about which elemental and performance variables matter. But what worries me is that it doesn't seem to get used this way. Sports science in particular seems to spend most of its time just collecting and analysing data without thinking too hard about why it chose that data to collect and analyse. This (long!) post contains many questions and concerns I have with how the method is used. 
</span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">Of course, I have a solution, and it's the ecological approach as a source of hypotheses to test. This paper is my first swing at connecting affordances and UCM analysis, but there is much more I need to do. That said, I think applying UCM explicitly will force us to ask (and hopefully answer) important questions and engage with the best way to formally identify affordances and to get serious about effectivities, because the former defines performance variables and the latter defines elemental variables; making this link will unleash the potential of UCM analysis on our field. Once I get TNC and all the others working too, then we'll <i>really</i> see some progress!</span></div>Andrewhttp://www.blogger.com/profile/16732977871048876430noreply@blogger.com0tag:blogger.com,1999:blog-9192597712746432631.post-5073588530807480982022-09-27T11:16:00.001+01:002022-09-27T11:16:31.006+01:00Are Illusions Even a Thing?<p><span style="font-family: arial;">Traditional vision science is very excited about illusions. These are cases when perception seems to break down; there is a mismatch between what is out there and what we experience, and traditional approaches consider these breakdowns as clues to how vision has to work, given what it is working with. </span></p><p><span style="font-family: arial;">Ecological psychologists don’t like illusions. Typically, they occur when information is either made ambiguous or faked, and in general we think these are the wrong situations to study perception in. We sometimes engage with the literature on these effects, but usually to show how the trick is the result of not thinking ecologically. </span></p><p><span style="font-family: arial;"><a href="https://www.frontiersin.org/articles/10.3389/fnhum.2022.957740/full" target="_blank">Rogers (2022)</a> has taken this basic analysis and gone one interesting step further. 
He argues that the notion of ‘visual illusion’ is simply not a clear category; it’s not a useful way to describe any of the effects people study. There is, he claims, no sufficient definition of what an illusion is that works, and what we call illusions are just either tricks (as above) or inevitable consequences of how the visual system works. </span></p><p><span style="font-family: arial;">I am broadly on board with this additional step, and it’s made me think hard about what illusions are and how best to respond when people use them against direct perception. <span></span></span></p><a name='more'></a><p></p><p><span style="font-family: arial;">Rogers first argues that no-one actually has a good account of what an illusion even is. The standard definition is Gregory’s: an illusion occurs when there is a mismatch between what is really out there, and what we experience. An illusion is an <i>error</i>. (Rogers also spends some time looking at a modern analysis by Todorovic (2020) which has, according to Rogers, basically the same idea at the core and therefore the same limitations.) </span></p><p><span style="font-family: arial;">The problem here is that there are many ways to describe ‘what is really out there’, and a long-standing ecological objection to illusions is that the mismatch is an artefact of the researcher having the wrong description. Gibson spent the first 4 chapters of the 1979 book describing an ecological reality for a reason! So the mismatch is often just an analysis error, not a perceptual one. A corollary to this is that illusion results are often telling us about our understanding of the relevant physics, and not about perception. </span></p><p><span style="font-family: arial;">Rogers then describes three ways to classify candidate illusion effects that he suggests cover the space without ever needing the notion of error. 
I think an interesting step going forward will be to try and apply these three categories to any and all illusion discussions from here on out, to see if they hold up. </span></p><p><span style="font-family: arial;">The first are <i>effects that would never fit any definition of illusion</i>. He discusses the Ames Room, where a non-rectangular room is made to appear as if it were rectangular when viewed from a particular place. Rogers identifies that it’s simply unfair to consider this an error by the visual system, because it is being meticulously fooled – it is being presented with what Rogers calls a facsimile, something designed to look a certain way. It’s not an error for the visual system to perceive a good facsimile as the other thing; how could it possibly do anything else? If it’s not an error, then it cannot sensibly be called an illusion. (<a href="http://psychsciencenotes.blogspot.com/2010/10/runeson-ames-room-and-irrelevance-of.html" target="_blank">I blogged Runeson's excellent analysis of the Ames Room</a>, which Rogers is endorsing here). </span></p><p><span style="font-family: arial;">The second are <i>effects due to how the system works</i>. His first simple example is thresholds. If there ‘really is’ a dim light present but the visual system doesn’t register it because it is below threshold, it’s not an error; it’s simply a result of the way the system works. In the same way (my example), it’s not an error when a microphone fails to respond to light but does to sound, because that’s how microphones work. Colour metamers are another example for Rogers; we don’t consider these errors but simply what happens when a trichromatic system is operating normally. These are simply how the visual system measures the world, and calling these effects errors/illusions is just to not understand this fact. 
</span></p><div><span style="font-family: arial;">The third are <i>effects of using impoverished situations.</i> Rogers' point is simply that it <i>has</i> to be true that the system will fail in some way when you remove information the system requires to work properly. Calling this an error is unfair. Rogers doesn't quite spell this out clearly, but I think this is a combination of the above two situations: the impoverished setting can't actually be seen any other way, because of how the perceptual system works. </span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">There is an additional concern here, though, that Rogers focuses on: not only is it unfair to consider these errors/illusions, they aren't good experiments. The logic of these studies is usually that systematically removing potential sources of information (typically framed as <i>cues</i>) tests whether those were, in fact, important. But this subtractive approach to perception assumes that the system is linear: that the full system and the system missing one cue only differ in that one cue. This isn't the case; Rogers reviews some examples here, but my favourite is always <a href="https://palab.sitehost.iu.edu/Resources/Publications/2000-2009/Mon-Williams_Bingham2008.pdf" target="_blank">Mon-Williams & Bingham (2008)</a>. They tested distance perception under full cue and restricted cue conditions, and basically showed that a) people will use the non-specifying height-in-the-visual-field when that's all there is, but that b) they show no signs of using it when better information is present. Studying perception in a restricted-cue task tells you how perception works in that task, but not how it works in the typical full-cue setting. 
</span></div><div><span style="font-family: arial;"><br /></span></div><h3 style="text-align: left;"><span style="font-family: arial;">Conclusions</span></h3><div><span style="font-family: arial;">I am basically on board with this analysis, because it is, at heart, the ecological analysis of illusions (Rogers cites Runeson (1988), one of my all-time favourites, and some Gibson), but I did find the framing of the paper to be a bit Rogers-centric, as if he had come up with all this. I'm not mad, just a little disappointed :) The novel part is the extra step, saying not only are illusions not methodologically useful or the result of an uninteresting trick, but that given all this the category of illusion-as-error is probably not a real thing. That's quite a bold move and I like it. </span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">The paper only discusses visual illusions, and so on Twitter <a href="https://twitter.com/DrGBuckingham" target="_blank">Gavin Buckingham</a> raised his standard objection of 'what about non-visual illusions like the size-weight illusion, etc?'. I'm inclined to think there's no reason Rogers' analysis won't apply here (he uses examples but only to illustrate the deeper points and those will stand, I think). Worth working through, however! As I note above, if this paper is to have the impact it should have, any discussion of illusions should get filtered through the three classifications above, to see how well they hold up. </span></div><div><span style="font-family: arial;"><br /></span></div><h3 style="text-align: left;"><span style="font-family: arial;">References</span></h3><div><a href="https://palab.sitehost.iu.edu/Resources/Publications/2000-2009/Mon-Williams_Bingham2008.pdf" target="_blank"><span style="font-family: arial;">Mon-Williams, M. & Bingham, G.P. (2008). 
Ontological issues in distance perception: Cue use under full cue conditions cannot be inferred from use under controlled conditions. <i>Perception & Psychophysics, 70(3)</i>, 551-561.</span></a></div><div><span style="font-family: arial;"><br /></span></div><div><a href="https://www.frontiersin.org/articles/10.3389/fnhum.2022.957740/full" target="_blank"><span style="font-family: arial;">Rogers, B. (2022). When is an illusion not an illusion? An alternative view of the illusion concept. <i>Frontiers in Human Neuroscience, 16</i>, 957740. doi: 10.3389/fnhum.2022.957740</span></a></div><div><span style="font-family: arial;"><br /></span></div><div><div><a href="https://journals.sagepub.com/doi/10.1177/0301006620962279" target="_blank"><span style="font-family: arial;">Todorovic, D. (2020). What are visual illusions? <i>Perception, 49</i>, 1128–1199. doi: 10.1177/0301006620962279</span></a></div></div><div><br /></div>Andrewhttp://www.blogger.com/profile/16732977871048876430noreply@blogger.com2tag:blogger.com,1999:blog-9192597712746432631.post-9869251812447287212022-06-30T12:25:00.001+01:002022-06-30T12:25:31.211+01:00Lecture 8: The Space Enigmas I: Berkeley (Turvey, 2019, Lectures on Perception)<p><span style="font-family: arial;">One of the big problems that emerges from all <a href="http://psychsciencenotes.blogspot.com/2020/04/reading-group-turvey-2019-lectures-on.html" target="_blank">the preceding discussions of perception</a> is how we are able to perceive <i>space</i>. Space has been considered as a mathematical concept (in terms of Euclidean geometry), as a psychological concept (a construction of the mind), but never really as a biological, ecological concept. 
This first chapter about space perception is focused on one mathematical conception, some of its implications, and one specific attempt to deal with those implications (<a href="https://www.earlymoderntexts.com/assets/pdfs/berkeley1709.pdf" target="_blank">Berkeley's <i>New Theory of Vision</i></a>). </span></p><span><span style="font-family: arial;"><a name='more'></a></span></span><p><span style="font-family: arial;">Space perception has to work with whatever space is, so a theory of space perception has to work with whatever your theory of space is. Mathematically, space has been considered as Euclidean - continuous, isotropic (no preferred direction), homogeneous, and infinite. (ADW note: there are lots of kinds of geometrical systems, and they vary in how many of these kinds of symmetries they require. Affine geometries give up isotropism, for example, while topology relaxes all symmetries. So a mathematical conception of space for perception to work with does not have to be Euclidean, it just mostly has been considered as such. Identifying the correct geometry for perception is actually an empirical question, e.g. <a href="https://www.asc.ohio-state.edu/todd.44/group/Geometry%20of%20Visual%20Space/Todd,%20Oomes,%20Koenderink%20&%20Kappers,%202001.pdf" target="_blank">Todd et al, 2001</a>, and there are plenty of options that may be weird enough.)</span></p><p><span style="font-family: arial;">The Euclidean conception of space poses some problems for visual space perception to solve. In essence, vision inhabits Flatland, and faces all the challenges of coming to grips with Spaceland detailed in <a href="https://en.wikipedia.org/wiki/Flatland" target="_blank">Abbott's famous book</a>. In the first story, a 2D square living in Flatland tries but fails to convince a 1D inhabitant of Lineland that the 2nd dimension exists; in the next two stories, 
a visitor from 3D Spaceland tries to convince the square that the 3rd dimension exists, which only happens with the 'miracle' of the square being lifted into Spaceland. This convinces the square, who goes on to reason that there may be a 4th dimension; he cannot convince the sphere from Spaceland that this reasoning works, however. </span></p><p><span style="font-family: arial;">The relevant moral of the story is that there is nothing essential (inherent) to Flatland that can provide access to Spaceland, so you cannot get from one to the other using logic or analysis on Flatland data. The implication is that, in order to perceive space, Flatland vision needs help from a Spaceland inhabitant, which might come in one of two guises:</span></p><p></p><ol style="text-align: left;"><li><span style="font-family: arial;">You could enrich 2D vision with a source of knowledge about the rules of Spaceland geometry. This is the Kantian-style solution, but is an example of an <i><a href="http://psychsciencenotes.blogspot.com/2020/10/lecture-6-cartesian-program-turvey-2019.html" target="_blank">unrepayable loan of intelligence</a></i></span></li><li><span style="font-family: arial;">You could ground 2D vision with a perceptual system that necessarily inhabits 3D Spaceland, such as <i>touch</i>: this is Berkeley's solution (he was worried about the loan), and it is what Turvey spends the rest of the Lecture on. </span></li></ol><div><span style="font-family: arial;">(A third possible solution (binocular vision) gets ruled out by Turvey, who points out that a) most animals don't have overlapping fields of view and b) even amongst those that do, like humans, monocular vision persists as being successfully spatial.)</span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">Berkeley actually gets rid of a lot of 2D cues; for him, what strikes the retina are sets of points of light with varying origins from a 3D environment. 
Vision consists of a bunch of contingent facts about these bundles; this set is not a rich set and certainly cannot support getting to Spaceland via computation on these sensations. In fact, for Berkeley, these facts operate exactly like a language: a series of arbitrary signs and symbols that don't work by resembling what they represent, but by virtue of being structured by a reasonably coherent set of reliable, regular relations. Just like language (or any symbol system) vision still needs to be grounded in order to be about anything. For Berkeley, that grounding comes from being associated with the sense of touch. </span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">Touch has long been considered by mechanist-based science to be a privileged system: it is about mechanical contact which can support local causation, so unlike vision it could operate as a direct perception system, no mediation required. Empirical work Turvey reviews does support the basic idea that the activity of the body (e.g. locomotion) can serve as an invariant source of information about distance and speed despite variable activity - blind walking & running from A to B can both support successful blind walking from B to A, for example. </span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">This seems to support the Berkeley idea, but Turvey very quickly shuts this down. While bodily movement does seem to serve as an invariant source of spatial information, it does so despite variable activity of the relevant sensors - this is the problem with vision that touch/haptics is supposed to be solving! Turvey notes here that Berkeley gives the final word to God, which is a hint he didn't really have a solid account. 
</span></div><div><span style="font-family: arial;"><br /></span></div><h3 style="text-align: left;"><span style="font-family: arial;"><span style="font-size: 18.72px;">Reflections</span></span></h3><div><span style="font-family: arial;">One thing that occurred to me is that this last point is quite important: it suggests that touch is not, in fact, a privileged, 'incorrigible' perception system because it has all the same issues as vision. So the common intuition that mechanical contact with things is somehow special and crucial to grounding perception is wrong. Berkeley was right, though, that perception needs to sit on a direct system at some point, but he didn't have the conceptual tools necessary to make it work, even for touch. This is going to be Gibson's critical contribution: providing a theory of specifying information that will enable direct perception to be a coherent option and thus make perception itself possible.</span></div><p></p>Andrewhttp://www.blogger.com/profile/16732977871048876430noreply@blogger.com0tag:blogger.com,1999:blog-9192597712746432631.post-74386874808426373262021-12-07T14:38:00.001+00:002021-12-07T14:38:07.178+00:00The medium for direct perception (Notes on Van Dijk & Kiverstein, 2020)<span style="font-family: arial;">The ecological approach has hit a point in its history where it has become interested in expanding its scope, to go beyond the real-time coordination and control of action. There are many challenges from non-ecological cognitive science about how to tackle representation-hungry problems, and how to conceptualise things like language, social behaviour, and what the brain is up to. I am all on board with this move - it was important we waited till we were ready, but since Gibson died in 1979, the empirical programme on the basics has matured into a solid foundation and we have developed or adopted a lot of things that will come in useful. 
</span><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">However, if we are going to do it, I want us to do it with rigour and care and with reference to all our hard-won successes. My current view is that our best path lies in looking at the ways we are able to use ecological information, and grounding our explanations and hypotheses at this scale. Sabrina first developed this idea in a paper about how to think about what language is (<a href="https://cognitioninaction.files.wordpress.com/2015/08/golonka-2015-laws-and-conventions-in-language-related-behaviors.pdf" target="_blank">Golonka, 2015</a>). The big take-home from that paper is the analytic distinction between <i>law-based</i> use and <i>convention-based</i> use of information, and the first draft of the consequences of this distinction. We built on this when we started thinking about brains (<a href="https://cognitioninaction.files.wordpress.com/2019/10/7.-golonka-wilson-ecological-representations.pdf" target="_blank">Golonka & Wilson, 2019</a>), and I'm currently thinking about the next step along this path. </span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">I'm pretty sure that a big chunk of the work I need to do is explicitly connecting this distinction up to work on the skilled intentionality framework, and the notion of our variable levels of grip on the field of affordances. This work is wrong about affordances (they aren't relations) but other than that, there's a ton of really great work about how intentionality isn't an all-or-nothing thing, and a lot of really useful vocabulary and framing development that I think will be useful for articulating these ideas. I don't like re-inventing wheels, so I'm skilling up on this literature as I develop ideas for a paper. 
</span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">This post is about a recent paper (<a href="https://link.springer.com/article/10.1007/s11229-020-02578-3" target="_blank">Van Dijk & Kiverstein, 2020</a>) that is explicitly about developing a usage-based notion of information. To unbury the lede, I think this is a robust piece of work with solid internal logic, but, like all this enactivist-style work, it ends up in a place that cannot support a how-actually explanation of behaviour - this particular usage-based theory of information and the things that come with it aren't the framework that will let the ecological approach expand its scope. This is ok, at one level, because I don't think mechanisms are the goal of enactivist analyses. But it's a worry at another level, because I want an ecological theory of direct perception that can actually explain behaviours and this isn't going to cut it. <span><br /></span></span></div><span style="font-family: arial;"><a name='more'></a></span><div><span style="font-family: arial;">The goal of this paper is to develop a usage-based account of ecological information that fits within the radical empiricist tradition (which Gibson was firmly a part of). The problem they are trying to solve is the nature of the 'medium' for direct perception, which they argue Gibson developed in two ways that are incompatible with each other, one of which is also incompatible with radical empiricism. The overall work of the paper is to develop a single view of the medium that can then be applied to explain how all kinds of patterns can come to serve as information when used. </span></div><h3 style="text-align: left;"><span style="font-family: arial;">Two Views of the Medium for Direct Perception</span></h3><div><span style="font-family: arial;">Gibson developed his idea of the medium from Heider's account, which distinguished between 'thing and medium'. 
A thing is something that is relatively solid, resists being deformed, and persists in its form over time. A medium is something that is relatively fluid, and is capable of being shaped and formed by what it interacts with. </span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">Gibson's first sense of the medium is as a medium-for-perception. Take the optic array. This can be a medium for perception because the forms that it takes can be lawfully established by the things it interacts with, and because it is ambient to both the organism and its environment. The form of the optic array changes over time in a way that the form of things cannot (optic <i>flow</i>), and those changes are available to be detected. This sense of the medium, then, is as a 'ready made' structure: the forms of the various perceptual arrays are created ahead of being detected, and are there to be discovered. </span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">Gibson's second sense of the medium is as a medium-for-action. Some parts of the physical world resist actions taking place in or through them; they are substances ('things') or surfaces (where substances and media meet). Action is possible, however, because there are media that do not completely resist organisms, things like air and water. Actions can flow through a medium the way information flows through a medium. How these work as media is not ready-made, however; they <i>become</i> media in negotiation with an organism's actions. For example, water can serve as a substance for something that cannot swim, but can come to serve as a medium if that organism learns to swim. The medium now is not ready-made, but defined relative to the activity and the organism. </span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">Van Dijk & Kiverstein make two moves at this point. 
First, they note that these two senses are mutually incompatible, and that only the usage-based sense of the medium aligns with radical empiricism. Second, they go on to develop a usage-based account of the medium for perception, in the form of a usage-based account of information, in order to get all senses of the medium under one, appropriate notion; this is the rest of the paper. </span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">(I will note at this point that I'm not convinced there is a problem with these two senses; it may simply be the case that the medium-for-information and the medium-for-action work differently, and that this will be ok with clear accounting of how the word is being used at any given point. However, making perception and action effectively the same kinds of things that actually exist as perception-action is a solid ecological move, so this may end up being important). </span></div><h3 style="text-align: left;"><span style="font-family: arial;">Use Is What Makes Structure Information</span></h3><div><span style="font-family: arial;">The current ecological approach to information is that it consists of higher-order patterns in ambient energy arrays, that these patterns are the law-based projection of world dynamics into those energy arrays, and that the law-based projection allows the patterns to be specific (map 1:1) to those dynamics. Specification, considered this way, makes it so that the pattern is about the dynamics; it is information, and this comes ready-made for the organism. </span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">Van Dijk & Kiverstein note that this is the problematic sense of the medium. They are happy that there are indeed higher-order patterns in ambient arrays, but they deny that these can be considered as <i>information </i>until they are used by an organism. 
They will make this distinction throughout the paper by talking about patterns vs information-relations. </span></div><h3 style="text-align: left;"><span style="font-family: arial;">Specification as a Process</span></h3><div><span style="font-family: arial;">They now need to deal with a consequence of this move. In the standard approach, what makes it possible to say a pattern is <i>about </i>something is <i>specification</i>. The standard, Turvey notion is that specification lives between the surface and the pattern in the medium and is underwritten by the law-based projection of the former into the latter. This makes patterns in the medium-for-perception <i>information </i>about things ahead of use, and is a key part of the Turvey approach to making direct perception possible; specification is required for information to allow but not mediate perception of the world. </span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">Van Dijk & Kiverstein want to jettison all this, so they need an equivalent way to get aboutness into their story, without losing the directness of perception and while maintaining their radical empiricist understanding of perception as a process. Their claim is that specification, the thing that makes something information about something, is underwritten not by a law but by the activity of the organism. Specification is a process, and as such is not all-or-nothing; there can be varying amounts of specification and it gets better as the activity of the organism becomes more and more successfully coordinated with the demands of the environment. 
They claim that this notion supports everything Turvey-specification supports, gets it to align with radical empiricism, and naturally allows for a discussion of social experience.</span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">(This is a big, big claim, which they defend in the next section. I will note at this point that I think calling all this 'specification' is just an error; this just isn't what that word means, and the resulting work is pointed in the wrong direction.)</span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">A couple of consequences: first, there is no ontological distinction between specifying and non-specifying information in this account; because specification is negotiated into being as the organism-environment fit improves with actions, these aren't different in kind. Second, issues such as normativity (was the action appropriate to the circumstances?) are defined pragmatically; if it works, it was good, and if something worked better, it was better. Van Dijk & Kiverstein pitch both of these as advantages, part of getting all the ecological ducks into a radical empiricist row. </span></div><h3 style="text-align: left;"><span style="font-family: arial;">How All This Enables Direct Perception</span></h3><div><span style="font-family: arial;">In this view, direct perception is not possible because of pre-given specifying, meaningful information about the world. Instead, it has to be possible as the result of skilled activity;</span></div><div><blockquote><span style="font-family: arial;">Specification...is an outcome that takes form over the course of ongoing situated activity. Perceiving is the process of achieving an information-relation. In this ongoing process, the active animal explores for and uses the patterning available in the ambient array so as to establish an information-relation. 
The animal achieves coordination by using ambient patterns in the activities of '<i>listening, touching, smelling, tasting, and looking'</i></span></blockquote><span style="font-family: arial;"><i></i></span></div><div><span style="font-family: arial;">How you've done this in the past and how you are doing it now then maintain and develop the specification-in-action. </span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">I have a lot of questions. When this is still in the early stages of development, is there still specification because the process is going on? Or is there not yet specification, because it's still early days and the animal-environment fit isn't optimal yet? If the latter, is the perception that's happening still direct? How? And how does all this get going? How do I get into this perception-action loop of developing specification in action if none of the available patterns mean anything? Which patterns do I try to use, and why? Why do I change which patterns I'm using, and how? I'm sure there are answers possible with this framework, so I don't mean these as slam-dunk problems; but I would like some answers, my answers drag this right back into Turvey-land, and this paper does what most of these accounts do and starts with the up-and-running trained system. </span></div><h3 style="text-align: left;"><span style="font-family: arial;">The Social World as a Medium</span></h3><div><span style="font-family: arial;">Van Dijk & Kiverstein note that the standard view of information as specifying variables struggles to explain how the social world can constrain our behaviour (this bit made me wonder if it's actually true and <a href="https://twitter.com/PsychScientists/status/1462748252403650562?s=20" target="_blank">write this Twitter thread in which I draft the idea that it's not actually a problem</a>). 
The worry is that social practices aren't obviously specified in ambient arrays, but from their account this is ok, if we treat sociomaterial practices as a medium in which human activities unfold. (One issue here: we don't perceive media, we perceive in them. So how does a medium per se shape our behaviour?)</span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">The idea here is that the constraints of the social world are what make it so that the material things that we engage with have social meaning. For example, when I interact with an alarm clock in order to wake up at a certain time so as to get to work on time, what makes that interaction mean all that is the socio-cultural medium in which alarm clocks play certain roles and I have deadlines that align with the activities of other people, who are all also working within the same medium. When we do this right, the medium acts as a medium and we all arrive in the same place at the same time for our meeting. When we do this wrong, these things all create resistance and trouble; the analogy is how water shifts from a medium for locomotion to a substance that impedes locomotion and vice versa. </span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">Effectively, the meaning of everything is still always being negotiated into existence, and in socio-cultural contexts, that negotiation occurs in the medium of the social world and its conventions. This is again supposed to be the same kind of thing as negotiating the meaning of higher-order patterns in ambient arrays, which occurs in the medium for perception and its laws and conventions. 
</span></div><h3 style="text-align: left;"><span style="font-family: arial;">All of This is Direct Perception</span></h3><div><span style="font-family: arial;">The final section is intended to justify the claim that this usage notion of information and the medium still allows direct perception. The argument takes the form of an extended analysis of enacting a given affordance that is initially distant in space and time (the affordance of catching a train; whether this is actually an affordance is another question, but the argument is premised on the idea that everything a person is doing as they work towards catching the train is organised with respect to catching the train). </span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">Direct perception is what emerges over the activities, each of which is done as part of catching the train and done in a way to reflect that goal (e.g. you walk to the train station, and you do that walk in a way to get you there on time for the train). Everything unfolds over time and converges on the actual act of stepping onto the train; specification of the train catching affordance is achieved, not relied on. </span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">Effectively, this is still just an interesting way to describe the phenomenology of doing things to catch a train, but it still doesn't really explain how it all works. </span></div><h3 style="text-align: left;"><span style="font-family: arial;">My Thoughts</span></h3><div><span style="font-family: arial;">I think this paper is well-motivated. It is attempting to do more nitty-gritty work on how it can be possible that behaviour can be organised with respect to a spatially and temporally distant goal (such as catching a train) without invoking mental models/representations. 
It tries to do this by deconstructing some key parts of the ecological analysis of direct perception, then reconstructing them within a single, more coherent, radical empiricist framework. The authors identify that direct perception needs a concept of the medium (the context within which meaning is established), and opt for one in which meaning is never pre-given but only ever negotiated by the activity of the organism in the medium. This radical empiricist concept of the medium can be applied to media for perception (such as the optic array), media for action (such as water), and media for culture. Patterns in any of these media can come to mean things via skilled action, and so behaviour in all these media is the same kind of thing. </span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">As with all this enactivist-style work, we end up with a nice, clear description of what's going on, but no clear reason to accept this description over any other. Take the discussion of catching the train - the word 'anticipation' is doing a lot of work that could readily be done by 'prediction'. In order to create an argument in favour of the former over the latter, we're owed an account of how this actually works, how this actually plays out. This paper doesn't provide that. This is ok - perhaps a future paper will do it using this one as a guide. In one sense, this paper is decomposing a theory that can generate explanations of behaviour, reconceptualising some of the resulting parts and processes while working to keep all the necessary properties, and then resituating the new parts into a theory they claim can do the same work as the old one, plus some new work, all within a single coherent approach. 
Take specification. Van Dijk & Kiverstein clearly recognise that specification is required somewhere to support the directness of perception, but their analysis doesn't let them localise specification to the lawful relation between a surface and a pattern in an ambient array. They have to localise it elsewhere in the system, specifically in the process of perception-action. But this doesn't work; that process isn't up to implementing specification and you end up with 'varying amounts of specification', which is just not what specification means and isn't the kind of thing that can do what specification needs to be able to do in order to support direct perception. Ending up with parts and processes that can't implement properties required of the system is a hint that the scheme guiding the analytic decomposition of the system is flawed. </span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">This paper is a solid piece of work; the internal logic holds and the goal is noble. But it ends up with the same problem created by rejecting traditional notions of affordances (as dispositions) and information (as patterns in ambient arrays that specify those affordances) - it is entirely unclear how to develop this how-possibly verbal description of behaviour into a how-actually explanation of behaviour and there are many reasons to think it won't work. </span></div>Andrewhttp://www.blogger.com/profile/16732977871048876430noreply@blogger.com2tag:blogger.com,1999:blog-9192597712746432631.post-79851239389568282952021-11-18T14:33:00.004+00:002022-03-01T10:08:35.728+00:00Is Indirect Perception Plausible?<p><span style="font-family: arial;">There are two basic ways perception might work to let us experience the world in behaviourally relevant ways. Direct perception is the idea that perception only requires two components; the environment and the organism. 
Indirect perception is the idea that perception requires at least three components; the environment, the organism, and at least one other component that mediates between the organism and the environment. Over the last few posts, I've been working through the specifics of the ecological approach to making direct perception plausible, because this is a question I often get (usually in the form of 'I don't see how this could work in this case'). Regardless of whether or not it's correct, we can show that we have all the pieces needed to make direct perception work in principle, and the empirical programme is about seeing if it works in practice. What about indirect perception?</span></p><p><span style="font-family: arial;"><a href="https://twitter.com/PsychScientists/status/1460575103872376838?s=20" target="_blank">I asked this question on Twitter</a>, and one interesting thing I noticed was just how little sense the question seems to make to people these days. Responses fell into roughly two categories: 'I don't see how we can do without it in this case', and 'brains do stuff, so...', neither of which answer the question. Even if some form of indirect perception is required in those cases (which is, of course, still up for grabs) we're still owed an account of how this might work, at least in principle and then later in practice. </span></p><p><span style="font-family: arial;">People used to know this. The most recent indirect perception hypothesis is that the key mediator is a mental representation, understood as a computational, information-processing system implementing some form of inference that combines sensory data and information stored in memory to create a model of the world that represents the system's best guess about what is out there and how to behave successfully with respect to what's out there. 
This hypothesis didn't come out of nowhere; the development of computers and the theory of information that allows them to work turned out to provide the pieces required to create a formal account of representations that stood a chance of living up to the challenge of explaining perception. Cognitive scientists therefore leaned heavily into the details of these pieces as they worked very hard, from the late 1950s on, to make indirect perception implemented this way plausible. </span></p><p><span style="font-family: arial;">The exact details of the process have, of course, changed and evolved with empirical data and developments in computational theory. For example, while all the accounts have to do inference that combines sources of information into a best guess, there are a variety of ways of doing inference, some better than others. Probably the best way to do inference is via Bayesian methods, and so most modern theories propose that indirect perception combines sources of information this way so as to be optimal. </span></p><p><span style="font-family: arial;">Before these inferential methods can even be brought into play, however, there remain two big, related unanswered challenges that need to be addressed. The first one is the grounding problem; how do representations get the content they need so as to combine sources of information in a way that works? It's all very well describing the inferential process of the fully formed system, but how do you build one in the first place? The second is the 'which representation?' problem; of all the different sources of information the system has to combine, how does it know which information to bring together for a given task? These reflect a circular problem indirect theories create for themselves. If perception is not good enough to be direct, and thus requires representational support, where do those representations come from? 
In order for a theory of indirect perception to be plausible, these must be addressed (analogous to how in order for a theory of direct perception to be plausible, questions like <a href="http://psychsciencenotes.blogspot.com/2021/10/are-affordances-plausible-updating-some.html" target="_blank">'can the physical world present itself in behaviourally relevant ways?'</a> had to be addressed). </span></p><p><span style="font-family: arial;">I am not going to address these challenges to indirect theories, because it isn't my job. But they are legitimate questions that people have mostly stopped asking. Debates about the form and content of representations were prominent and explicit right up until the end of the 1990s, and then it all just seemed to stop. Interface theory, for all its problems, at least got back into the fight and tackled the grounding problem (<a href="https://www.tandfonline.com/doi/pdf/10.1080/09515089.2021.1937592" target="_blank">unsuccessfully, I've argued</a>, but it was a solid swing and at least Hoffman recognised he owed us an account). Mark Bickhard's work is probably the only currently active research programme explicitly working out the details, but I don't know many scientists who even know who he is, and a lot of his work is about mapping out the rules of living up to the challenge, versus actually solving the problem. </span></p><p><span style="font-family: arial;">Until these foundational issues are addressed and answered, whether indirect perception is plausible remains unclear, and no matter how sophisticated your inferential machinery is (looking at you, free energy principle) it can't help until you explain how it came to be organised that way in the first place. Even if the ecological theory of direct perception doesn't hold up, representational theories of indirect perception are not viable options if they cannot be shown to be plausible. 
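As an aside, for concreteness: the 'optimal combination' claim mentioned above is standardly cashed out as precision-weighted averaging of independent Gaussian cues. Here is a minimal sketch (my own illustration; the function name and numbers are hypothetical, not drawn from any specific model in the literature):

```python
# Minimal sketch of Bayesian ('optimal') cue combination for two independent
# Gaussian cues. The combined estimate is a precision-weighted average, and
# the combined variance is smaller than either cue's variance alone.

def combine(mu1, var1, mu2, var2):
    """Precision-weighted combination of two Gaussian estimates."""
    w1 = (1 / var1) / (1 / var1 + 1 / var2)  # weight on cue 1 (its relative precision)
    mu = w1 * mu1 + (1 - w1) * mu2           # combined best guess
    var = 1 / (1 / var1 + 1 / var2)          # combined uncertainty
    return mu, var

# e.g. a reliable visual cue (variance 1) and a noisy haptic cue (variance 4)
mu, var = combine(10.0, 1.0, 14.0, 4.0)
# the combined estimate sits closer to the reliable cue (10.8),
# with lower variance (0.8) than either cue alone
```

The reliable cue dominates, and the posterior is more certain than either input; this is the precise sense in which such combination is 'optimal', and it is exactly the machinery that, per the argument above, cannot get going until the grounding and 'which representation?' problems are answered.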
</span></p>Andrewhttp://www.blogger.com/profile/16732977871048876430noreply@blogger.com2tag:blogger.com,1999:blog-9192597712746432631.post-74339337875815271992021-11-03T16:49:00.002+00:002021-11-03T16:49:19.069+00:00Is Direct Perception Plausible? Ecological Information<p><span style="font-family: arial;">We're in the home stretch of working through how direct perception is, at least, an option, and how the ecological approach in particular attempts to make it work. We've talked about <a href="http://psychsciencenotes.blogspot.com/2021/10/what-does-it-mean-for-perception-to-be.html" target="_blank">what direct vs indirect means</a>, <a href="http://psychsciencenotes.blogspot.com/2021/10/are-affordances-plausible-updating-some.html" target="_blank">the kinds of properties direct perception needs to be out there in order to work</a>, and <a href="http://psychsciencenotes.blogspot.com/2021/11/is-direct-perception-plausible-case-for.html" target="_blank">affordances/effectivities-as-dispositions as the specific properties ecological psychology claims are out there and fit the bill</a>. </span></p><p><span style="font-family: arial;">I ended that last post by highlighting ecological psychology had one last thing to do in order to be plausible, and that is to have a way to bring affordances and effectivities together into a kind of contact that allows them to work together. That contact can't be mechanical, or simply physical proximity, because almost all of the things we perceive and act with respect to are not in that kind of contact. The ecological solution is <i>informational contact</i>, and so this post will build on the pieces I've assembled to identify what kind of thing ecological information has to be in order to work. 
</span></p><span><span style="font-family: arial;"><a name='more'></a></span></span><h3 style="text-align: left;"><span style="font-family: arial;">The Objects of Perception</span></h3><div><span style="font-family: arial;">This phrase refers to the question of what it is about the world that perceptual systems are initially, preferentially tuned into. Ecologically, the objects of perception are possibilities called dispositions, specifically the possibilities for action we call affordances (dispositions of the environment) and effectivities (dispositions of the organism). </span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">We established last time that dispositions are higher order properties that are formed by how simpler material properties are arranged in space with respect to one another. Two posts ago, we established that higher order properties are allowed to be the kinds of things that are measured directly, if the measurement device is of the right kind; recall the example of the polar planimeter. Higher order properties are legitimate potential objects of perception, so that's all good. </span></div><h3 style="text-align: left;"><span style="font-family: arial;">Information for the Objects of Perception</span></h3><div><span style="font-family: arial;">Indirect theories of perception get to these kinds of higher order properties by detecting the lower order pieces and somehow implementing the operation required to place them into the correct relation. This is what it means when indirect theories talk about constructing a mental model of the world. </span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">Direct theories aren't allowed to do this, so we need there to be information variables available to perceptual systems that are already <i>about </i>our higher order affordance/effectivity properties. 
Gibson pointed this out when he said</span></div><div><div></div><blockquote><div><span style="font-family: arial;">The central question for the theory of affordances is not whether they exist and are real but whether information is available in ambient light for perceiving them.</span></div><div style="text-align: right;"><span style="font-family: arial;">Gibson, 1979, pg 132</span></div></blockquote><div></div></div><div><span style="font-family: arial;">We have some constraints to work within. In order for there to be information about a higher-order property like an affordance/effectivity, that information itself has to be a higher-order variable that can be measured in and of itself. We can point to the polar planimeter again to say that this isn't an immediate dead-end. </span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">That information variable also has to be about the affordance/effectivity property; this is harder.</span></div><h3 style="text-align: left;"><span style="font-family: arial;">Is Information a Copy of an Affordance Property?</span></h3><div><span style="font-family: arial;">One option could be that the information is a copy of the affordance/effectivity property, just implemented in a perceptual medium rather than an object. That would make it trivially "about" the affordance/effectivity, and detecting information that was a copy of an affordance would bring that copy into physical proximity to the effectivities. This idea is generally considered unworkable (<a href="http://psychsciencenotes.blogspot.com/2020/06/lecture-4-simulative-projective-and.html" target="_blank">Turvey, 2019, Lecture 4</a> gets into it), and would also mean perception was not directly of affordances; the object of perception would be an information variable. 
In the particular case of ecological psychology, this copy hypothesis really fails because of how information is created, and because of a key limitation of the media it is created in. Bingham (1988) analysed this idea as the 'perceptual bottleneck'.</span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">Affordances and effectivities are properties of the physical environment. At the ecological scale, a complete description of a physical property requires a dynamical description; specifically, one that uses units of time, position, the temporal derivatives of position like velocity, and mass. We need these units to characterise where objects are and how they respond to forces trying to change that state. However, we can generate complete descriptions of informational media such as the optic array with only kinematic descriptions; time, position, the derivatives, but no mass. This is because at the ecological scale, these media are relatively low-energy; they don't cause things to change state via the transmission of forces. You don't try to catch a ball because the light pushed your hand. </span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">Kinematic information can therefore not be a copy of a dynamical property, because it is missing a dimension. We need another way to build information variables in low energy perceptual media such that these higher-order patterns are about the affordance property. </span></div><h3 style="text-align: left;"><span style="font-family: arial;">Affordances Entail Information, Information Entails Affordances</span></h3><div><span style="font-family: arial;">In order for one thing to be about another thing without any additional factors being included, the things must <i>entail </i>one another. An entailment is what we call it when you have a setup that propagates truth. 
So for example, 'X entails Y' means that the presence of X means Y is also present. If you also have 'Y entails X' then the presence of Y means X is also present. </span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">The whole package required for these two things to be intrinsically about each other is "X entails Y and Y entails X"; this circular loop is another example of impredicativity, and of the kind of loopiness we need in general in direct perception. The ecological approach needs to be able to say 'affordance X entails information Y, <i>and </i>information Y entails affordance X'; this way, if the affordance is present you get the information, and if you have the information you also effectively have the affordance. </span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">Entailments aren't mediating states; if you're doing a little 'if X then Y' analysis in your head, you're doing something indirect. In order for affordances to entail information, and for information to entail affordances, they need to be related in a way that implements the entailment. The best way is via a law. </span></div><h3 style="text-align: left;"><span style="font-family: arial;">Affordances & Effectivities Lawfully Create Information</span></h3><div><span style="font-family: arial;">Affordances and effectivities are real physical properties (just weird dispositional ones). Real physical properties are the kind of thing that light can bounce off, or that can create waves in the atmosphere. These low-energy perceptual media can bear the consequences of that interaction; one way is how interactions can filter the available frequencies so that an object appears red, for example. The medium (e.g. photons) hits the affordance/effectivity, bounces off, and is transformed into a structured array. 
Each affordance/effectivity property structures media differently, but because of the laws of physics, a given affordance property always structures media the same way.</span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">The net result is that when an affordance is present in a perceptual medium, it creates one and only one higher order pattern in that medium. We call this higher order pattern an information variable. The affordance entails that information variable, and the variable entails the affordance, all because of the law that governs how the one creates the other in a perceptual medium. We've established that the information variable cannot be a copy of the affordance property, but because there is a law-guaranteed 1:1 relationship between them (each affordance creates one variable), we <i>can</i> say that the information variable <i>specifies</i> the affordance. </span></div><h3 style="text-align: left;"><span style="font-family: arial;">Connecting the Pieces</span></h3><div><span style="font-family: arial;">Now we have everything we need. There are higher-order behaviourally relevant properties in the environment, specifically a subset of dispositions we call affordances. There are also higher-order environmentally relevant properties in the organism, specifically a subset of dispositions we call effectivities. These come in complementary pairs; an affordance implies an effectivity, and vice versa (they don't entail each other though; not every affordance can be currently effected, which is why the Olympics is still exciting). </span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">These dispositional properties are dynamical properties. They can interact with a variety of low-energy perceptual media (light, the atmosphere, etc). 
When they do, that interaction is governed by ecological-scale laws of physics that mean, for example, that a given affordance will produce one and only one pattern in a given medium, and it will always produce that pattern. The laws underpin an impredicative entailment; the affordance entails the information, and the information entails the affordance. </span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">If an organism's perceptual system reacts to such an information variable (and not the parts), then this connects an affordance disposition to an effectivity disposition and that disposition can become an actuality; a behaviour. </span></div><h3 style="text-align: left;"><span style="font-family: arial;">Conclusion</span></h3><div><span style="font-family: arial;">Direct perception is plausible. Suitably weird physical properties are legitimate options at a variety of physical scales, up to dispositions at the ecological scale. These properties can interact with the low-energy perceptual media that organisms and their environments are embedded in, and the way they interact is governed by laws of physics. These laws mean that a given affordance property will always produce one and only one pattern in a given medium; that pattern is not identical to the affordance but the 1:1 mapping means that it does specify the affordance. Detect that information, and you perceive the affordance, no additional mediating steps required. Perception can be direct.</span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">So it's possible. But we've had to do a ton of work to get here, in which we've gone hunting for weirder physical properties and had to learn a bunch of intimidating words. This is all ok; there was no reason understanding perception had to be easy. 
It does mean I've run the risk of not closing all the gaps in the story here, just because I might have missed something. Let me know if you spot any. But to emphasise, direct perception is not just a tweak on indirect perception - it is different, root and branch. </span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">To finish, let me paraphrase the Gibson quote from above: "The central question for the ecological theory of direct perception is not whether it's technically possible but whether affordances and information are actually being used to implement direct perception". At this point, allow me to introduce you to 40+ years of research and encourage you to dive in! That said, I do have one more post in mind for this series; fronting up to the challenges of trying to do science on a theory of direct perception and how the ecological approach varies quite a lot in how well we are managing. This shit is, it turns out, really bloody hard. </span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">But that's ok too. That just makes it fun - it would be a bummer if Gibson had figured it all out by 1979 :)</span></div>Andrewhttp://www.blogger.com/profile/16732977871048876430noreply@blogger.com2tag:blogger.com,1999:blog-9192597712746432631.post-47940717341611141442021-11-02T16:22:00.005+00:002021-11-02T16:22:31.248+00:00Is Direct Perception Plausible? The Case for Affordances-as-Dispositions<p><span style="font-family: arial;">The first post in this series <a href="http://psychsciencenotes.blogspot.com/2021/10/what-does-it-mean-for-perception-to-be.html" target="_blank">laid out the rules for what makes a theory of perception direct or indirect</a>. 
In order to avoid having to require organisms to somehow figure out behaviourally relevant properties (indirect perception), direct perception requires that such properties are out there already, ready and able to be detected. Such properties are a bit weird - by definition, they must be properties of objects that include some reference to the organism doing the detecting (specifically, they are impredicative properties). </span></p><p><span style="font-family: arial;">The second post <a href="http://psychsciencenotes.blogspot.com/2021/10/are-affordances-plausible-updating-some.html" target="_blank">laid out some reasons to think that such circularly defined properties are legitimate options for the physical world</a>. Properties like this are required by the mathematics of the quantum scale; and so, while a bit weird, are possible. Organisms don't work with quantum properties, though, so if they only happen at that scale, this doesn't help a theory of direct perception. The key to making impredicative properties work is measurement. At the quantum scale, all properties are uncertain until measurement collapses the uncertainty of the thing measured. At the ecological scale, properties aren't uncertain in this way, but which properties are 'primary' still depends on the measurement device; to a polar planimeter, for example, the higher order property 'area' is simple and lower order properties such as 'length' must be figured out. The reverse is true of a ruler. </span></p><p><span style="font-family: arial;">We have a set of pieces now. Impredicativity requires measurement to affect which properties are immediately available to the measurer, this is possible at the ecological scale, and with the right measurement device higher-order properties can be immediately available. This establishes an in-principle case in favour of direct perception as an option. 
Next, if we can identify the higher-order form of behaviourally relevant properties, we will identify what kind of measurement device is required for these to be directly available. </span></p><p><span style="font-family: arial;">The main ecological hypothesis is that higher-order behaviourally relevant properties, which we call affordances, take the form of dispositional properties. This post will walk through how this works and how it fulfils what's required to support direct perception.</span></p><p><span style="font-family: arial;">(As many of you know, not everyone is happy with the dispositional ontology, and some propose instead that affordances are relations. I am not going to lay out the parallel case for how this ontology fulfils the requirements I have been laying out, for the simple reason that I do not think it can do any such thing and I have yet to see any successful attempt to make it work. <a href="https://psyarxiv.com/5qb92/">I talk about this in this paper</a>, currently still living in review limbo.)</span></p><span><span style="font-family: arial;"><a name='more'></a></span></span><h3 style="text-align: left;"><span style="font-family: arial;">Dispositions</span></h3><div><span style="font-family: arial;">Dispositions are higher order properties of physical objects that define something that the object is disposed to do, under the right conditions; something that is <i>possible</i>, although not currently happening. They are composed of lower order properties placed in some relation to each other, but it's the higher order disposition this arrangement makes that then acts as a property that can interact with other properties. In particular, the higher-order disposition defines a complementary higher-order disposition; when both are present, the dispositions manifest (i.e. 
become actual, rather than simply possible).</span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">Take the disposition of salt to be dissolved in water. This disposition is constituted by the fact that salt is made of ions arranged in a strong but breakable lattice. If the positive ions are pulled one way and the negative ions are pulled in another, the lattice will come apart; the salt will dissolve. Notice that this dispositional property is inherently complemented by another dispositional property, namely the ability to pull the charged ions in different directions. Water has this property; water molecules have positive and negative ends and can pull hard enough to disrupt the lattice.</span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">Dispositions are a little weird, but the weirdness is exactly the kind of weirdness we've been looking for. Specifically, they have the kind of circularity that is the hallmark of impredicativity. The disposition of salt to be dissolved is implemented by the material properties of the salt, but it is a property that is <i>at the same time</i> defined by the disposition of water to dissolve salt. You cannot simply predicate 'will dissolve' to salt, because salt will only dissolve under certain specific circumstances. The full physical account of this property requires reference to the complementary dispositional property of a solvent, and the conditions required to bring them together in a way that allows the disposition to become actual. 
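The salt-and-water pairing can be caricatured in code. A minimal sketch (my toy illustration, not anything from the dispositional literature; the Substance class and its flags are invented for the example): the 'dissolves' property cannot be computed from either substance alone - it is only defined over the complementary pair, which is the impredicative circularity in miniature.

```python
# Toy sketch (invented illustration): a disposition of one thing is only
# fully defined by reference to a complementary disposition of another thing.
from dataclasses import dataclass

@dataclass
class Substance:
    name: str
    lattice_breakable: bool = False   # disposed to be dissolved (e.g. an ionic lattice)
    polar_molecules: bool = False     # disposed to dissolve such lattices

def dissolves(solute: Substance, solvent: Substance) -> bool:
    # 'Will dissolve' lives in the *pair*, not in either substance alone:
    # both complementary dispositions must be present for the manifestation.
    return solute.lattice_breakable and solvent.polar_molecules

salt = Substance("salt", lattice_breakable=True)
water = Substance("water", polar_molecules=True)
oil = Substance("oil")  # non-polar: lacks the complementary disposition

print(dissolves(salt, water))  # True  - the dispositions can manifest
print(dissolves(salt, oil))    # False - salt's disposition stays merely possible
```

The point of the sketch is that you cannot predicate `dissolves` to `salt` by itself; the function signature forces you to supply the rest of the system.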
</span></div><h3 style="text-align: left;"><span style="font-family: arial;">Affordances and Dispositions</span></h3><div><div style="text-align: left;"></div><blockquote><div style="text-align: left;"><span style="font-family: arial;">The affordances of the environment are what it offers the animal, what it provides or furnishes, either for good or ill....an affordance is neither an objective property nor a subjective property; or it is both if you like. An affordance cuts across the dichotomy of subjective-objective and helps us to understand its inadequacy. It is equally a fact of the environment and a fact of behavior. It is both physical and psychical, yet neither. An affordance points both ways, to the environment and to the observer.</span></div><div style="text-align: right;"><span style="font-family: arial;">Gibson, 1979, pg. 119</span></div></blockquote><div></div></div><div><span style="font-family: arial;">Gibson's definition gets accused of being vague and annoying, but immediately resonates with the facts about dispositions. Dispositions don't sit comfortably within the objective-subjective analysis of properties, but are perfectly real, suggesting that analysis is inadequate. They are equally facts of the thing that has the disposition, <i>and </i>the thing that has the complementary disposition. Dispositions also point both ways - if you characterise the disposition of salt to dissolve, you also characterise what is required of a solvent to make that disposition occur (this is how new solvents get invented!).</span></div><h3 style="text-align: left;"><span style="font-family: arial;">Affordances and Effectivities as Dispositions</span></h3><div><span style="font-family: arial;">So we seem to have a good match; dispositions are perfectly real properties, and they implement everything required to make them the kind of circularly defined things behaviourally relevant properties need to be. 
We have a viable ontology for things like affordances to be real. </span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">The specific ecological form of the dispositional analysis goes as follows. Objects in the physical environment are disposed to be acted on by an organism in some ways and not others. Those dispositions are higher-order properties of the object constituted by a particular arrangement of currently present material properties of the object, and we call these <i>affordances</i>. At the same time, organisms are disposed to be able to act on objects in some ways and not others. These dispositions are higher-order properties of the organism, constituted by a particular arrangement of currently present material properties of the organism, and we call these <i>effectivities</i>. Affordances and effectivities are just complementary dispositions, but we name them differently to keep track of them in our analysis, because it will matter which one we are talking about at any given time. </span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">When an organism encounters an object, one way it can do so is by bringing its effectivities into suitable contact with the object affordances. When it encounters an object this way, the affordance picked out by the effectivity is the 'simple' property, in the way area is simple to a planimeter and the way the disposition of salt to dissolve is simple to water. The effectivity and the affordance come together and their dispositions (to act on, and be acted on) are made actual; we have a behaviour. </span></div><h3 style="text-align: left;"><span style="font-family: arial;">Summary</span></h3><div><span style="font-family: arial;">We are doing well in our bid to show that direct perception is at least plausible, but we aren't all the way there yet. 
So far, we've shown that higher-order, impredicative properties are allowed, and that at the ecological scale the ones we need are called dispositions. While weird, they are perfectly legitimate properties and they are weird in just the ways we need. Affordances and effectivities, considered as dispositions, give us an ontology for describing the world in behaviourally relevant terms that can present themselves as properties, with no need to figure them out. </span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">We have one last hurdle to cross, however, and it's a doozy. I slid by it above - affordance dispositions and effectivity dispositions are legitimate ways to describe world and organism properties, but they only go from 'possibilities' to 'actualities' if the two can be brought 'into suitable contact' with each other. For things like salt and water, that suitable contact is physical contact; a glass of water next to a salt shaker does not count as the two being in suitable contact for the salt to dissolve. Organisms are almost never in physical contact with their environments; the only things touching me that way right now are my clothes, my chair, a small piece of floor and my keyboard. But I can behave with respect to affordances that are more distant; for the ecological account of how this works to work, we need to explain what 'suitable contact' means for affordances and effectivities. </span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">We need a theory of information. That's next.</span></div>Andrewhttp://www.blogger.com/profile/16732977871048876430noreply@blogger.com1tag:blogger.com,1999:blog-9192597712746432631.post-1370339702192087682021-10-28T18:25:00.001+01:002021-10-28T18:25:24.870+01:00Are Affordances Plausible? 
Updating Some Intuitions<p><span style="font-family: arial;">In the <a href="https://psychsciencenotes.blogspot.com/2021/10/what-does-it-mean-for-perception-to-be.html" target="_blank">previous post</a>, I laid out the basic distinction between direct and indirect theories of perception. The basic issue is how to get to behaviourally relevant properties. Such properties of the world have to be partly about the organism, and not just the world; not just 'mass', which is something about the world, but 'moveability' which is something about the world, relative to the organism. If there are properties like these out in the world, and we can perceive them, then perception can, in principle, be direct. If there aren't properties like these there to be perceived, we would have to invent them, and perception would have to be indirect. The ecological hypothesis has to begin by finding behaviourally relevant properties out in the world, ready to be perceived without having to be invented; we need to find <i>affordances</i>. </span></p><p><span style="font-family: arial;">The problem is that this seems to be madness, right off the bat. It seems bizarre to think that things separate from the organism could possibly have properties that have anything to do with the organism. Why would they have such things? Surely things can only have properties that are about themselves and make them what they are? </span></p><p><span style="font-family: arial;">This post will talk about some pieces of information that make this at least plausible; the next post will talk about the specific ecological hypothesis about affordances that tries to implement that plausibility. To get to that plausibility, I am going to have to update your intuitions about what is simple, and what sorts of things can be physically real, and along the way introduce some useful vocabulary I will do everything I can to use clearly. 
<span></span></span></p><a name='more'></a><span style="font-family: arial;"><span></span></span><p></p><h3 style="text-align: left;"><span style="font-family: arial;">1. Predicative and Impredicative Properties (A Little Vocab)</span></h3><p><span style="font-family: arial;">Where do properties live? The obvious answer is that they live in the thing that 'has' that property; that's what 'having that property' means. There are lots of properties that happily work this way; we can sensibly say that an object 'has' a certain mass and that's a complete story, no further information required. In the lingo, we can <i>predicate</i> the property 'mass' to an object. </span></p><p><span style="font-family: arial;">Other properties are more complicated. For example, 'moveability' isn't a property that we can predicate to just an object. For the full story, we need to include some information about the thing trying to do the moving. I can't move a 1-ton anvil but a crane can - same object, different mover, different moveability. 'Moveability' is an <i>impredicative </i>property, defined not just with respect to a single object but with respect to the more distributed physical system that is implementing the property. </span></p><p><span style="font-family: arial;">For 300 years or so, physics has considered predicative properties to be objective and primary, while impredicative properties are subjective and secondary. Primary properties are what's out there regardless of who's observing, and secondary properties have to be constructed by some kind of negotiation between the primary properties and observers. They are different in kind, and differ in status - primary properties are what's real, secondary properties are the kinds of weird things cognition has to invent. If this is correct, no affordances for us. 
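The anvil example can be made concrete with a toy sketch (mine, purely illustrative; the names and numbers are invented): 'mass' is a fact about the object alone, while 'moveability' only gets a truth value once the mover is specified too.

```python
# Toy sketch (invented illustration) of predicative vs impredicative properties.

anvil_mass_kg = 1000.0  # predicative: a property of the anvil alone

def moveable_by(object_mass_kg: float, mover_lift_capacity_kg: float) -> bool:
    # Impredicative: the property is defined over the object-plus-mover
    # system. Same object, different mover, different answer.
    return mover_lift_capacity_kg >= object_mass_kg

print(moveable_by(anvil_mass_kg, 90.0))      # a person lifting: False
print(moveable_by(anvil_mass_kg, 20000.0))   # a crane lifting:  True
```

Notice there is no way to write `moveable()` as a function of the object alone; the mover has to appear in the definition, which is the whole point.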
</span></p><p><span style="font-family: arial;">One of the big changes to physics in the 20th and 21st centuries, however, has been discovering that the universe is basically made of impredicative properties. We call this quantum mechanics. For example, is a photon a wave or a particle; which set of properties can we predicate to the photon? Neither, and both; the (slightly terrifying) answer is that it depends on how you measure the photon. If you ask it wave questions, it behaves like a wave, but if you ask it particle questions, it behaves like a particle. In terms of our spoken language, this inability to assign properties to things is a worrying absurdity, but in terms of the mathematics it's perfectly acceptable and, most importantly, a central reason why quantum mechanics works so incredibly well.</span></p><p><span style="font-family: arial;">So we have accomplished one thing - at least some physical properties can be impredicative and be perfectly real. But are they just weirdos living in the quantum scale? The specific mechanism that supports impredicative properties at the quantum scale only lives and works there; at that scale, all properties are indeterminate until measurement resolves the uncertainty one way or another. At larger (e.g. ecological) scales, the uncertainty is already resolved prior to measurement. However, measurement <i>does</i> affect which properties can be taken as given, and which have to be figured out.</span></p><h3 style="text-align: left;"><span style="font-family: arial;">2. The Lesson of the Polar Planimeter (<a href="https://onlinelibrary.wiley.com/doi/abs/10.1111/j.1467-9450.1977.tb00274.x" target="_blank">Runeson, 1976</a>)</span></h3><div><span style="font-family: arial;">Imagine you have a pond, and you would like to know what area it has. Area is a higher-order (order 2) property; it is made of two lower-order measurements of length (order 1) that we use in a multiplication operation to make area. 
A priori, length is 'simpler' than area. </span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">However, it turns out that which of the two is the 'simple' property depends on how you are measuring. If you measure your pond with a length-ometer (e.g. a ruler), area is nowhere to be seen and has to be constructed via a mediating process. However, if you measure your pond with an area-ometer (e.g. a <a href="https://en.wikipedia.org/wiki/Planimeter" target="_blank">polar planimeter</a>), then <i>length </i>is nowhere to be seen and has to be constructed via a mediating process. Importantly, polar planimeters exist and are therefore options. </span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">The lesson is this: how the world presents itself, which properties it makes immediately available, depends on the measurement device. The a priori analysis of 'simple' vs 'complex' is incomplete, and as in quantum mechanics, the full specification of what properties are present in what ways requires details of the measurement device.</span></div><h3 style="text-align: left;"><span style="font-family: arial;">Summary</span></h3><div><span style="font-family: arial;">Behaviourally relevant properties are a worry. It seems that by definition they aren't just properties we can predicate to things in the world, but that they are impredicative properties that are defined with respect to the organism-world system, which makes them the kind of properties that have to be made by some process operating on the simpler elements. But we have learned a couple of things. </span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">First, impredicative properties are legitimate options for physical properties; everything at the quantum scale is impredicative and that is the single most reliable fact in science. 
Second, this kind of impredicativity is enabled by measurement. At the quantum scale, measurement is part of how properties are constituted, and while this part isn't true at the ecological scale, which properties of the world are 'simple' (i.e. don't need to be created) still depends on the measurement device at all scales. You can build a device that considers a higher order property as simple and the lower order properties as complex, or derived. Therefore, a measurement device that considers an impredicative behaviourally relevant property as simple is, in principle, a legitimate option. </span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">I tried really hard to write this post without getting into quantum physics or impredicativity. But I couldn't, because as I say at the start, making things like affordances plausible requires updating a few notions about what is allowed. This post is weird in another way; I've talked about quantum properties and polar planimeters but perception is not a quantum phenomenon nor is it made out of planimeters. So bear with me; at this point I am just laying out some concrete examples to show the things I need access to are physically possible. In the next post, I will go through the theory of affordances-as-dispositions, which is the particular attempt to implement all these ideas at the ecological scale. Yes, yes, I know - there's also affordances-as-relations. I'll get into how these do not hold up as a successful implementation of what affordances have to be in order to support direct perception too.</span></div>Andrewhttp://www.blogger.com/profile/16732977871048876430noreply@blogger.com1tag:blogger.com,1999:blog-9192597712746432631.post-69445325729930605772021-10-26T14:35:00.001+01:002021-10-26T14:35:52.375+01:00What Does it Mean for Perception to be 'Direct' vs 'Indirect'?<p><span style="font-family: arial;">The ecological approach is a theory of direct perception. 
Put simply, direct perception proposes that our perceptual experience of the world is not mediated by anything that sits between the world and that experience. Making this viable is a big challenge, however, and the idea seems preposterous on the face of it to many people. </span></p><p><span style="font-family: arial;">In this post, I want to lay out the basic idea of direct vs indirect perception, and then explain how each approach addresses the problem of how we perceive the world. There will be some big words, but I will aim here to place them in a context that supports them usefully, and focus mainly on the straight-forward ideas in play. In a future post, I will ask the question 'what do we know about how plausible direct and indirect theories of perception are?'</span></p><span><span style="font-family: arial;"><a name='more'></a></span></span><h2 style="text-align: left;"><span style="font-family: arial;">Perceptual Experience</span></h2><div><span style="font-family: arial;">All theories of perception begin with the fact that we experience a rich, detailed world full of things we can and can't do, should and shouldn't do. Specifically, we experience a behaviourally relevant world. It's important that this experience is functional with respect to the real demands of the world; that if we act on the basis of our experience, it works and doesn't lead to us dying horribly. How might this be possible?</span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;"><a href="https://psychsciencenotes.blogspot.com/2020/04/lecture-3-direct-perceiving-indirect.html" target="_blank">There are two basic options</a>. First, perceptual experience might be the result of a two-term relation, with 'us' and 'the world' being the two terms. This would make perception <i>direct</i>. 
Second, perceptual experience might be the result of an at-least-three-term relation, with 'us', 'the world', and 'at least one other thing' being the terms. This would make perception <i>indirect</i>. </span></div><h3 style="text-align: left;"><span style="font-family: arial;">Indirect Theories of Perception</span></h3><div><span style="font-family: arial;">Indirect theories begin with the assumption that the world does not present itself to us in behaviourally relevant terms, and that we therefore need at least one mediating layer between us and the world in order to transform the way the world presents itself into the way we experience the world. These are broadly motivated and justified by certain theories of physics (i.e. theories of the physical structure of the world). In these theories, objects can have properties that we can assign to them (<i>predicate </i>to them) without any reference to anything else; these are <i>primary, objective </i>properties such as the mass of an object. They can also have properties that cannot be assigned just to them, and require something else in the definition; these are <i>secondary, subjective</i> properties such as the colour of an object. The objective world only contains primary properties. The problem for cognition is that primary properties are not behaviourally relevant properties; mass by itself does not define whether you can <i>move </i>something. Move-ability is, under this analysis, a secondary property, so we need an intervening process to get to these behaviourally relevant secondary properties - we must make the world a meaningful place. </span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">Over the centuries there have been a variety of contenders for what that mediating process is, but in modern theories of perception the mediation is implemented by representations. 
There are many different theories that vary in their details as they try to propose a representational format that can do this successfully (i.e. in a way that doesn't lead to us dying horribly). But across all of these, the core idea of a representation is that it takes the way the world presents itself to us, and processes that information into a re-presentation of the world in behaviourally relevant terms. We then behave as a function of the re-presentation. This feature of representations is called <i>designation</i> (Newell, 1980)<i>; </i></span></div><div><div style="font-style: italic;"><span style="font-family: arial;"><br /></span></div><div style="font-style: italic;"><span style="font-family: arial;">Designation: An entity X designates an entity Y relative to a process P, if, when P takes X as input, its behavior depends on Y</span></div><div style="font-style: italic;"><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">Recent formalisations of designation in the context of cognition can be seen in interface theory (e.g. Hoffman, Singh & Prakash, 2015) and theories of predictive processing (e.g. the free energy principle, Friston & Kiebel, 2009). These propose ways in which representations can be built by a cognitive system in a way that preserves designation, and explore the consequences for perceptual experience of these construction methods. 
The general consequence is that perceptual experience is not a physically accurate model (because it contains secondary properties) but that this is just the cost of doing business.</span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">Representational theories face two big challenges: explaining how adaptive representations full of useful secondary properties can be built up when the physical world provides no help on what those might be, and then explaining how we know which representation to deploy at any given time, again because the physical world provides no unambiguous help. </span></div><h3 style="text-align: left;"><span style="font-family: arial;">Theories of Direct Perception</span></h3><div><span style="font-family: arial;">Direct theories of perception instead begin with the assumption that the world <i>does</i> present itself in behaviourally relevant terms, and so we do not need any mediating layer to invent this for us. Instead, we need a mechanism for successfully engaging with that presentation. </span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">These theories have to begin with a different theory of the physical world. If the world only provides primary properties, but the properties we need are secondary, direct perception is an immediate non-starter. Some people argue that the physics of primary and secondary qualities is what physics has given us to work with, and if that's the case then perception must be indirect. The main move is to reject the (17th century physics) analysis of the world into primary and secondary properties, and embrace the more recent (21st century physics) claims that there are instead perfectly real properties that cannot be predicated so simply. These properties of things depend on how you ask about them; on how you <i>measure</i> them. 
Such properties are referred to as <i>impredicative,</i> and historically they have been frowned upon because their existence seems to require a circular definition. However, modern physics has spent 100 years showing that impredicative physical properties are not just legitimate options but also everywhere - quantum physics is a terrifyingly successful theory of impredicative properties. The ecological approach must therefore rely on a theory of impredicative physical properties at the ecological scale. We can point to the success of quantum physics to show this isn't an immediately doomed approach, although we cannot rely on the same set of impredicative physical properties as quantum physics; we have to find them ourselves.</span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">Ecological theories face two big challenges: identifying the physical basis that would allow the world to present itself in behaviourally relevant terms, and identifying the mechanism that supports engaging with that presentation. Our solution to the first challenge is <i>affordances</i>, and our solution to the second is <i>specifying ecological information. </i>As Gibson famously noted, identifying affordances is easy - they are definitely there. The real challenge is whether these are what we perceive - are these what information is about? (This is also the essence of the famous Fodor & Pylyshyn (1981) challenge to the ecological approach.)</span></div><h2 style="text-align: left;"><span style="font-family: arial;">Summary </span></h2></div><div><span style="font-family: arial;">This post lays out the basic problem facing all theories of perception, and the two basic solutions that have been proposed. In the next post, I will discuss how plausible these two solutions are. 
I want to note at this point that the plausibility question is generally only levelled at direct perception; most people find indirect perception to make more intuitive sense. But it's an important question to level at indirect perception as well, because when we do we find serious reasons to doubt that it's plausible at all. </span></div>Andrewhttp://www.blogger.com/profile/16732977871048876430noreply@blogger.com2tag:blogger.com,1999:blog-9192597712746432631.post-35839689291695326002021-09-08T11:42:00.000+01:002021-09-08T11:42:26.239+01:00Introducing...The Southampton Project<p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEid1COZQw7YXIQLTKjWjvzfoLCQc_yMiLNBgSBtDwoBJEmYU-iEGr_culDUq_XsljFesthlq_Y70-Pfj0Mr-d3b7JvS0E89zHHXv7wO2IUa6XK98z8lkl0qdEdl7IgM9u26vQhd1dF7ShQ/s1370/1200px-FC_Southampton.svg.png" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><span style="font-family: arial;"><img border="0" data-original-height="1370" data-original-width="1200" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEid1COZQw7YXIQLTKjWjvzfoLCQc_yMiLNBgSBtDwoBJEmYU-iEGr_culDUq_XsljFesthlq_Y70-Pfj0Mr-d3b7JvS0E89zHHXv7wO2IUa6XK98z8lkl0qdEdl7IgM9u26vQhd1dF7ShQ/s320/1200px-FC_Southampton.svg.png" width="280" /></span></a></div><span style="font-family: arial;">I am embarking on a very exciting and potentially game-changingly huge collaboration this year, with Southampton Football Club and a bunch of ecologically minded researchers. The goal is to develop a club-based Learning Lab centred on ecological approaches to skill acquisition and coaching, and to embed this Lab in an ecologically minded club-wide ecosystem. I'm working already with a lot of really good people, and key people in the club are fully on board with this (which is why it's now something rather than nothing!) 
</span><p></p><p><span style="font-family: arial;">In this post, I want to talk about how it came about, what I've been up to over the past year to get it moving, where we are now and where I want to take this in the near future. In line with my <a href="http://psychsciencenotes.blogspot.com/2016/04/the-art-of-new-collaboration.html" target="_blank">general theory of how to get a new collaboration up and running</a>, my initial focus has been on getting all the members involved in doing something we can achieve now, with the resources we have at hand, so that we can be moving and doing the things required to target additional funding and resources. In Phases 2 & 3, I want to open things up a little to begin getting those additional resources in place; if you are interested in being involved, <a href="mailto:a.d.wilson@leedsbeckett.ac.uk">feel free to drop me a line</a>.</span></p><span><span style="font-family: arial;"><a name='more'></a></span></span><h2 style="text-align: left;"><span style="font-family: arial;">How We Got Moving</span></h2><p><span style="font-family: arial;">Several years ago, I was contacted by Malcolm Frame, who now heads up the Psychology team at SFC. He's been interested in ecological and embodied theories for a long time, and he collects contacts with people like me. We had some good chats, swapped some papers, and thought of a few ideas. I went to visit the club in 2019, got shown around, and had some good chats with key people in the club. By the end of the day, Malcolm and I had ideas for three projects that would be worth trying to turn into something, although we had no real plans yet about how to actually make them happen. </span></p><p><span style="font-family: arial;">Then a pandemic kicked off, so nothing happened for a while. Stupid pandemic. 
</span></p><p><span style="font-family: arial;">In September 2020 I was back in the office, things were still weird but at least were more predictable, and so I dropped Malcolm a line to see if we could pick up where we had left off. Turned out we'd gotten lucky. The club had made several appointments of very ecologically minded people (in particular Iain Brunnschweiler and Mark Jarvis) to key leadership positions, and all of a sudden our ideas were landing on fertile soil. </span></p><p><span style="font-family: arial;">(Let me note a key point here in this era of knowledge transfer grants and pushes for collaborations with industry - if Mark and Iain weren't already in post, we would not have been able to create anything yet. Developing these kinds of collaborations depends very heavily on buy-in from the key industrial partner, and in my experience it's very hard to create that buy-in from the outside.)</span></p><p><span style="font-family: arial;">So, suddenly, I was in a situation where things could actually kick off (pun intended :) So I did what I always do - I went looking for people who could help and asked if they wanted to come play. </span></p><h2 style="text-align: left;"><span style="font-family: arial;">Building the Initial Team</span></h2><div><span style="font-family: arial;">Malcolm and I had decided that there were three key projects that were the right things to pursue first. First and foremost, we needed a project about coaching ecologically; coaching development is obviously the main interest of the club. Second, we wanted a project about ecological/embodied cognition approaches to sports psychology (this is what Malcolm has been trying to build during his time at SFC and what the club was keen to support him in). Third, we (well, I) wanted a project around ecological approaches to skill acquisition, because that's a research topic I can get my teeth into and it's also another key part of the ecological approach. 
</span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">The goal of these projects was to get us moving and start producing outputs we could show to the club and say 'hey look, we're achieving things'. In addition, they would be vehicles for establishing formal collaborations between the club and various universities. Leeds Beckett, where I am, is very hot on matched-funding PhDs just now as this sort of vehicle, so I applied for and got approval for one based with me. I'm not a coach or a sports psychologist though, so I needed better people to help lead. </span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">Over the years I've developed a network of ecologically minded researchers and coaches, mostly via Twitter, being on <a href="https://perceptionaction.com/">Rob Gray's podcast</a>, and chats with people. I reached out and started getting names of good people to talk to, and then I spent Sept-Dec 2020 reaching out to people and chatting. Turns out people were all lovely and all very interested; people connected me with other people, who connected me with others still, and after speaking to everyone I locked us down to developing three matched-funding PhD positions, one for each project. We were also able to set in motion an initial research project at the club that could get up and running right away.</span></div><h3 style="text-align: left;"><span style="font-family: arial;">Phase 1</span></h3><div><span style="font-family: arial;">The first half of 2021 was spent identifying specific funding for the PhDs and getting moving on the first project. </span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">The initial research project is a qualitative ethnographic study of the current state of the club culture. 
We were able to set this up quickly thanks to the involvement of Mark O'Sullivan, who has done this work at a club in Sweden. The framework is called Learning in Development (<a href="https://www.tandfonline.com/doi/full/10.1080/13573322.2021.1966618" target="_blank">O'Sullivan, Vaughan, Rumbold & Davids, 2021</a>) and essentially, the first thing to do is figure out where the club and its people currently are in their approaches. Mark is leading this along with Will Roberts and Keith Davids, and I was able to source funding from my Centre for Psychological Research (PsyCen) to pay for an RA and other costs. This is happening now, and I will blog it in more detail once I get better at understanding qualitative ethnography :) Do read Mark's paper though, he's been producing some great stuff lately and he's a real asset to my team. </span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">We have three PhD positions set up; two should be starting in October 2021, and one in January 2022. 
I've just confirmed David Alder from Carnegie will be the second supervisor, and I've appointed a strong candidate to the role, John Connolly, who starts 1 October. We'll be working with Future Performance Technology, in particular CEO Ciaran Toner, and using their VR platform. </span></li></ol><div><span style="font-family: arial;">So Phase 1 is effectively complete: we are up, rolling, and we have got literal buy-in from the club and from multiple universities. </span></div></div><h3 style="text-align: left;"><span style="font-family: arial;">Phases 2 and 3</span></h3><div><span style="font-family: arial;">My main job now is to build on this rapid success and create something that is sustainable and able to grow. I am therefore drafting plans for the next Phases. Details to come, but the basic plan is this:</span></div><div><ol style="text-align: left;"><li><span style="font-family: arial;">Phase 2 is about funding the next round of research that builds off the three themes we have established with the PhD projects. The key thing we have now that will hopefully enable good chances at funding is access to the club and its people and resources; that's the kind of thing that looks good in grants these days. I'm going to be fairly selective at this stage, and focus on projects coming from the core team I have established, at least in the first place. My reason is not to be evil and keep all this to myself; Phase 2 needs to enact the overall strategic goal of creating an Ecological Learning Lab, in which the research is all aligned with itself and with the broader club culture. I need to make sure I prioritise projects that fit this strategic goal. </span></li><li><span style="font-family: arial;">Phase 3 is when I want to open things up some more. 
Once we have a literal place in the club that is active in research, integrated into the club's life, and sufficiently resourced, I want us to become a place where people can come to us with ideas for research that need access to what we have to get funded. I want this project to become a hub for a wide community of ecologically minded researchers and practitioners.</span></li></ol><h2 style="text-align: left;"><span style="font-family: arial;">Summary</span></h2></div><div><span style="font-family: arial;">I'm so excited about this project, I can hardly even. The positive energy and wild enthusiasm from everyone who's involved so far has been a blast of fresh air, and the fact that the club (in the form of key people such as Mark Jarvis) is deeply interested in doing this right and fully on board with the ecological mindset is just amazing. I also want to really thank my internal research support resources, my Director of Research and PsyCen - they've been generous with support and money and are a critical part of why this is working. We're going to do some good things!</span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">I will admit I'm nervous as hell though. This is the biggest thing I've ever tried to manage, and although it's something I really want to do it makes me sweat a bit. I'm doing a course on research team management (thanks Leeds Beckett staff development!), I'm working with really smart and committed people, and I have a lot of clear ideas. But the scale of this is intimidating, to be honest, and I don't think that feeling is going anywhere soon. </span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">Much more to come! Please stay tuned to the blog and <a href="https://twitter.com/PsychScientists" target="_blank">my Twitter feed</a> for more, and setting up a proper social media presence for the project is on my to-do list. 
If you are interested in being involved in any way, <a href="mailto:a.d.wilson@leedsbeckett.ac.uk">please feel free to drop me a line</a>. I'm always happy to chat and find ways to bring people into the fold, although that may not be anything immediate. </span></div>Andrewhttp://www.blogger.com/profile/16732977871048876430noreply@blogger.com0tag:blogger.com,1999:blog-9192597712746432631.post-13237628060176884962021-05-20T09:33:00.003+01:002021-05-20T09:35:55.252+01:00Structured Flow on Neural Manifolds (Jirsa et al, 2019)<p><span style="font-family: arial;">As I try to develop a vocabulary for an ecological neuroscience, I am looking for two things. First, I'm looking for help from existing methods to help identify real neural parts and processes; so far I've ruled the FEP out for that. Second, I'm looking for an existing dynamical framing to help express whatever ecological psychology figures out about the brain. The jury is still out on whether the FEP is that framing; another option is a development of Kelso's coordination dynamics that invokes <i>structured flow on manifolds</i>. (This paper by <a href="https://osf.io/pszgx/download" target="_blank">Jirsa, McIntosh & Huys</a> was a contribution to the special issue on <a href="https://www.tandfonline.com/toc/heco20/31/3" target="_blank">Ecological Neuroscience</a>). </span></p><p><span style="font-family: arial;">As I review the paper, I am going to be trying to figure out if this mathematical framing is going to help. It's not going to be a guide to real neural parts, but it might be the right way to formally describe the real parts we identify by other means. </span></p><p><span><span style="font-family: arial;"></span></span></p><a name='more'></a><p></p><p><span style="font-family: arial;">The first thing that Jirsa et al note is that your theory of brains simply must be grounded in the right description of behaviour. 
<a href="http://psychsciencenotes.blogspot.com/2021/05/mechanistic-models-of-brain.html" target="_blank">I agree</a>, as did all the contributors to the special issue, and this has become a fairly mainstream idea now (see the important paper by <a href="https://www.sciencedirect.com/science/article/pii/S0896627316310406" target="_blank">Krakauer et al, 2017</a>). They lean heavily into the coordination dynamics of Kelso, Haken, and others as being the right characterisation of the behavioural scale (there are some issues here, specifically <a href="http://psychsciencenotes.blogspot.com/2016/06/do-dynamic-models-explain-mechanismweek.html" target="_blank">whether these dynamical models can explain</a>, but Jirsa et al do a good job of applying the work carefully). </span></p><p><span style="font-family: arial;">Broadly, they work carefully towards a formal language for describing neural states that is intimately connected to the dynamics of the behavioural scale. Where they end up also entails taking a task-specific approach, rather than the more broad-strokes Kelso dynamical modelling approach, and they decide that this should be a feature and not a bug. Again, I agree - task specificity is a key move. </span></p><h2 style="text-align: left;"><span style="font-family: arial;">Structured Flows on Manifolds</span></h2><div><span style="font-family: arial;">Behavioural systems are very, very high dimensional; we have redundant degrees of freedom at pretty much every scale and this issue is even more pronounced at the neural scale. The task-specific analysis says that we need to identify a way to take this system and temporarily softly-assemble it into a much, much lower dimensional system that can successfully operate in the current task and that can also be controlled effectively. 
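The soft-assembly idea can be made concrete with a toy fast-slow system. This is my own sketch, not a model from the paper, and the manifold y = x**2 is an arbitrary choice: the fast dynamics collapse a two-dimensional state onto a one-dimensional manifold, leaving a slow flow that carries the state along that manifold towards an attractor.

```python
# Toy fast-slow system (an illustrative sketch, not Jirsa et al's model).
# Fast dynamics (timescale eps) collapse the state onto the manifold
# y = x**2; slow dynamics (timescale 1) then carry it along the manifold.

def step(x, y, dt=0.001, eps=0.01):
    """One Euler step of the coupled fast-slow system."""
    dx = -x                   # slow drift along the manifold, toward x = 0
    dy = (x**2 - y) / eps     # fast relaxation onto the manifold y = x**2
    return x + dt * dx, y + dt * dy

def simulate(x0, y0, steps, dt=0.001):
    x, y = x0, y0
    for _ in range(steps):
        x, y = step(x, y, dt)
    return x, y

# Start well off the manifold; after a short transient the state sits
# essentially on the manifold, while the slow variable has only drifted
# a little toward its attractor.
x_fast, y_fast = simulate(1.0, 5.0, steps=200)   # 0.2 time units
on_manifold_error = abs(y_fast - x_fast**2)      # small
slow_progress = 1.0 - x_fast                     # still modest
```

The point of the sketch is that once the transient dies out, the system's effective state is one-dimensional (its position on the manifold): that is the sense in which a very high-dimensional system can be temporarily assembled into a much lower-dimensional, controllable one.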
</span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">(This should all sound familiar: it is the essence of Bingham's task-specific devices, and it is the driving principle behind all the motor abundance analysis methods such as <a href="http://psychsciencenotes.blogspot.co.uk/2016/09/uncontrolled-manifold-analysis.html" target="_blank">Uncontrolled Manifold Analysis</a>, <a href="http://psychsciencenotes.blogspot.co.uk/2016/10/nonlinear-covariation-analysis-muller.html" target="_blank">TNC analysis</a>, and <a href="http://psychsciencenotes.blogspot.com/2016/10/optimal-feedback-control-and-its.html" target="_blank">optimal feedback control</a> theory. I don't think this coincidence is an accident, and I think that joining the dots across multiple scales will rely on this kind of maths - this is why SFM is interesting to me!)</span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">In this formalism, dimension reduction to a manifold gives you <i>flow</i> at two time scales. The slow time scale is the flow field that defines all the possible ways behaviour can unfold across the manifold. The fast time scale is the flow field that defines attractor states that govern particular instances of behaviour allowed on the manifold. </span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">This is where task-specificity becomes key - it's not possible to define a manifold that defines all possible behaviours the organism can ever do, but it is possible to define a manifold that defines all the possible ways a specific task might unfold. Again, this is the key to making UCM etc work! A second implication is that these manifolds must be transient; they must be created and annihilated and recreated as the task demands swing in and out and back in. 
This is, I suspect, going to connect nicely with Anderson's <a href="http://psychsciencenotes.blogspot.com/2019/10/show-me-talon-thoughts-on-raja-anderson.html" target="_blank">Transiently Assembled Local Neural Subsystems (TALoNS)</a>; more on that later. </span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">Some other nice features come with this formalism that suit an ecological neuroscience. First, there are natural ways to work information into the formalism as the driving force for the fast dynamics. Second, redundancy/degeneracy are baked in; you can get topologically equivalent slow timescale flow (task specific device) when, say, the same task requires coupling to visual vs auditory information sources. This accords nicely with some recent empirical work I'm doing on task dynamics, which I will get into later. </span></div><h2 style="text-align: left;"><span style="font-family: arial;">Summary</span></h2><div><span style="font-family: arial;">Jirsa et al wrote a really good paper. They did an amazing job keeping the math doing the modelling distinct from the system being modelled (for example, they explicitly note that this formulation is not the only option, but then provide reasons to prefer it given the needs of an ecological neuroscience. Not doing this has been a big criticism of the FEP.). They worked very hard to develop their formalism with respect to the ecological behavioural scale; in other words, they took their own challenge to heart and grounded their neuroscience in behaviour. 
</span></div>Andrewhttp://www.blogger.com/profile/16732977871048876430noreply@blogger.com3tag:blogger.com,1999:blog-9192597712746432631.post-71809325389782433192021-05-17T14:11:00.001+01:002021-05-17T14:11:13.665+01:00Do Markov Blankets Give Us Real Neural Parts?<p><span style="font-family: arial;">In my <a href="http://psychsciencenotes.blogspot.com/2021/05/mechanistic-models-of-brain.html" target="_blank">last post</a>, I laid out what I think the rules are for developing a mechanistic model of the neural scale contribution to behaviour. I ended there with a question: what counts as a real neural part? How can we successfully decompose neural activity supporting a given perception-action loop into parts? </span></p><p><span style="font-family: arial;">In this post, I want to discuss one potential option: the hypothesis that Markov blankets, a key feature of the free-energy principle approach to neuroscience, can identify and pick out real neural parts. I'll discuss some recent ecological critiques of Markov blankets and some potential answers to the challenges.<span></span></span></p><a name='more'></a><span style="font-family: arial;"><span></span></span><h2><span><span style="font-family: arial;">Markov Blankets</span></span></h2><div><span><span style="font-family: arial;">Markov blankets are a formalism of the boundary of a state of a system. The boundary for a given state S comprises states that directly affect S, states directly affected by S, and states directly affecting states directly affected by S. This boundary effectively picks out what in a system S is part of, and what it is not part of. </span></span></div><div><span><span style="font-family: arial;"><br /></span></span></div><div><span><span style="font-family: arial;">In practice, this blanket is identified from an adjacency matrix, in which all the parts of the system are listed down the columns and across the rows. 
Each cell represents the degree to which these parts covary in their activity; a 0 in the cell means those parts are statistically independent of each other. </span></span></div><div><span><span style="font-family: arial;"><br /></span></span></div><div><span style="font-family: arial;">So Markov blankets seem to be about parts of systems; about finding which pieces work <i>as</i> pieces and which work separately. </span></div><h2 style="text-align: left;"><span style="font-family: arial;">The Emperor's New Markov Blanket Trick</span></h2><div style="text-align: left;"><span style="font-family: arial;">Two recent papers have raised concerns about the relationship between Markov blankets and real parts. Both hinge on an issue raised by Mel Andrews in their paper '<a href="http://philsci-archive.pitt.edu/18974/" target="_blank">The Math is Not the Territory</a>'. Andrews sensibly notes that there is the FEP model, and there is the system being modelled by the FEP. It's important to identify whether a feature of an FEP application is a feature of the model, or a feature of the system being modelled, and conflating these is an error. </span></div><div style="text-align: left;"><span style="font-family: arial;"><br /></span></div><div style="text-align: left;"><span style="font-family: arial;">The first critique comes from <a href="https://t.co/or0w51rfvM?amp=1" target="_blank">Bruineberg et al (2021)</a>. Like Andrews, they describe two different uses of the Markov blanket concept. The first (which they call Pearl blankets, after the person who developed them) treats them as features of the model. Here they are a reliable and effective tool that do what they say on the box, but they do not license much useful inference about the structure of the actual system. 
They then note a second use (which they call Friston blankets) in which the Pearl blanket identified by the model is promoted to a feature of the system being modelled, so as to allow more metaphysically interesting talk. Like Andrews, they cite this as an error, but unlike Andrews, who was targeting critics of the FEP, Bruineberg et al say that FEP proponents are the ones making this error in order to make their FEP analyses more ambitious in scope.</span></div><div style="text-align: left;"><span style="font-family: arial;"><br /></span></div><div style="text-align: left;"><span style="font-family: arial;">The second critique comes from <a href="https://t.co/8577XeLyhC?amp=1" target="_blank">Raja et al (2021)</a>, who claim that Markov blankets are a trick that lives in the modelling strategy. They note that the Markov blanket formalism does not work on every system, that the formalism by itself does not serve as a guide to discovery of real system boundaries, and that there are important features of cognitive systems that fall outside the boundaries of what Markov blankets can do. Given these limitations, they wonder, why use Markov blankets to pick out system boundaries? They then note that the only real reason is that this mathematical formalism allows one to treat any FEP system as a Bayesian inference system, which the system being modelled may or may not be. </span></div><div style="text-align: left;"><span style="font-family: arial;"><br /></span></div><div style="text-align: left;"><span style="font-family: arial;">These challenges place Markov blankets as features of the model, and not of the system being modelled, and point to examples in the FEP literature where this distinction is conflated in problematic ways. </span></div><h2 style="text-align: left;"><span style="font-family: arial;">Predicting Markov Blankets</span></h2><div><span style="font-family: arial;">One defence of the FEP to these challenges comes from some recent work. 
The first paper (<a href="https://www.sciencedirect.com/science/article/pii/S0149763421000579" target="_blank">Hipólito et al, 2021</a>) is a theory paper that formally predicts you should find Markov blankets in fMRI data. The second (<a href="https://direct.mit.edu/netn/article/5/1/211/97535/Parcels-and-particles-Markov-blankets-in-the-brain" target="_blank">Friston et al, 2021</a>) is an empirical paper that does, indeed, find Markov blankets in fMRI data. <a href="https://twitter.com/mjdramstead/status/1393178343747506180?s=20" target="_blank">Ramstead</a> has suggested this counts as empirical evidence that Pearl and Friston blankets are, in effect, the same thing.</span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">However, these data do not resolve the issue. The theory paper does not predict specific Markov blankets, just that there should be some in an FEP analysis of the fMRI data. The empirical paper likewise does not find specific Markov blankets in the fMRI data, just Markov blankets. That paper does not show that the blankets they found correspond to any actual neural structures (although <a href="https://www.sciencedirect.com/science/article/pii/S0149763420306643" target="_blank">a computational paper</a> suggests they might be able to do so). Given the deep mathematical relationship between FEP and fMRI analyses (Friston invented both, remember), and given the reliance on computational proof-of-concept, it seems clear that these are Pearl blankets (features of the model) and there is, as yet, no evidence that they are also Friston blankets (features of the system being modelled). Worse, I think Ramstead's argument that this refutes the above critiques just walks right into the problems they are pointing out. 
</span></div><h2 style="text-align: left;"><span style="font-family: arial;">Do Markov Blankets Give Us Real Neural Parts?</span></h2><div><span style="font-family: arial;">Based on the evidence to date, and given real, as yet unanswered concerns about whether the broad inferences drawn from FEP analyses about real systems are legitimate, it seems clear to me that Markov blankets do not identify real parts and processes of the system being modelled. They are features of the model, instead; interesting tools, but not explanatory. </span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;"><a href="http://psychsciencenotes.blogspot.com/2021/05/mechanistic-models-of-brain.html" target="_blank">Based on my previous post</a>, I have another straightforward reason to doubt that Markov blankets give us real parts. In that post, I argued that the grounding scale for explanations of behaviour is the scale of the organism-environment system, specifically the scale of the information that allows these two to be coupled into a single system. The FEP was developed entirely separately from considerations of the behavioural scale, and instead is about the neural scale (for the study of behaviour, anyway). But work done without reference to the grounding scale will <i>never</i> reveal the real parts of the system! Effectively, the FEP is currently like a scheme for decomposing an alarm clock with a hammer, instead of a screwdriver. </span></div><div><span style="font-family: arial;">Can we save the FEP? Can we start over, use the ecological analysis of a given perception-action loop, and then formalise the neural scale contribution using the formalism of the FEP? 
This was essentially the recommendation of earlier Bruineberg work (blogged <a href="http://psychsciencenotes.blogspot.com/2016/11/free-energy-how-fck-does-that-work.html" target="_blank">here</a> and <a href="http://psychsciencenotes.blogspot.com/2019/09/can-free-energy-principle-be-made.html" target="_blank">here</a>). They were looking at whether the ecological or inferential framing of the FEP was best. The fact that the FEP licenses both interpretations is evidence that it does not come with an ontology (Raja et al note this too), and Bruineberg and colleagues defend the argument that the ecological ontology is the best one for the FEP to use. At the time, I noted that it was clear the FEP needed ecological psychology; it remains unclear whether the ecological approach needs the FEP. </span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">Overall, I remain unconvinced that the FEP will be the best way for a mechanistic ecological neuroscience to go, but I remain open to the idea that it is an option. Work to be done!</span></div><p></p>Andrewhttp://www.blogger.com/profile/16732977871048876430noreply@blogger.com0tag:blogger.com,1999:blog-9192597712746432631.post-55534500839344917322021-05-17T10:40:00.005+01:002021-05-17T10:40:54.148+01:00Mechanistic Models of the Brain<p><span style="font-family: arial;">I'm getting increasingly interested in neuroscience, and how to make it ecological. I also think that the ecological approach is capable of supporting mechanistic explanatory models of behaviour and is the correct scale* at which to ground these models. This means that my current plan is to find a way to add neuroscience as a lower scale part of a model grounded at the scale of the organism-environment system. </span></p><p><span style="font-family: arial;">There's a lot going on in that sentence, though, so I want to unpack it a bit to lay out the rules
and the things I currently don't know. </span></p><p><span style="font-family: arial;">(*NB I am using <i>scale</i> rather than <i>level</i> throughout because <a href="https://twitter.com/APotochnik/status/1392136522036662275?s=20" target="_blank">the concept of a level is complicated</a> and currently, <a href="https://link.springer.com/article/10.1007/s10539-014-9461-z" target="_blank">I am convinced that scale is a better term</a>. The argument continues, however). </span></p><span><span style="font-family: arial;"><a name='more'></a></span></span><h2 style="text-align: left;"><span style="font-family: arial;">Mechanistic Models</span></h2><div><span style="font-family: arial;">There are several kinds of model, and these explain to varying degrees; <a href="http://psychsciencenotes.blogspot.com/2016/06/mechanismweek-week-of-posts-commencing.html" target="_blank">I've mapped all this out before</a>. A mechanistic model is one that contains terms that explicitly represent real parts and processes of the system being modelled. Typical cognitive models don't have these; they trade in functional descriptions of system capacities, like 'working memory' or 'the central executive'. <a href="https://cognitioninaction.files.wordpress.com/2019/10/golonka-wilson-2019-ecological-mechanisms-in-cognitive-science.pdf" target="_blank">Sabrina and I have argued that the ecological approach can do mechanistic models</a>, because we can model real parts and processes such as affordances, limb dynamics, and information variables - things that really exist. Because mechanistic models are 'how-actually' explanations rather than 'how-possibly' descriptions, we suggested ecological psychology should do them because we could. </span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">This puts us at odds with the rest of ecological psychology, but only because of a couple of errors we haven't managed to correct yet. 
First, we have a tradition of explicitly rejecting mechanistic explanations (see <a href="http://psychsciencenotes.blogspot.com/2020/10/lecture-5-mechanistic-hypothesis-turvey.html" target="_blank">Turvey, 2019, Lecture 5</a>), but this is actually a rejection of the Cartesian mechanistic ontology. The neo-mechanist literature is not about ontology, but epistemology, and has no commitment to Descartes. Second, there is the <a href="http://psychsciencenotes.blogspot.co.uk/2016/06/do-dynamic-models-explain-mechanismweek.html" target="_blank">Chemero and Silberstein argument</a> that because we study complex nonlinear dynamical systems, we cannot successfully decompose the system into parts we can readily localise; instead, they suggest we treat the dynamical models as explanatory. There is now quite a literature on how and why dynamical systems models are not explanatory, and in our paper Sabrina and I provided the example of Bingham's coordination research and model as a proof-of-concept that you can, in fact, decompose and localise parts of nonlinear dynamical systems. So the net result for me right now is that we can be mechanistic in our modelling, and because of all the benefits, we should be mechanistic in our modelling. </span></div><h2 style="text-align: left;"><span style="font-family: arial;">Mechanistic Research</span></h2><div><span style="font-family: arial;">Mechanistic models need real parts and processes to model. This means you need to do some research to figure out what the real parts and processes of the system to be modelled are. To do this, you need to take the system, <i>decompose </i>it into hypothesised parts according to some scheme, and test to see if these are, in fact, parts of the system. 
</span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">System decomposition is a heuristic process, and the only real rule is 'keep trying to decompose the system into parts until you find a scheme that yields real parts'. I can decompose an alarm clock with a hammer or a screwdriver. The former will yield parts, but not parts that account for the functioning of the intact clock. The latter is more likely to yield parts that function in ways that do feature in the intact clock. This is the empirical measure of which decomposition scheme is best given the phenomena to be accounted for. I call this scheme the <i>grounding scale</i> for the model.</span></div><h2 style="text-align: left;"><span style="font-family: arial;">The Grounding Scale</span></h2><div><span style="font-family: arial;">As I noted, there are many ways to decompose a system into parts, but only one way that produces parts relevant to the phenomena to be explained. Bechtel has a famous example about explaining fermentation. Initially, people tried to explain the phenomena of fermentation in terms of the atoms and molecules involved; later, they tried at a higher scale, invoking fermentation-like processes. Both of these attempts failed to find real parts and processes relevant to fermentation. So someone invented biochemistry, which trades in large molecules with properties that emerge at that scale, and used those to successfully explain fermentation. </span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">This story has two valuable lessons. While the biochemical parts are indeed made of the smaller atoms and molecules, the biochemical scale was where fermentation-relevant parts and processes live. The lower scale of atoms was structuring but also being structured by the demands of the biochemical scale. 
The lower scale parts were doing their work with reference to the demands of the higher scale; that higher scale explains why the lower scale parts were doing what they were doing. This is what makes the biochemical scale the grounding scale, and it's also why finding the grounding scale is so important. <b>Mechanistic models are phenomena-specific, and they are not intrinsically reductionist; in fact, they are typically multi-scale, but with one scale (rarely if ever the lowest scale) privileged as the grounding scale. </b></span></div><h2 style="text-align: left;"><b><span style="font-family: arial;">The Grounding Scale for Mechanistic Models of Behaviour</span></b></h2><div><span style="font-family: arial;">The ecological approach comes with a very specific hypothesis about this, which is as follows.</span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">The world is a set of material things with dynamical properties. One subset of material things with dynamical properties is a system we call an organism. In order to continue as an identifiable subset, that organism has to couple itself to other subsets of material things with dynamical properties in its environment. That coupling is implemented via information and the perception-action loop it enables. Behaviour is what happens when a given perception-action loop is up and running. </span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">The hypothesis is therefore that <b>the unit of behaviour is the organism-environment system, making this the grounding scale</b>; everything the organism does with its dynamical properties operating at various scales is organised with respect to the demands of this scale. Information, the coupling term, picks this out as the grounding scale. 
</span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">Brains are a subset of the material things that make up the organism and have characteristic dynamical properties. But how they organise is constrained by the organism-environment system grounding scale, so our title,</span></div><h2 style="text-align: left;"><b><span style="font-family: arial;">Mechanistic Models of the Brain</span></b></h2><div><span style="font-family: arial;">is an error; what I am actually pursuing is </span></div><h2 style="text-align: left;"><span style="font-family: arial;">Multi-Scale Mechanistic Models of Behaviour, Grounded at the Organism-Environment Scale But Now Also Including the Neural Scale</span></h2><div><span style="font-family: arial;">That was too long to fit at the top, however, and we had to do some work to get here :)</span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">This post has some big implications;</span></div><div><ol style="text-align: left;"><li><span style="font-family: arial;">The ecological approach provides a disciplined guide to identifying the grounding scale, which is how I can say</span></li><li><span style="font-family: arial;">there is no such thing as a mechanistic model of just the brain. There can only be mechanistic models of the neural scale of the mechanism implementing an organism-environment system. We are not in the reductionist business</span></li><li><span style="font-family: arial;">This is another way of articulating and defending the claim that neuroscience needs behaviour; it's not just a good idea, it's absolutely required</span></li><li><span style="font-family: arial;">Mechanistic models are task-specific, which is why a grounding scale is required - we need a way to articulate what a task is! 
</span></li></ol><div><span style="font-family: arial;">But this post also raises some important questions in what I hope is a useful way;</span></div></div><div><ol style="text-align: left;"><li><span style="font-family: arial;">Given they must come together in service of a particular perception-action loop, what sort of real parts and processes can a nervous system form? Is a network a real part? Are the elements within a given Markov blanket (in the free-energy principle model) real parts? <a href="http://psychsciencenotes.blogspot.com/2019/10/show-me-talon-thoughts-on-raja-anderson.html" target="_blank">TALoNs</a>? These are all candidates, but we can now test them with reference to a model of a behaviour at the grounding scale. </span></li><li><span style="font-family: arial;">The nervous system is the material part of the organism with dynamical properties that can <a href="http://psychsciencenotes.blogspot.com/2011/08/what-does-brain-do-pt-2-fast-response.html" target="_blank">operate at very fast (millisecond) timescales</a>, as well as other, slower timescales. Specifically, the nervous system can organise but then also reorganise itself into <i>different </i>parts and processes very quickly (cf the way your hand can organise into a gripping system but then reorganise into a punching system, for example). This means we need to accept transient, softly assembled parts and processes as real (but also again demonstrates the need for the grounding scale to understand why the nervous system is doing what it's doing). </span></li></ol><div><span style="font-family: arial;">I'll start thinking about real neural parts in upcoming posts. 
</span></div></div>Andrewhttp://www.blogger.com/profile/16732977871048876430noreply@blogger.com1tag:blogger.com,1999:blog-9192597712746432631.post-87753785974012783822021-02-02T15:04:00.000+00:002021-02-02T15:04:20.174+00:00The Constraints-Based Approach to Teaching in the Classroom<p><b style="font-family: arial;">If you read this and think '<i>hey, this sounds like something they do in </i>[insert teaching method here]', please let me know. I've had some chats about the Montessori method, and there's certainly overlap there. But I'm on the hunt for a literature I can connect to, and any help would be appreciated.</b></p><p><span style="font-family: arial;">I've been thinking a lot about education lately. I'm home-schooling the kids, I've been chatting a lot with coaches about ecological approaches to their teaching (most publicly <a href="https://www.youtube.com/watch?v=y-SzVXcCXHQ" target="_blank">here</a> and <a href="https://www.youtube.com/watch?v=zfHJVVTwtTU" target="_blank">here</a>), and I'm reading Tim Ingold's <i><a href="https://www.routledge.com/Anthropology-andas-Education/Ingold/p/book/9780415786553" target="_blank">Anthropology And/As Education</a></i>. I'm also wondering why the demonstrated success of the ecological dynamics approach in sports pedagogy has had zero consequences for education more broadly. </span></p><p><span style="font-family: arial;">I think a couple of things. I think the reason why ecological dynamics hasn't spilled over is that we live in a dualist world where <b>knowledge</b> and <b>physical skills</b> are two distinct domains (think about how physical education is treated in schools). I also think that because the ecological approach doesn't endorse that dualism, there is simply no reason for classroom education to work completely differently from physical education. 
And finally, I think this might be really, really important.</span></p><p><span style="font-family: arial;">I used to teach a module called Foundation Research Methods, and after a while I finally realised that I was teaching it in a constraints-based, ecological dynamics style. (This explains why a lot of my colleagues were genuinely confused by what I was doing at times, I think!). The module developed over the years, and the last year I taught it we solved our attendance problem <i>and the students crushed the exam</i>. </span></p><p><span style="font-family: arial;">I want to walk through what I did, and reflect on how it embodied an ecological approach. This is <i>not</i> me saying this is how all classes should be taught. This is just me laying out what a constraints-based approach looked like in the class, what I thought worked, and what I would like to have done next. </span><span style="font-family: arial;"> </span></p><h3><span style="font-family: arial;"><a name='more'></a>The Class</span></h3><div><span style="font-family: arial;">The module was called Foundation Research Methods. It was a first semester, first year module for Psychology undergraduates, and it led them through the basics of study design and data analysis (from means and standard deviations to one-way ANOVA). My goal was to take 180-200 students with a mix of experience in statistical thinking and get them all to the point where they could all be on the same page and build on their skills in the follow-up Intermediate and Advanced Research Methods modules. </span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">Like most psychology statistics classes, this had originally been taught with hour long lectures for the whole group, separate workshops to practice SPSS, and practical sessions on study design and research ethics. I actually inherited the design I'll tell you about from Sabrina, who rebuilt the module before a maternity leave. 
I took the module over and ran it for 5 years. </span></div><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj9raKO4AcBfXmaFNL_rYjE7Yhyphenhyphenwhs5zVCvu5lgMZtSeb_boNpey56Rf39LO8CYtRF_U315OJudmczOSTg3OmvIsZZiEomY6v20Jnq8cY2DU8Q9rPYj8DUz-AHFiTIl5db6eXi54HlIkXM/s2048/Weekly+Schedule+2018.JPG" imageanchor="1" style="margin-left: auto; margin-right: auto;"><span style="font-family: arial;"><img border="0" data-original-height="1107" data-original-width="2048" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj9raKO4AcBfXmaFNL_rYjE7Yhyphenhyphenwhs5zVCvu5lgMZtSeb_boNpey56Rf39LO8CYtRF_U315OJudmczOSTg3OmvIsZZiEomY6v20Jnq8cY2DU8Q9rPYj8DUz-AHFiTIl5db6eXi54HlIkXM/s320/Weekly+Schedule+2018.JPG" width="320" /></span></a></td></tr><tr><td class="tr-caption" style="text-align: center;"><span style="font-family: arial;">Figure 1. The organisation of FRM in 2018</span></td></tr></tbody></table><div><span style="font-family: arial;">The new design worked as follows, over 12 weeks:</span></div><div><ul style="text-align: left;"><li><span style="font-family: arial;">Week 1 I gave them all an introductory lecture where I explained the module and the way it would run</span></li><li><span style="font-family: arial;">Every week there were 5 or 6 identical 90 minute sessions for 30 students per session. I usually taught 3 or 4 of these, with a GTA running the remainder. Sessions took place in a computer lab.</span></li><li><span style="font-family: arial;">The first 30 minutes or so was taken up by me giving a mini-lecture in which I set the scene for the day's activities. I'd introduce the topic, and provide information and context about why we were doing this and how it fit into the developing module. 
</span></li><li><span style="font-family: arial;">The next 60 minutes had the students work on a task based on the day's topic that had them using SPSS (and eventually JASP as well) and data sets. </span></li><li><span style="font-family: arial;">There were also 4 weeks where there were multiple small group practical sessions, where I had them draft and mark components of lab reports and engage with research ethics basics. </span></li><li><span style="font-family: arial;">Assessment varied over the years; there was generally an MCQ to assess basic knowledge, there was sometimes a lab report based on an experiment they had taken part in, and sometimes there was a reflective report where they took part in studies for credit and wrote about what the study design was like to experience as a participant. </span></li></ul><h3 style="text-align: left;"><span style="font-family: arial;">General Principles of Delivery</span></h3></div><div><span style="font-family: arial;">Each week built on the previous weeks. Week 1 covered measures of central tendency, then Week 2 did that again and added standard deviations, then Week 3 did those plus various kinds of variables, and so on. The workshop activities embodied this structure too - every week the activity would have them do everything, in order, that they had already covered, and then added that week's new thing. </span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">The lectures did not ever tell them how to, say, run a t-test. Instead, I would explain what t-tests were for, the nature of the question that they could answer, and how they fit into the general statistical framework I was building over the module. The workshop activities also never told them how to run the t-test. 
Instead, activities included finding where in the textbook (Andy Field's Discovering Statistics Using SPSS) they could find step-by-step instructions, and linking to YouTube walkthroughs on running the test in SPSS. </span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">A GTA and I would wander the lab, answering questions and keeping the students on the right path. Again, we would never tell them how to run SPSS. Instead, we would ask them questions such as 'did you find the page in the textbook yet?' and if not, help them with that. If they had questions about vocabulary that was just blocking their progress we'd help directly, but generally our job was to keep them constrained to working within the range of the day's activity. </span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">I also actively encouraged the students to work together in groups. If one had solved a problem, I would often direct a student with that question to the other student. I'd encourage them to share resources, on the premise that success in the module was not a zero-sum game. </span></div><h3 style="text-align: left;"><span style="font-family: arial;">How It Went</span></h3><div><span style="font-family: arial;">The students were always a bit nervous at the start, but this was more 'new psychology undergrads scared of statistics' than 'confused by the module format'. I had the advantage that I was one of their first modules, so I got to set the rules and didn't have to fight against experience with other university-level modules. </span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">Every year, something didn't work, but every year, I fixed that problem and it never came back. These problems were mostly student experience problems, rather than content problems. One example was that I saw their progress, but they couldn't. 
Every week they experienced running into something they didn't know how to do, and as a result every week it felt like they weren't getting anywhere. From my perspective, though, I could see their progress - every week the class would be quiet as they cracked on and did all the parts of the activity that was material from previous weeks, and they would only start talking and asking questions when they hit the new thing. After I realised what their first-person experience was, every year following I made a point to explicitly draw their attention a few times to their progress, and that problem never repeated. </span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">Most years, the MCQ performed just fine; a nice normally distributed grade curve centred on 58-62% (in the UK this is the boundary between a 2:2 and a 2:1, so it's a sensible mean). My final year teaching this, though, we solved our attendance problem. The course team as a whole had been working on this for a while, and nothing had worked until the year we took an attendance register for all small teaching sessions. I literally called out names and marked people present or absent, like school. Because FRM was small group teaching, I got to do this for all the sessions, and attendance stayed good at around 80%+ for the whole semester. And that year, they <i>destroyed</i> my exam. Only two fails, and a huge proportion of great scores. 
</span></div><div><span style="font-family: arial;"><br /></span></div><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEisa0i0xAFtaQeWHnhqrWMh-w0J-wdcE4rUjjgQnp3QHdp7ERk2scHUyC1zNRcJbFZwN9F6FpdpLl0URgU4_AOKlHjf3qWxzSubWBbBMh6zOcde3LtTVFaC0ce5qRHkBLgFsoKG5v4uS8E/s976/FRMGrades.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><span style="font-family: arial;"><img border="0" data-original-height="637" data-original-width="976" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEisa0i0xAFtaQeWHnhqrWMh-w0J-wdcE4rUjjgQnp3QHdp7ERk2scHUyC1zNRcJbFZwN9F6FpdpLl0URgU4_AOKlHjf3qWxzSubWBbBMh6zOcde3LtTVFaC0ce5qRHkBLgFsoKG5v4uS8E/s320/FRMGrades.jpg" width="320" /></span></a></td></tr><tr><td class="tr-caption" style="text-align: center;"><span style="font-family: arial;">Figure 2. The insanely skewed grade distribution for FRM</span></td></tr></tbody></table><h3 style="text-align: left;"><span style="font-family: arial;">Reflections on Teaching This Way</span></h3><div><span style="font-family: arial;">This is a constraints-based approach to teaching students about a topic most people would think was about knowledge. I didn't teach a lot of knowledge explicitly; instead, every week, I tried to create a constrained space within which the students could explore and develop their own knowledge. A lot of that was more knowing-how rather than knowing-that, as well - knowing how to look in the book, knowing how to narrow that search according to the design, etc. You know, the way we senior academics actually do our statistics. 
</span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">They ended up with quite a bit of knowledge, though - at least, they were able to crush my exam, which asked them about all the content I wanted them to come away from the module knowing. They had to show up every week, though, and although I don't have this data I'd expect grades to correlate with attendance pretty strongly, because they always do. </span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">Attendance was crucial for another reason - to ensure everyone had people to work with. Before this cohort, some sessions had been full and those ran well, while others had fewer than 10 people in them and they ran much less well. There just weren't enough people to form groups with a range of skills, and those sessions had much less 'groupiness' as a result, which hurt even the good students in those sessions. </span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">Constraints are hard. Instead of locking down what people should know at the end of the session, you are trying to create a space that encourages people to end up there more by themselves. As any constraints-led coach will tell you, sometimes people self-organise in unexpected ways and you have to reflect on why, redesign the constraints, and iterate. Coaches can often rejig the next training session; I typically didn't find out what hadn't worked until the end. So, as I mentioned, every year there was something that unexpectedly didn't work and I had to rejig the relevant constraints, but not until the next year and the next cohort. I had to have a high tolerance for problems and have a clear idea about whether the problem was the delivery format or something else, so that I didn't panic and abandon ship. 
I had the advantage of theoretical reasons why I was doing what I was doing, but I know for a fact some of my non-ecological colleagues were pretty mystified sometimes. </span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">Relatedly, a real weakness of this module was the fact it lived in an otherwise non-ecological educational context. I always dreamed of completely taking over the Research Methods teaching and doing the whole thing more coherently. </span><span style="font-family: arial;">Let me just reemphasise, I don't think any of my colleagues are doing a bad job - they all care too much for that to be true. I'm just noting that my approach was never built on or developed, and I'd love to know what that looks like.</span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">I would have loved to adapt the assessment once I'd shown the MCQ didn't work anymore. I never got the chance, and it would have been hard to do so because of how much inertia there is in the UK system. But I would have given them data sets, and asked them to figure out the design, and therefore which analysis to apply, and then run that analysis and interpret it correctly. I was trying to teach them how to engage with data using statistics, and the MCQ was not assessing that. </span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">One key difference with ecological dynamics is that I spent most of my time creating and maintaining constraints with <b>language</b>, rather than the affordances of a physical space. I spent a lot of time considering what I would and would not say, and trying different ways of verbally constraining enough but not too much. The TAs who taught the course all generally liked the idea, but we all found it hard work not to over-instruct. 
There is not yet much of a theory of verbally constraining, and there needs to be - the role of verbal instruction is a key point of argument between ecological and non-ecological coaches. </span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">Related to this, the class itself was not especially embodied. Embodiment in education tends to be of the grounded cognition variety, just focusing on activities that involve more of the body. This may or may not matter. However, there's reason to go looking for ways of making the activities less screen-based, supplementing the necessary statistics-software experience with other ways of solving the problems. Even if it's just fun, that would be a good thing. </span></div><h3 style="text-align: left;"><span style="font-family: arial;">Final Thoughts</span></h3><div><span style="font-family: arial;">This style of teaching came naturally to me, and it made complete sense to teach this way, even for what seems like a knowledge-heavy module like research methods. I've thought about how to do this sort of thing in the modules I've taught on cognitive psychology, with varying levels of success. I had a couple of runs of a first-year undergraduate module which was coming together but needed more time, and I still teach on the MSc module, where my lectures are much more about how to approach a problem like memory the way a cognitive psychologist would, rather than about facts about memory. This works pretty well there because the MSc students are generally older and more motivated, so attendance and engagement are high. </span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">The main lesson I want to draw here is that <b>classroom teaching does not ever have to be about the transmission of knowledge. 
It can always be about exploring a constrained space in which skill and knowledge emerge from the nature of the exploration.</b> In exactly the same way that ecological dynamics completely changes the job of a coach, this changes the role of the lecturer. I personally find the shift to be positive, up to and including a much less authoritarian role for the lecturer. I've become absolutely fascinated by the classroom implications of my experience and what I know about ecological dynamics, and I'm really keen to engage with this topic more fully. </span></div><p></p>Andrewhttp://www.blogger.com/profile/16732977871048876430noreply@blogger.com2tag:blogger.com,1999:blog-9192597712746432631.post-75841568625747444822020-11-24T15:14:00.007+00:002020-11-24T15:17:12.759+00:00An (Draft) Ecological Approach to Hallucinating<p><span style="font-family: arial;">Sabrina and I are planning our next papers, and in typical style she's been thinking about how to tackle a hard problem - this time, hallucinations. They are one of those go-to topics for representational people, because hallucinations by definition are not based in the detection of perceptual information. They are a kind of perceptual experience, however, and so seem to be a good candidate for identifying how perceptual experience is constructed internally. </span></p><p><span style="font-family: arial;">We've never let a little thing like a topic being hard stop us before, so it looks like this is next on our list. The goal is to lay out an ecological analysis and see where we end up. We are going to build on the work we did in the <a href="https://cognitioninaction.files.wordpress.com/2019/10/7.-golonka-wilson-ecological-representations.pdf" target="_blank">Ecological Representations</a> paper, in which we considered how to understand (at least some) neural activity as the selection of consequent neural actions (pg 243 and on). 
This is the first of a few papers we have in mind where we apply our ecological analyses as worked examples to interesting topics (<a href="http://psychsciencenotes.blogspot.com/2020/11/verbal-instruction-in-sports-coaching.html">verbal instruction in coaching</a> is on my mind too, as are <a href="http://psychsciencenotes.blogspot.com/search/label/The%20Perceptual%20Life%20of%20Cells">cells making blood vessels</a>). </span></p><p><span style="font-family: arial;">In this post, I'm going to do my usual thinking-out-loud about my notes from our first chat; all conclusions are works in progress! </span><span style="font-family: arial;">At this point, I am just assembling the resources our ecological approach provides us, and lining them up in their proper places so we can use them rigorously.</span></p><span><span style="font-family: arial;"><a name='more'></a></span></span><h2 style="text-align: left;"><span style="font-family: arial;">1. <a href="http://psychsciencenotes.blogspot.com/2020/11/verb-your-nouns.html" target="_blank">Verb Your Nouns</a></span></h2><div><span style="font-family: arial;">Taking an ecological approach to a topic that has already been framed representationally means starting from the beginning and reframing the topic. The first step is to <a href="http://psychsciencenotes.blogspot.com/2020/11/verb-your-nouns.html">verb your nouns</a>, because we study verbs. In this context, that means <i>people do not have hallucinations, nor are we studying hallucinations</i>. <b>People hallucinate, and we study hallucinating</b>. </span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">This immediately stops us from asking 'where is the hallucination?', which immediately gets you into representational territory. Instead, we now ask 'how does hallucinating play out?'. What brings it on? What sustains it? What disrupts it? What ends it? 
What are the contexts in which hallucinating occurs, and what are the effectivities required of a person to hallucinate in those contexts (vs not)?</span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">In terms of how hallucinating plays out, one obvious aspect is its perceptual nature. One can hallucinate visually, auditorily, or some combination. Auditory hallucinating is most common, around twice as common as visual. Other modalities are then less common, as far as I know. Smell hallucinations can be a symptom post-stroke, and opioid use can create tactile hallucinations (e.g. ants crawling on your skin; note that this has a physiological mechanism involving release of histamines so it may not count as hallucinating). Phantom limb syndrome might count as kinaesthetic hallucinating, although I don't know that anyone talks about it as such. So one of the sources of constraint on how hallucinating unfolds will be how these perceptual experiences typically play out, and thus how our nervous systems work to support that perception. This in turn is dictated by things such as the nature of the perceptual medium, how densely packed it is, etc. </span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">Importantly, however, hallucinating is not perception. Perception requires information, and by definition hallucinating is what happens when you have an experience that isn't caused by information. However, understanding information is still going to be an important part of this analysis, because the ecological analysis of the nervous system is explicitly grounded in information use. In addition, hallucinating happens in the context of normal perceiving, and so there is a critical role of that perception to address. 
So we need to take a small detour to lay out some terms that will come in handy later.</span></div><h2 style="text-align: left;"><span style="font-family: arial;">2. Get Your Information Story Right</span></h2><div><span style="font-family: arial;">We have been working within an information framework we developed over several papers (specifically, <a href="https://cognitioninaction.files.wordpress.com/2015/08/golonka-2015-laws-and-conventions-in-language-related-behaviors.pdf" target="_blank">Golonka, 2015</a>; <a href="https://cognitioninaction.files.wordpress.com/2019/10/7.-golonka-wilson-ecological-representations.pdf">Golonka & Wilson, 2019</a>). This framework proceeds as follows.</span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">All information is specific to the dynamical property that is being lawfully projected into an ambient energy array. However, organisms are not always constrained to use that information as information for that property. This is because organisms have to learn how to use information, and we learn what works. Sometimes we learn to use that information as information for something defined by a sufficiently stable convention, rather than a law. This analysis gives us information, and two classes of use for that information: law-based use of information and the convention-based use of information. </span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;"><b>The law-based use of information can support both action selection and action control</b>. For example, optical information specific to the catchability of a fly ball can be used to create the details of the actual catch, or to identify whether the ball is catchable or not. 
The evidence for the law-based use of information for action control is that the spatiotemporal structure of the information shows up in the spatiotemporal structure of the movement as it unfolds over space and time. </span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;"><b>The convention-based use of information can only support action selection</b>. For example, if I say 'pick up the red cup' to an English speaker, they can use the acoustic information in that speech to choose the red cup over other possible targets, but cannot use it to actually perform the reach-to-grasp action. </span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">Hallucinating cannot, by definition, entail the law-based use of information, because there is nothing real to create that information. This means that regardless of how realistic the hallucinating is, people will not be able to couple their actions to the dynamics of the hallucination. For example, if they are hallucinating something coming at them, they will be unable to time their evasion via time-to-contact information. </span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">Hallucinating also cannot, by definition, entail the convention-based use of information, because again, there is nothing real creating that information. However, we (Golonka & Wilson, 2019) used the possibility of convention-based use of information to ground an ecological approach to 'representation-hungry cognition' by identifying how that use might play out neurally. This is where we need to go next, to explain how a person would be able to <i>select </i>actions while hallucinating; for example, they could choose to duck an incoming object. 
The details of that duck would effectively be miming, however.</span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">(Notice here how verbing helps our analysis. To a noun researcher, the fact that someone ducked in response to a hallucination would be evidence of the apparent reality of that hallucination to the person. To a verb researcher, the emphasis on process allows us to predict that ducking while perceiving will proceed differently than ducking while hallucinating. We can measure behaviour and learn about the structure of hallucinating!) </span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">(I will also, at this point, just make a note that the information-based perception of the context in which hallucinating happens will be something we will need to get into later. What is the relationship between something you are hallucinating vs something you are perceiving at the same time? Can they influence each other?) </span></div><h2 style="text-align: left;"><span style="font-family: arial;">3. Get the Neural Story Right</span></h2><div><span style="font-family: arial;">Both the law-based and convention-based use of information are supported by the nervous system. Organisms learn to connect information to behaviour, and the nervous system is the sub-system of the perceiving-acting system that changes on the right timescale to support forming/implementing these connections. However, we expect different types of activity from the nervous system in the two cases, with important consequences for the behaviours that activity can support. 
Golonka & Wilson (2019) worked this out in more detail.</span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">The law-based use of information to control actions requires the nervous system to designate the information to the action systems, and so it should preserve the spatiotemporal structure of the information. This is what <a href="http://psychsciencenotes.blogspot.com/2016/04/brains-learn-to-perceive-not-predict.html">van der Meer et al find when they study how nervous systems respond during the perception of a looming stimulus and the development of the avoidance response</a>. This kind of activity is stable and robust for as long as the information is present and detected (+/- some calibration), but we predict it cannot be generated to this quality in the absence of the information. </span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">The convention-based use of information, in contrast, will neither create nor require activity of this kind. The reason is that for this use, the spatiotemporal details of the information are not what's important; rather, it's whether the information is present or absent. This has two consequences. First, neural activity supporting convention-based use of information cannot support action control (which requires access to the spatiotemporal details). More positively, however, at least some of this type of activity should be able to be generated without the information being present, because the details of the information are not the point and because the activity will have shorter timescales. 
</span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">Most relevantly here, we noted that while there is typically a precipitating external event for this type of neural activity, there could then easily be precipitated neural activity of this type between the perceived information and the resulting action selection. Neural activity might select neural activity as readily as it can select behavioural activity. We also speculated that this kind of neural activity, once sufficiently practised, could become relatively decoupled from precipitating perceptual events and be used as components for 'representation-hungry' cognition. </span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">The implication for hallucinating is that it is supported by neural activity that can only support action selection. This makes a lot of predictions about what hallucinating can be like and what it cannot be like, and I think there are many ecological questions to ask about how people can and cannot interact with hallucinations. They all require a different kind of testing though, focused on movement kinematics, etc. There is also a lot of existing work on the phenomenology of hallucinating, although of course a lot of it only had access to a non-ecological vocabulary. I think this domain will be a really interesting testing ground to find the limits of our ecological neural representations. 
</span></div><h2 style="text-align: left;"><span style="font-family: arial;">Interim Thoughts</span></h2><div><span style="font-family: arial;">A few predictions and questions pop out, which I'll list here for reference.</span></div><h3 style="text-align: left;"><span style="font-family: arial;">Predictions</span></h3><div><span style="font-family: arial;">We would expect auditory hallucinating to be the most common (which it is) because auditory information is primarily used for action selection rather than action control. Auditory looming is a thing, of course, and people have done sensory substitution studies providing auditory information with a more continuous structure. But in general, audition supports action selection, and so we are really good at convention-based, action-selection activity involving sound and the neural activity that comes with it. We would also expect auditory hallucinating to be more coherent and robust than visual hallucinating, for the same reason. </span></div><h3 style="text-align: left;"><span style="font-family: arial;">Questions</span></h3><div><span style="font-family: arial;">So many questions! Many of these are of course already lively topics; I do think that our ecological framing will change the details of how we ask these, though. </span></div><div><ul style="text-align: left;"><li><span style="font-family: arial;">Is there individual variation in inner speech quality, and does this track with individual variation in auditory hallucinating? What about imagery variation and visual hallucinating?</span></li><li><span style="font-family: arial;">Is hallucinating precipitated by anything? Is it instead abnormal attention directed to default mode style neural activity? </span></li><li><span style="font-family: arial;">How does hallucinating vary across the various 'modalities'? What can and can't you do with respect to various kinds of hallucinations? 
At what point does the ambiguity that is a hallucination get revealed by its relationship to the rest of the perceptual array? </span></li><li><span style="font-family: arial;">What about developmental/age differences? Cross-cultural differences? Given that it all has to be based in learned, convention-based neural activity, there must be intriguing differences. </span></li></ul><div><span style="font-family: arial;">The next thing to do is a bunch of reading of the relevant literatures, scoping out what's known so far.</span></div></div>Andrewhttp://www.blogger.com/profile/16732977871048876430noreply@blogger.com3tag:blogger.com,1999:blog-9192597712746432631.post-55615706120936270912020-11-16T10:16:00.001+00:002020-11-16T10:16:47.931+00:00Verb Your Nouns<div class="separator" style="clear: both; text-align: center;"><span style="font-family: arial; margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="254" data-original-width="800" height="127" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi_ol7_GiqLnjzttIrqUub30g4Z0F3cbtBHXduFzdwi2MXKfVmN7zLLa2Mu33zAwLJMS2nZJbUX9nvJSzsFvT39APHKXChMcTE1OoRQuESPhWObPmXGb5GcLXC226mrAhdtAuVaaEEmtUg/w400-h127/Verbing+Weirds+Language.gif" width="400" /></span></div><p><span style="font-family: arial;">One of the things that makes it hard to communicate with people about the ecological approach is that it is actually a radically different way of thinking about cognition. That means we are often literally not talking about the same things (this is why Hobbes is so concerned).</span></p><p><span style="font-family: arial;">One of the ways this shows itself over and over is <b>nouns </b>vs <b>verbs</b>. Information-processing cognitive science studies nouns; <i>memory</i>, <i>attention</i>, <i>perception</i>. Describing things with nouns drives your science in very particular directions. 
For example, as soon as you talk about memory as a thing, you naturally ask questions such as 'where is it?' and 'how do you access it?'. </span></p><p><span style="font-family: arial;">In contrast, the ecological approach studies verbs; <i>remembering</i>, <i>attending</i>, <i>perceiving</i>. Verbs also guide your science in particular directions, but instead of talking about things that must be somewhere, we talk about processes that happen at times, in places.</span></p><p><span style="font-family: arial;">Ecological types are often challenged to explain the nouns cognitive psychologists are interested in. Memory is the big one; language is the other. I've realised we can't, but only because we don't study nouns. So whenever I encounter this challenge now, my very first move is to <b>verb all the nouns</b>. While this doesn't provide an answer, it immediately makes me ask ecological-style questions about the topic at hand, and now it can help you too.</span></p>Andrewhttp://www.blogger.com/profile/16732977871048876430noreply@blogger.com3tag:blogger.com,1999:blog-9192597712746432631.post-38067643956432851882020-11-09T13:42:00.001+00:002020-11-09T13:42:20.945+00:00Turvey, Lectures on Perception: The Story So Far<p><span style="font-family: arial;">I have now reviewed the first 7 Lectures on Perception from Turvey's textbook (<a href="http://psychsciencenotes.blogspot.com/2020/04/reading-group-turvey-2019-lectures-on.html">posts indexed here</a>). I feel like I've reached a natural pause point before carrying on with the rest of the lectures, and I wanted to summarise what I think has been going on so far.<span></span></span></p><span style="font-family: arial;"><span></span></span><p><span style="font-family: arial;">These are lectures on perception, quite generally. While Turvey will explain the ecological approach, that's not what is currently going on. 
Right now Turvey is laying out the scope of the question '<b>what kind of material system could be a perceiving system</b>?' and reviewing the typical, non-ecological answers to that question.</span></p><p><span style="font-family: arial;">The main takeaway at this point is that we have been trying to answer that question using 17th century notions of material systems, specifically Newtonian mechanics. This has placed what turn out to be unacceptable restrictions on the kinds of properties we are allowed to invoke, and theories of perception have had to fill the gaps with unpayable 'loans of intelligence'. 21st century physics, however, has more on offer; complexity science and quantum mechanics teach us many valuable lessons on how to expand our horizons so that perception becomes possible, rather than miraculous. </span></p><p><span style="font-family: arial;"><span></span></span></p><a name='more'></a><span style="font-family: arial;">In <a href="http://psychsciencenotes.blogspot.com/2020/04/lecture-1-what-kind-of-systems-do-we.html">Lecture 1</a>, he establishes that a perceiving system must be understood as <i>epistemic</i>, <i>intentional</i>, and as a <i>system</i>. Systems are collections of parts that depend in some way on each other for how they function. Intentional systems are about something, and epistemic systems know about something. This lays out the question for a science of perception: <i>what kind of material system can come to know about things</i>?</span><p></p><p><span style="font-family: arial;">To answer this, we need to think about what sort of properties material systems are allowed to have. 
Turvey introduces <a href="http://psychsciencenotes.blogspot.com/2020/10/lecture-5-mechanistic-hypothesis-turvey.html">Newtonian mechanics as the science of physics</a> we tend to work within, and the <a href="http://psychsciencenotes.blogspot.com/2020/10/lecture-6-cartesian-program-turvey-2019.html">Cartesian property dualism</a> that comes out of doing that. These are sets of assumptions about the legitimate elements a science of perception has to work with, and they lead to the mess that is <i><a href="https://psychsciencenotes.blogspot.com/2020/04/lecture-3-direct-perceiving-indirect.html">perception as indirect</a>, specifically a three-term relation:</i></span></p><p></p><ul style="text-align: left;"><li><span style="font-family: arial;">The <a href="http://psychsciencenotes.blogspot.com/2020/10/lecture-5-mechanistic-hypothesis-turvey.html">Newtonian notion of <i>local causation</i></a> implies a linear chain of events, one thing causing the other over time. For perception, this implies some kind of copy/simulation of the world being carried by (for example) light and projected to an observer, which causes perceptual experience (<a href="http://psychsciencenotes.blogspot.com/2020/06/lecture-4-simulative-projective-and.html">Lecture 4</a>).</span></li><li><span style="font-family: arial;">All efforts to identify the nature of the simulation, and therefore what is projected, lead to an assumption of <i>poverty of stimulus</i></span></li><li><span style="font-family: arial;">Given this, there must be at least one thing standing between the world and perceptual experience of the world; perception is indirect, involving the world, the perceiver, and at least one intermediate entity</span></li><li><span style="font-family: arial;">These always and inevitably require <a href="http://psychsciencenotes.blogspot.com/2020/10/lecture-6-cartesian-program-turvey-2019.html">unpayable 'loans of intelligence'</a> to make work</span></li><li><span style="font-family: 
arial;"><a href="https://psychsciencenotes.blogspot.com/2020/10/lecture-7-empiricism-and-man-in-inner.html">Even empiricists</a> trying to avoid the problem fall foul of it, because they remained committed to Newtonian mechanics and local causation</span></li></ul><div><span style="font-family: arial;">Throughout these Lectures, Turvey spends a lot of time illustrating that physics has come a long way since Newton, and that the legitimate set of physical properties a perceptual system might be built from is much wider and more interesting. Specifically, quantum physics has revealed one domain in which properties can be <i><a href="https://psychsciencenotes.blogspot.com/2020/04/lecture-3-direct-perceiving-indirect.html">impredicative</a></i>, that is, defined with respect to the system in which they are currently playing a part. These properties can then take part in <i>non-local causation. </i>Turvey proposes that if we can develop an ecological scale physics that can include impredicative properties and therefore non-local causation, we enable the possibility of <i><a href="https://psychsciencenotes.blogspot.com/2020/04/lecture-3-direct-perceiving-indirect.html">perception as direct</a>, specifically a two term relation</i> (world and observer). </span></div><p></p><p></p>Andrewhttp://www.blogger.com/profile/16732977871048876430noreply@blogger.com0