
Tuesday, 7 March 2023

What Science Has to Offer the World of Coaching

It started, as these things always do, on Twitter. Someone posted a training drill they were excited about (kids kicking a ball against a wall) and I made a comment to suggest I didn't think this was the most football-useful activity I had ever seen. That's all I intended to say, but enough people (coaches, mainly) got mad at me that more was said, and it quickly devolved into the standard entrenched lines this argument lives between.

There were two related themes to the replies. The first was essentially 'how dare you'; apparently questioning a coach's practice crosses a line (I admit I had been a little snarky, but only a little, I promise!). I find this response quite hard to understand: if you share your practice then it seems fair to expect that not everyone will love it, that some may have sound reasons to think it's a bad idea, and I think that should be ok.

The second theme was then 'what makes you think you can comment?'. This one mostly showed up in the form of demanding to know how much coaching I had done, a question designed to get me to admit 'none' and allow them to simply dismiss my view as that of an ill-informed outsider. This one annoyed me more than the rest, so I wanted to explain how and why I, an academic non-coach, get to be a part of the conversation around training activities.

First, let's try to set a positive tone. I value the experience and understanding that coaches have by virtue of their practice. Of course that expertise places coaches in a key position in the conversation. All I want to do here is set out some reasons why other people, including people like me, also get to take part in that conversation, because we have relevant expertise and things to contribute. So let's talk about what science and scientists have to offer to the discussion about training environments.

1. What is my expertise?

I am not a coach, so what do I (and the many people like me) know? Well, personally, I have been doing lab research on skilled action, learning, and transfer of learning for 20 years or so (I got my PhD in 2005). I have spent those years studying the perception-action mechanisms of skilled action; I've developed methods and measures for probing those systems and uncovering what they are made of, and I have many peer-reviewed papers about those experiments and what they teach us about those mechanisms. In addition, I have spent a huge amount of time becoming an expert in an underlying theory of these perception-action mechanisms, specifically the ecological approach. I have written extensively about this theory, extending it to new topics and applying it as rigorously as I can to the empirical study of these mechanisms. Academia is filled with people with similar journeys and expertise.

Why is any of this relevant? Because sports coaching is an attempt to shape and guide the learning of physical skills (to make that process work as well as possible) and also, centrally, an attempt to shape the transfer of this learning, from the training environment to the game environment. No one trains people by simply placing them in the full game environment, so coaching is about designing contexts in which people can focus on some fundamental aspect of the game, and a good training session is therefore one that promotes the transfer of that more restricted activity to the full scale game. The kind of work I do is about exactly this question, so it seems like there's an obvious contribution here; let's talk about some of that now. 

2. The Question of Transfer

Everyone engaged with the question of learning and transfer (coaches and researchers alike) agrees on one thing: learning is predicted to transfer to the extent that the learning context and the transfer context overlap in some meaningful way. Remember that a training context is a reduced form of the game day context, so for learning in the former to transfer to the latter, the training context has to contain at least one of the real parts of the game day context; this is how they need to overlap. In order to design a sensible training context, you therefore need some reason to decompose the game day context into a particular set of parts - you need a theory of what parts make up the game day context.

Coaches often bristle at the word 'theory', but like it or not they all have one about what counts as a real part of the game day context. If you prefer, another way to think about theory here is just to point out that all coaches have a set of organising principles that make them think one way of doing things is better than another; they have reasons for training this rather than that. You are not, I am assuming, simply doing things at random. Instead, you are targeting 'the fundamentals' - what your theory/organising principles tell you are the real parts of game day performance.

The science of what skills are, how they are learned, and how they transfer is immediately relevant to the design of good sessions, because it is all trying to provide an evidence base about what a 'fundamental aspect of a task' actually is; about what counts as 'meaningful overlap'. So what does the science say?

3. The Bad News

The main thing science has learned is what we don't know. Scientists have been decomposing tasks into parts and investigating transfer since Thorndike & Woodworth (1901), and as a general rule the results have been uninspiring, to say the least. This was reviewed in a paper for the Army by Schmidt & Young (1986), who identified that a) it's very hard to define what a task is and when it has changed enough to become a different task, and b) most research shows that even tasks that seem very similar show little transfer between them, and performance doesn't even correlate much across them, suggesting they aren't, in fact, very similar.

The key phrase here is 'tasks that seem very similar'. It turns out that since 1901 researchers have been conceptualising tasks according to common-sense intuitions, rather than any principled set of criteria. Two tasks are considered similar because it seems like common sense that they would both require the same process in order to work. But time and again this basic idea has been shown not to work; since 1901 there has been a chain of 'boom-then-bust' cycles as people try to characterise tasks, test their particular common-sense approach, find it fails, give up, only for it all to repeat 20 years later. A simple example is the idea that balancing on two different surfaces (say, a beam vs a slackline) must surely both entail 'balance'; but the data very clearly says otherwise (e.g. Serrien et al, 2017). There is no transfer of learning between such tasks, which suggests that 'balance' might not be a single piece of the various skills; it might not be a 'fundamental'. Even simple changes (like increasing the RPM in a pursuit rotor task) can stop learning at one speed from transferring to the other; the transfer and savings may not be zero, but they are never as good as they 'should' be, given our intuitions about task similarity.

This all matters because the way coaches decompose tasks in order to target training is informed by the same intuitive understanding of what those tasks are made of (even if a coach is reading the research, remember that the research is based on those intuitions too). You see it in the way they talk: kicking a ball against a wall has value because 'kicking' is obviously a fundamental part of football; the 'obviously' is always there in the discussion. But the point is this: the data so far actually shows that we don't yet have a firm footing for that 'obviously', not even firm enough to agree that 'kicking is fundamental to football'.

This seems like a strike against science; remember, I was saying that one of our key contributions is to provide an evidence base about what tasks like full games are made of. A coach might therefore say 'I have been drilling fundamental skills for 20 years and my players have improved', or 'if kicking against a wall was good enough for Dennis Bergkamp, it's good enough for me'. Maybe coaching practice is revealing the true story and science is just failing because we're locked away studying dinky lab tasks.

The reply is actually simple: the science is still telling us something important, even though it hasn't solved the problem yet. Science has carefully tested these questions under more controlled circumstances and shown that the 'obvious fundamentals' identified by our intuitions aren't generally the right ones. That's an important thing to know, because it suggests that whatever it was that helped your players improve, it probably wasn't the isolated drills targeting things you think are obviously fundamental. But sports coaching entails a lot of other activities besides isolated drills, and coaches point to these as well. Scientifically, the failure to find those 'obvious fundamentals' yet just means it's time to look at all the other things you can do in a training session, and it's important to do so carefully so you don't invoke mere anecdotes riddled with survivorship bias as evidence for your practice (science helps not just with the data, but also brings rigour to the data collection process).

4. The Better News

So far we've ended up in a fairly grumpy place that's going to bug a lot of people, so let me try to reformulate it a bit more optimistically:
  • Most people who go through coaching come out better players
  • A few of these people come out as great players

So whatever happened during that coaching has clearly done something useful. The question is what - which bits were the most useful bits?
  • Science has been trying to figure this out for 120 years, and has learned that most of the ideas about which bits are the important bits don't work, at least not well
  • Most coaching is guided by the same ideas tested by science, either because the coach shares the intuitions or because they got them from the research; so whatever it is coaches are doing that's helping, the evidence suggests it's not what they think they're doing
Here's the optimistic bit: what we have here is not a disaster, but a challenge. Coaches don't like being challenged (or so I've learned on Twitter - this drove the 'how dare you' type replies), but scientists love it - a puzzle yet to be solved is what we live for. So this is one key thing I think science has to offer sports coaching: let us rise to the challenge for you! If you have something you think is 'obviously a fundamental', let us actually test it for you and see what happens! Work with us to frame the questions, so we can develop the evidence for and against, and then when you say 'well, obviously X is fundamental' you will actually have clear reasons you can point to in order to back yourself up. All we ask is that you listen to the answers, even (or perhaps especially) when they don't back you up.

We also have a lot of new ideas about what counts as a fundamental. We have 120 years of data showing us this way and that way of carving tasks up doesn't work, and one response to these kinds of failures has been to work to find other ways of carving up tasks and testing those. Currently, the main other way is based in ecological psychology. The unit of analysis here is not the organism, but the organism-environment system, and so (we propose) whatever the fundamentals of a task are, they will live at that scale, and not at the scale of the individual player; not 'balance', but 'balancing-on-a-beam', that sort of thing. This implies that training should happen in representative organism-environment systems, hence the worries about kicking balls at walls, etc. This will either work or it won't, but I'll defend this sort of thing elsewhere; at this point I'm just trying to argue that science in general has a lot to contribute and part of that is candidate new ways to carve tasks up, to replace the ways we know don't work well.

Summary

We're all just trying to engage with the process of learning and transfer in good faith. Scientists, because we want to know how the damn thing actually works; coaches, because you want to do right by your players and guide them well. We are all in this together, and we all have a role to play, so let's start the discussion by accepting that.  

So what is the role of science? To ask good questions carefully, and to generate data that speaks to those actual questions. Doing this has led to a frustrating place: we know a lot about what doesn't work. This counts as useful information, though! Doing this has also led to a very different place: the whole point of ecological dynamics, for example, is that it summarises a more promising way to conceptualise what tasks are and how to train real parts of them. But however this discussion pans out, the point of this post is to advocate for science having a meaningful role to play in the discussion of what counts as good coaching.

1 comment:

  1. Thank you for signposting to this - I have observed the 'twitter debate' with interest, this is a fascinating area of exploration and debate - would be keen to learn more and hear your deeper thinking...thank you
