Tuesday 2 February 2021

The Constraints-Based Approach to Teaching in the Classroom

If you read this and think 'hey, this sounds like something they do in [insert teaching method here]', please let me know. I've had some chats about the Montessori method, and there's certainly overlap there. But I'm on the hunt for a literature I can connect to, and any help would be appreciated.

I've been thinking a lot about education lately. I'm home-schooling the kids, I've been chatting a lot with coaches about ecological approaches to their teaching (most publicly here and here), and I'm reading Tim Ingold's Anthropology And/As Education. I'm also wondering why the demonstrated success of the ecological dynamics approach in sports pedagogy has had zero consequences for education more broadly. 

I think a few things. I think the reason ecological dynamics hasn't spilled over is that we live in a dualist world where knowledge and physical skills are treated as two distinct domains (think about how physical education is treated in schools). I also think that because the ecological approach doesn't endorse that dualism, there is no good reason for classroom education to work completely differently from physical education. And finally, I think this might be really, really important.

I used to teach a module called Foundation Research Methods, and after a while I finally realised that I was teaching it in a constraints-based, ecological dynamics style. (This explains why a lot of my colleagues were genuinely confused by what I was doing at times, I think!) The module developed over the years, and in the last year I taught it we solved our attendance problem and the students crushed the exam.

I want to walk through what I did, and reflect on how it embodied an ecological approach. This is not me saying this is how all classes should be taught. This is just me laying out what a constraints-based approach looked like in the class, what I thought worked, and what I would like to have done next.  

The Class

The module was called Foundation Research Methods. It was a first-semester, first-year module for Psychology undergraduates, and it led them through the basics of study design and data analysis (from means and standard deviations to one-way ANOVA). My goal was to take 180-200 students with a mix of experience in statistical thinking and get them to the point where they were all on the same page and could build on their skills in the follow-up Intermediate and Advanced Research Methods modules.
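Just to make that content span concrete, here's a minimal sketch of the two ends of the range. The students did all of this in SPSS (and later JASP) rather than in code, and the data and condition names below are invented purely for illustration:

    import numpy as np
    from scipy import stats

    # Invented data: scores for three made-up conditions
    rng = np.random.default_rng(0)
    lecture = rng.normal(60, 10, 30)
    workshop = rng.normal(65, 10, 30)
    mixed = rng.normal(70, 10, 30)

    # Week 1 territory: means and standard deviations
    for name, scores in [("lecture", lecture), ("workshop", workshop), ("mixed", mixed)]:
        print(f"{name}: mean = {scores.mean():.1f}, sd = {scores.std(ddof=1):.1f}")

    # End-of-module territory: a one-way ANOVA across the three groups
    f_stat, p_value = stats.f_oneway(lecture, workshop, mixed)
    print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.3f}")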

Like most psychology statistics classes, this had originally been taught with hour-long lectures for the whole group, separate workshops to practise SPSS, and practical sessions on study design and research ethics. I actually inherited the design I'll tell you about from Sabrina, who rebuilt the module before a maternity leave. I took the module over and ran it for 5 years.
Figure 1. The organisation of FRM in 2018
The new design worked as follows, over 12 weeks:
  • In Week 1 I gave them all an introductory lecture where I explained the module and the way it would run.
  • Every week there were 5 or 6 identical 90-minute sessions of 30 students each. I usually taught 3 or 4 of these, with a GTA running the remainder. Sessions took place in a computer lab.
  • The first 30 minutes or so was taken up by me giving a mini-lecture in which I set the scene for the day's activities. I'd introduce the topic, and provide information and context about why we were doing this and how it fit into the developing module. 
  • For the next 60 minutes, the students worked on a task based on the day's topic, using SPSS (and eventually JASP as well) and data sets.
  • In 4 of the weeks there were also multiple small-group practical sessions, in which I had them draft and mark components of lab reports and engage with the basics of research ethics.
  • Assessment varied over the years: there was generally an MCQ to assess basic knowledge; sometimes a lab report based on an experiment they had taken part in; and sometimes a reflective report, where they took part in studies for credit and wrote about what the study design was like to experience as a participant.

General Principles of Delivery

Each week built on the previous weeks. Week 1 covered measures of central tendency, then Week 2 did that again and added standard deviations, then Week 3 did those plus various kinds of variables, and so on. The workshop activities embodied this structure too - every week the activity would have them do everything, in order, that they had already covered, and then add that week's new thing.
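As a rough sketch of that cumulative structure (the week-by-week topic labels here are my shorthand, not the actual module documents), each week's activity was simply everything covered so far plus one new thing:

    # Shorthand topic sequence, one new item per week (illustrative only)
    new_topic_per_week = [
        "measures of central tendency",
        "standard deviations",
        "types of variables",
        # ...continuing up to one-way ANOVA
    ]

    # Each week's activity = everything already covered, in order, plus the new thing
    for week in range(1, len(new_topic_per_week) + 1):
        this_weeks_activity = new_topic_per_week[:week]
        print(f"Week {week}: {', '.join(this_weeks_activity)}")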

The lectures did not ever tell them how to, say, run a t-test. Instead, I would explain what t-tests were for, the nature of the question that they could answer, and how they fit into the general statistical framework I was building over the module. The workshop activities also never told them how to run the t-test. Instead, activities included finding where in the textbook (Andy Field's Discovering Statistics Using SPSS) they could find step-by-step instructions, and linking to YouTube walkthroughs on running the test in SPSS. 
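For a concrete sense of where the activities steered them - again, the students got there via the textbook and video walkthroughs in SPSS/JASP rather than code, and the data below are invented for illustration:

    import numpy as np
    from scipy import stats

    # Invented data: do two groups differ on some measured score?
    rng = np.random.default_rng(1)
    group_a = rng.normal(100, 15, 40)
    group_b = rng.normal(108, 15, 40)

    # An independent-samples t-test asks: is the difference between the two
    # group means bigger than you'd expect from sampling variation alone?
    t_stat, p_value = stats.ttest_ind(group_a, group_b)
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}")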

A GTA and I would wander the lab, answering questions and keeping the students on the right path. Again, we would never tell them how to run SPSS. Instead, we would ask them questions such as 'did you find the page in the textbook yet?' and if not, help them with that. If they had questions about vocabulary that was just blocking their progress we'd help directly, but generally our job was to keep them constrained to working within the range of the day's activity. 

I also actively encouraged the students to work together in groups. If one student had solved a problem, I would often direct another student with the same question to them. I'd encourage them to share resources, on the premise that success in the module was not a zero-sum game.

How It Went

The students were always a bit nervous at the start, but this was more 'new psychology undergrads scared of statistics' than 'confused by the module format'. I had the advantage that mine was one of their first modules, so I got to set the rules and didn't have to fight against experience of other university-level modules.

Every year, something didn't work, but every year, I fixed that problem and it never came back. These problems were mostly student experience problems, rather than content problems. One example was that I saw their progress, but they couldn't. Every week they experienced running into something they didn't know how to do, and as a result every week it felt like they weren't getting anywhere. From my perspective, though, I could see their progress - every week the class would be quiet as they cracked on and did all the parts of the activity that covered material from previous weeks, and they would only start talking and asking questions when they hit the new thing. After I realised what their first-person experience was, I made a point in every following year of explicitly drawing their attention to their progress a few times, and that problem never repeated.

Most years, the MCQ performed just fine: a nice normally distributed grade curve centred on 58-62% (in the UK this is the boundary between a 2:2 and a 2:1, so it's a sensible mean). In my final year teaching this, though, we solved our attendance problem. The course team as a whole had been working on this for a while, and nothing had worked until the year we took an attendance register for all small teaching sessions. I literally called out names and marked people present or absent, like school. Because FRM was all small-group teaching, I got to do this for every session, and attendance stayed good at around 80%+ for the whole semester. And that year, they destroyed my exam. Only two fails, and a huge proportion of great scores.

Figure 2. The insanely skewed grade distribution for FRM

Reflections on Teaching This Way

This is a constraints-based approach to teaching students about a topic most people would think was about knowledge. I didn't teach a lot of knowledge explicitly; instead, every week, I tried to create a constrained space within which the students could explore and develop their own knowledge. A lot of that was more knowing-how than knowing-that, as well - knowing how to look in the book, knowing how to narrow that search according to the design, etc. You know, the way we senior academics actually do our statistics.

They ended up with quite a bit of knowledge, though - at least, they were able to crush my exam, which asked them about all the content I wanted them to come away from the module knowing. They had to show up every week, though, and although I don't have the data, I'd expect grades to correlate with attendance pretty strongly, because they always do.

Attendance was crucial for another reason - to ensure everyone had people to work with. Before this cohort, some sessions had been full and those ran well, while others had fewer than 10 people in them and they ran much less well. There just weren't enough people to form groups with a range of skills, and those sessions had much less 'groupiness' as a result, which hurt even the good students in those sessions. 

Constraints are hard. Instead of locking down what people should know at the end of the session, you are trying to create a space that encourages people to end up there more by themselves. As any constraints-led coach will tell you, sometimes people self-organise in unexpected ways and you have to reflect on why, redesign the constraints, and iterate. Coaches can often rejig the next training session; I typically didn't find out what hadn't worked until the end, so, as I mentioned, every year there was something that unexpectedly didn't work and I had to rejig the relevant constraints - but not until the next year and the next cohort. I had to have a high tolerance for problems and a clear idea about whether the problem was the delivery format or something else, so that I didn't panic and abandon ship. I had the advantage of theoretical reasons for doing what I was doing, but I know for a fact some of my non-ecological colleagues were pretty mystified sometimes.

Relatedly, a real weakness of this module was that it lived in an otherwise non-ecological educational context. I always dreamed of completely taking over the Research Methods teaching and doing the whole thing more coherently. Let me just re-emphasise, I don't think any of my colleagues are doing a bad job - they all care too much for that to be true. I'm just noting that my approach was never built on or developed, and I'd love to know what that would look like.

I would love to have adapted the assessment once I'd shown that the MCQ didn't work anymore. I never got the chance, and it would have been hard to do because of how much inertia there is in the UK system. But I would have given them data sets and asked them to figure out the design, and therefore which analysis to apply, and then run that analysis and interpret it correctly. I was trying to teach them how to engage with data using statistics, and the MCQ was not assessing that.
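To sketch what one of those assessment items might have looked like (this is my invention, not an assessment that ever ran, and the data and variable names are made up): hand the students a data file and have them identify the design, pick the matching analysis, run it, and interpret it.

    import pandas as pd
    from scipy import stats

    # Hypothetical data set handed to a student (columns and values invented)
    data = pd.DataFrame({
        "condition": ["quiet"] * 20 + ["music"] * 20 + ["noise"] * 20,
        "recall":    list(range(8, 28)) + list(range(7, 27)) + list(range(5, 25)),
    })

    # Step 1: identify the design - one between-subjects factor with three levels
    groups = [g["recall"].values for _, g in data.groupby("condition")]

    # Step 2: that design calls for a one-way ANOVA
    f_stat, p_value = stats.f_oneway(*groups)

    # Step 3: interpret - is there evidence the groups differ in mean recall?
    print(f"F = {f_stat:.2f}, p = {p_value:.3f}")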

One key difference with ecological dynamics is that I spent most of my time creating and maintaining constraints with language, rather than with the affordances of a physical space. I spent a lot of time considering what I would and would not say, and trying different ways of verbally constraining enough but not too much. The GTAs who taught on the module all generally liked the idea, but we all found it hard work not to over-instruct. There is not yet much of a theory of verbal constraints, and there needs to be - the question of the role of verbal instruction is a key point of argument between ecological and non-ecological coaches.

Related to this, the class itself was not especially embodied. Embodiment in education tends to be of the grounded cognition variety, just focusing on activities that involve more of the body. This may or may not matter. However, there's reason to go looking for ways of making the activities less screen-based, and complementing the necessary time spent in statistics software with other ways of solving the problems. Even if it's just fun, that would be a good thing.

Final Thoughts

This style of teaching came naturally to me, and it made complete sense to teach this way, even for what seems like a knowledge heavy module like research methods. I've thought about how to do this sort of thing in the modules I've taught on cognitive psychology, with varying levels of success. I had a couple of runs of a first year undergraduate module which was coming together but needed more time, and I still teach on the MSc module where my lectures are much more about how to approach a problem like memory the way a cognitive psychologist would, rather than facts about memory. This works pretty well here because the MSc students are generally older, more motivated, and so attendance and engagement are high. 

The main lesson I want to draw here is that classroom teaching does not ever have to be about the transmission of knowledge. It can always be about exploring a constrained space in which skill and knowledge emerge from the nature of the exploration. In exactly the same way as ecological dynamics changes the job of a coach, this completely changes the role of the lecturer. I personally find the shift to be a positive one, up to and including a much less authoritarian role for the lecturer. I've become absolutely fascinated by the implications for the classroom of my experience and what I know about ecological dynamics, and I'm really keen to engage with this topic more fully.

2 comments:

  1. I like conceptualising this approach as 'constraint based' but there are many parallels coming from different traditions that do something similar.

    Problem-based learning is based around recasting the task of the learner in a similar fashion.

    Within the classroom there's Eric Mazur's peer instruction and Sugata Mitra's hole in the wall experiments.

    Deliberate practice is another approach that essentially talks about the same issues. Ericsson uses the term mental representations but they are very much non-computational for him - would be better called chunks.

    I recently wrote up a survey of some key papers in this area: https://altc.alt.ac.uk/blog/2020/07/6-papers-on-education-to-read-this-summer-to-prepare-for-blended-teaching-and-learning-ideas-for-a-journal-club/.

    But similar non-normative insights can also be found in ethnography of education. Becker's 'Boys in White' and Nathan's 'My Freshman Year' show perfectly how students organise their efforts around the institutional constraints - in a way revealing what those are by their actions.

    I tried to think about it at that level from a very broadly ecological perspective here: https://metaphorhacker.net/2020/06/no-back-row-no-corridor-metaphors-for-online-teaching-and-learning.

    But I think the key issue you are not addressing here is the role of conceptualisation and mental representations. Because building and rebuilding concepts in their minds is what the students are doing all the way through. Or rather, that's what it feels like to them they're doing. And talking just about the constraints leaves something important out - it pushes the hard problem upstream. Now, I am not making any claims as to what those mental representations look like. They are certainly not in any way computational. But something is going on that is happening at that level.

    In many contexts, the distinction between procedural and conceptual knowledge is made - and while even the conceptual knowledge has a procedural component, the sort of learning required is fairly different. ANOVA, I would say, is more procedural but it also has a strong conceptual component. This lends itself well to this sort of approach. I wonder how well it would work say in 'international relations' - I think it could but it would be interesting to explore the differences. But also it would look very different in a much more heavily procedural context like foreign language instruction.

  2. For what it's worth, I think the same insights could be applied to religious instructional settings as well. What I mean is that it seems to me that most Christian churches focus on knowledge transmission (such as listening to sermons), and where action is required (such as singing worship songs), that action is not explorative so much as it is supposed to be habit forming and repetitive. Nevertheless, from my understanding of the relevant theology, Christians are supposed to do "good works". It would be fascinating to see what Christians might be like if their worship services consisted of "exploring a constrained space in which skill and knowledge emerges from the nature of the exploration."
