
Wicked learning of critical thinking

One of the key skills for interdisciplinary scientists is critical thinking. That may sound obvious, since every scientist needs to be good at critical thinking, and okay, fair point. Interdisciplinarians, however, run into some specific problems that require well-developed critical thinking skills.

  • They work at the boundaries of their area of expertise and so have to remain aware of the scope of their root discipline, as parts of the problems they are examining may simply be out of bounds for their expertise. This means they need to think critically about the explanatory power of their field.
  • Interdisciplinarians have to engage with the theoretical and empirical work done by researchers from outside their area of expertise. This means they need to look critically at sometimes completely novel sources of information.
  • To make the step from multidisciplinarity to interdisciplinarity, interdisciplinarians need to weigh different sources of evidence. This is not trivial: it is difficult enough to rationally weigh conflicting findings within psychology, but once sociology research says yet another thing entirely, high-level critical thinking is needed to reconcile conflicting ideas.

This means that teaching interdisciplinarity may require teaching critical thinking, too. However, there has been fierce debate throughout the decades on how critical thinking can be taught, assuming it can be taught in the first place. Those who believe it is a skill that can be trained roughly fall into one of two camps: the deliberate practice camp, which emphasizes regular, explicit training and theory to improve critical thinking skills, and the implicit learning camp, which holds that if you teach a subject in higher education, students develop critical thinking skills on the side, whether they have been studying philosophy, biology, history or physics.

I was reminded of this debate when reading Range: Why Generalists Triumph in a Specialized World. In this book, the author David Epstein explores the thorny subject of performance and makes the point that having a wide range of experiences can boost performance, as well as improve the generalizability of skills. Deliberate but narrow practice can train people to perform well procedurally, but may not prepare them well for novel circumstances.

Now deliberate practice is not by definition narrow practice, but for critical thinking it often has been. Classical interventions — think mandatory logic classes, chess as a course, or curricular space for mathematics or epistemology — all relied on the intuition that training one thing that required critical thought would boost critical thinking as a general skill. These interventions have, to my knowledge, largely failed to boost scores on standardized critical thinking tests, with the notable exception of having students do lots of argument mapping (LAMP).

Could it be that the best way to teach critical thinking is neither a targeted intervention of deliberate practice nor accidental teaching via domain-specific teaching, but rather a varied set of deliberately taught critical thinking forms? Epstein argues not only that this is the case for skills that require generalizability or flexibility, but also that the varied set of forms needs to reflect a ‘wicked learning environment’, in which procedures and recipes are of little use and in which a learner experiences friction, if not outright struggle, while heading towards a resolution.

In higher education, however, creating struggle is difficult. Students generally clamor for recipes and flowcharts. Quality control mechanisms often optimize at the course level and thereby neglect the longer view. Epstein offers a pretty dramatic illustration of this through a longitudinal study on calculus training at a military academy, where randomization and control groups are relatively easy to set up. Epstein writes:

Unsurprisingly, there was a group of Calculus I professors whose instruction most strongly boosted student performance on the Calculus I exam, and who got sterling student evaluation ratings. Another group of professors consistently added less to student performance on the exam, and students judged them more harshly in evaluations. But when the economists looked at another, longer-term measure of teacher value added—how those students did on subsequent math and engineering courses that required Calculus I as a prerequisite—the results were stunning. The Calculus I teachers who were the best at promoting student overachievement in their own class were somehow not great for their students in the long run. “Professors who excel at promoting contemporaneous student achievement,” the economists wrote, “on average, harm the subsequent performance of their students in more advanced classes.” What looked like a head start evaporated. The economists suggested that the professors who caused short-term struggle but long-term gains were facilitating “deep learning” by making connections. They “broaden the curriculum and produce students with a deeper understanding of the material.” It also made their courses more difficult and frustrating, as evidenced by both the students’ lower Calculus I exam scores and their harsher evaluations of their instructors.

The way out of this may be as simple as not taking course evaluations too seriously and devising new instruments that instead measure long-term progress, but I am not sure how long an educational programme could get away with that. Instead, Epstein’s argument may be setting the bar a lot higher for educational designers: following him, a programme that wants to teach critical thinking should create a wicked learning environment for it that students buy into — either because their increasing mastery of critical thinking is visible or because they find it convincing that learning is happening, even if they may not be seeing it.