Desirable difficulty and undesirable agents
In his newsletter on effective learning, Paul Kirschner discusses reduced feedback, a 'desirable difficulty' proposed by Robert and Elizabeth Bjork. Counterintuitively, a teacher who gives a learner less feedback can thereby improve learning:
The Bjorks’ argument begins with a simple observation: immediate and constant feedback improves performance during practice, but can sabotage learning by creating what can be called a fluency trap. Practice becomes smooth, and smoothness is mistaken for mastery. The lesson ‘works’ because the student keeps getting steered back onto the right track before they have a chance to build their own internal map.
Such fluency traps can also arise when GenAI is employed during a learning process. Think of a student learning to code in Python, with access to an LLM. Any time they get stuck, they prompt the machine and receive a well-structured, clear explanation. This feels like learning and it feels like progress, but the smoother experience may make for poorer learning. Then, if assessment takes place without LLM access, the student may be very disappointed in their own lack of mastery. From my own environment, I am picking up signals that exactly this is happening: LLM use is reducing friction too much.
Kirschner's description of reduced feedback also offers a solution to this problem.
Over time you give feedback less frequently, but you also make it less detailed and more general. Early on, you might point to the exact place where the reasoning breaks and name the rule that was violated. Later, you might simply indicate that there is an error in a particular line and ask the student to find it. Later still, you might offer only a general cue: ‘Check your units,’ ‘Look again at the relationship between these two ideas,’ ‘Does your conclusion follow from your evidence?’ Notice what this does. The feedback gradually stops being an instruction manual and becomes a prompt for self-regulation; it becomes more epistemic and less corrective/directive.
The way I translate this is that if you decide to use LLMs in learning, you also need a plan to wean students off their use. Read Kirschner's full post here; it's valuable advice in general.
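To make that weaning plan concrete, here is a minimal sketch in Python of how Kirschner's fading progression could be wired into an LLM tutor. Everything in it (the feedback levels, the session thresholds, the prompt wording) is my own hypothetical illustration, not an implementation that Kirschner or the Bjorks describe.

```python
# Hypothetical sketch of a feedback-fading schedule for an LLM tutor.
# The levels, session thresholds, and prompt wording below are illustrative
# assumptions, not taken from Kirschner's post or the Bjorks' work.

FEEDBACK_LEVELS = [
    # (minimum sessions completed, instruction for the tutor's system prompt)
    (0,  "Point to the exact line that is wrong and name the rule violated."),
    (5,  "Say only which line contains an error; let the student locate it."),
    (10, "Give one general cue, such as 'check your types'; name no locations."),
    (15, "Ask a reflective question only; give no hints unless explicitly asked."),
]

def tutor_instruction(sessions_completed: int) -> str:
    """Return the least directive instruction the student has reached."""
    instruction = FEEDBACK_LEVELS[0][1]
    for threshold, text in FEEDBACK_LEVELS:
        if sessions_completed >= threshold:
            instruction = text
    return instruction

# Example: feedback becomes more epistemic and less corrective over time.
for n in (0, 6, 12, 20):
    print(f"after {n} sessions: {tutor_instruction(n)}")
```

The point is the direction of travel: the same mechanism that makes LLM help so frictionless can be throttled deliberately, mirroring the shift from corrective to epistemic feedback that Kirschner describes.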
Relatedly, David Epstein recently made a video about desirable difficulty. He, too, warns against GenAI removing the friction from learning. The video offers a good reminder of what we should consider while rethinking education.
Last month saw the publication of a paper written by 22 authors, including Gary Marcus and Sander van der Linden, on the risks of AI swarms taking over (I guess van der Linden would say 'infecting') public discourse. Marcus, Daniel Thilo and Jonas Kunst wrote an essay on it here.

While I still don't buy the misinformation scare, the notion of swarms is a big concern even to me. After all, if you consider thinking a social affair, precisely because groups of humans think better together, then the idea of those groups being diluted and polluted by counterfeit people with an agenda is concerning, to say the least.
Online discourse is a source of social proof, and marketing agencies already leverage this to peddle goods, especially to adolescents and the elderly, both of whom, I think, put a premium on social proof. Generating such proof algorithmically, at scale, is quite the scenario. My gut response is to promote physical forums for discussion: third places where real people can harness the cognitive power of collaborative reasoning.