The hidden cost of AI literacy

Training students and teachers for AI literacy is all the rage, but this comes at an opportunity cost. There are other things teachers should now learn.
Mosaic of Plato's academy, Pompeii

After an initial period of trying to block or ignore the technology entirely, higher education is now preparing to truly adapt to the existence of generative artificial intelligence (GenAI). That often means adjusting teaching and assessment to deal with the possibility that students cheat on take-home assignments. It also means teaching students about GenAI (which mostly means about large language models) in an effort dubbed "AI literacy".

As The New York Times recently reported for schools, what exactly is meant by AI literacy varies from institution to institution. It can be about the ability to prompt a chatbot, knowledge of the principles underlying artificial neural networks, use cases within a specific field of expertise, insight into the technology's failure modes, self-regulation techniques to avoid cognitive offloading, considerations about sustainability, knowing when not to use it, and so on. Anything that can help you be a more effective, aware and responsible user of GenAI can count as AI literacy. Or, more bluntly, whatever puts you back behind the steering wheel.

It makes sense that higher education wants to teach such competencies. GenAI use among students in higher education can be as high as 90%, so it is important that universities help them use it in a responsible manner. UNESCO has been promoting AI competency frameworks for both students and teachers, taking the view that literacy can empower young people in future society. Within the European Union, it is arguably mandatory for any educational institution that uses GenAI in teaching and learning to provide AI literacy training.

However, the broader the conceptualization of AI literacy, the more demanding it becomes to teach. It is one thing to expect a teacher to understand prompting, bias, and the risks of cognitive offloading. It is quite another to require them to master the principles of deep learning or navigate complex ethical debates on digital autonomy. While valuable, this preparation takes time and money—two of education’s scarcest resources.

At the same time, there are plenty of other things to invest in. After all, GenAI does not only call for AI literacy; it also calls for changes in teaching and assessment across the board – changes teachers may be equally unprepared for. For example, the declining reliability of written assignments as a measure of student ability is leading to an increased focus on the process of ideation and writing, as opposed to its outcomes. This requires a mode shift for teachers, who may need to start incorporating progress meetings or coaching into their courses. Rather than grading papers, they will have to be mindful of how to measure cognitive flexibility during projects, or persistence and grit.

The same story goes for critical thinking. This is often heralded as the proper response to the risks of cognitive offloading or reliance on unreliable LLM output. However, higher education is mostly ineffective at teaching critical thinking, because it requires pedagogies that are not widespread. If GenAI makes critical thinking extra important, teachers will need to learn how to model the necessary dispositions, skilfully facilitate dialogue, and master techniques for orchestrating collective enquiry. These, too, require professionalisation.

Going hard on AI literacy for students implies doing the same for teaching staff, but such a strategy comes with the opportunity cost of not preparing faculty for the deep adjustments in modes of teaching and learning that are just around the corner due to GenAI. I would rather see AI literacy as a small, modest endeavour, so that we can focus on the teacher skills and dispositions we will need for the rest of the curriculum.