Critical use of GenAI: Information
Previously, I discussed ways to boost critical thinking when students use GenAI for creation. Next on the list is Information, which can be separated into searching for information, learning about a topic more broadly, summarising content, or analysing data (Brachman et al., 2024).
| Category | Subcategory | Description |
|---|---|---|
| Creation | Artefact | Generate a new artefact to be used directly or with some modification |
| Creation | Idea | Generate an idea, to be used indirectly |
| Information | Search | Seek a fact or piece of information |
| Information | Learn | Learn about a new topic more broadly |
| Information | Summarise | Generate a shorter version of a piece of content that describes the important elements |
| Information | Analyse | Discover a new insight about information or data |
| Advice | Improve | Generate a better version |
| Advice | Guidance | Get guidance about how to make a decision |
| Advice | Validation | Check whether an artefact satisfies a set of rules or constraints |
In a survey I ran within a research master's programme I am involved with, this particular category of use cases was most popular with students. It is easy to see how all of these could support learning, although there are some didactic caveats.
- The reliability of LLMs is iffy, and it might take the very expertise that students are still developing to quickly notice that output is going off track.
- Creating a summary trains the skill of separating major and minor issues, which means bypassing manual summarisation could lead to deskilling.
- The broad strokes approach of LLMs could encourage surface learning rather than the deeper understanding that teachers are looking for.
- Understanding a text or researching a topic involves metacognitive skills: reflecting on what you do and do not understand. Being led by a chatbot can bring about cognitive offloading (this relates to both surface learning and deskilling).
Critical thinking is supposed to solve all of these things, but how do you get students to engage critically if they are using GenAI to obtain information?
Search and Learn
In the Search and Learn use cases, two steps from the five-step thinking process seem to be delegated to GenAI: information gathering and sense-making. It may of course be that the verbose output of LLMs also touches on other steps, even going so far as to suggest actions without getting the student to reflect on the information steps and form their own beliefs. Similarly, it can be tempting to request information from LLMs without really considering whether it is the information you need.

To support critical thinking, students should engage with the problem identification and belief formation steps without using GenAI. The first step comes down to being as precise as possible about what you understand and what you don't, prompting the LLM to provide focused information. Instead of "Tell me about photosynthesis," the prompt should be: "I understand how plants breathe, but I don't understand the specific role of ATP. Explain only that part."
The focused question serves two purposes. First, it hopefully leads to a focused response, so that the student has to engage cognitively to fit that piece of the puzzle into their pre-existing knowledge. Second, it becomes easier to verify the response, for example by comparing it to course materials or other authoritative sources. This in turn allows for reflection on the GenAI output to form beliefs: "I used to think X; the AI said Y; the source said Z. Now I believe [Student's Synthesis]."
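For students comfortable with a little code, this discipline can even be baked into a helper. A minimal sketch, using nothing beyond standard Python (the function and wording are my own illustration, not an established tool):

```python
# Hypothetical helper: the student must state what they do and do not
# understand before a prompt can be produced at all.
def focused_prompt(understood: str, not_understood: str) -> str:
    """Build a focused prompt from an explicit self-assessment."""
    return (
        f"I understand {understood}, but I don't understand {not_understood}. "
        "Explain only that part."
    )

print(focused_prompt(
    "how plants take in CO2 and release oxygen",
    "the specific role of ATP in photosynthesis",
))
```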
Summarise
There's only one way to critically engage with summaries generated by LLMs, and that is by also reading the source article. If a student (or anyone else, really) first reads a paper and then has the LLM summarise it, there's a risk that the summary is deemed good enough out of complacency. I suspect a better way is to first generate and read the summary, and only then read the paper with the explicit task of finding errors in the summary. This may leverage the fact that people are better at spotting errors in the work of others than in their own (Trouche et al., 2016).
Annotating the GenAI summary improves it, and doing so forces the student to read the paper carefully. GenAI summaries can also be a useful first sweep to see whether articles might be worth reading more closely, but they should not be used as reliable guides to article content.
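As a minimal sketch of that first sweep, here is what batch summarisation might look like with the OpenAI Python client (the folder layout, model name, and prompt wording are all assumptions for illustration):

```python
# First-sweep triage: summarise each paper in a folder to decide which
# ones deserve a close, error-hunting read. Assumes plain-text papers
# in ./papers and an OPENAI_API_KEY in the environment.
from pathlib import Path
from openai import OpenAI

client = OpenAI()

for paper in sorted(Path("papers").glob("*.txt")):
    text = paper.read_text(encoding="utf-8")[:8000]  # crude length cap
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{
            "role": "user",
            "content": "Summarise the key claims of this paper "
                       f"in three bullet points:\n\n{text}",
        }],
    )
    print(f"== {paper.name} ==\n{response.choices[0].message.content}\n")
```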
Analyse
Critical engagement with GenAI-mediated data analysis is perhaps the biggest challenge within this Brachman category. After all, the quality of an analysis depends on the validity of its steps. When using GenAI, these steps may not be transparent at all.
One possible way around this is to consider the analysis as a measurement, which should simply be repeated. Running the prompt multiple times or using multiple models for the analysis can give an idea of how trustworthy a particular outcome is.
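A minimal sketch of what such a repeated "measurement" could look like, assuming the OpenAI Python client (the data, model name, and prompt are placeholders):

```python
# Treat the analysis as a measurement: run the identical prompt several
# times and compare conclusions. Divergent answers are a warning sign.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

PROMPT = (
    "Weekly sales figures: 12, 15, 11, 18, 25, 24, 30. "
    "Is there a meaningful upward trend? Answer in one sentence."
)

for run in range(5):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": PROMPT}],
        temperature=1.0,  # keep sampling noise in, since variance is the point
    )
    print(f"Run {run + 1}: {response.choices[0].message.content}")
```

The same loop could iterate over several model names to compare across models rather than across runs.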
Another way is to critically question the LLM. Can the LLM provide counterarguments to its conclusion? Can it suggest alternative ways to analyse the data? Do these yield different outcomes? If so, why is that? While it is in fact nonsensical to ask such questions of a machine that lacks understanding, it does have the effect of showing the variation of responses that the LLM could yield. That variation can then prompt critical evaluation on the part of the student.
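This questioning can also be scripted, so that the model has to respond to its own earlier answer. A sketch, with the analysis prompt and follow-up questions as placeholder examples:

```python
# Feed the model's own conclusion back with critical follow-up questions,
# keeping the conversation history so it must engage with its own answer.
from openai import OpenAI

client = OpenAI()

messages = [{"role": "user", "content": (
    "Here are exam scores before and after a new teaching method: "
    "before 62, 58, 70, 65; after 68, 64, 71, 75. Did the method work?"
)}]

FOLLOW_UPS = [
    "What are the strongest counterarguments to your conclusion?",
    "Suggest an alternative way to analyse this data. Would it change the outcome?",
]

def ask() -> str:
    """Send the running conversation and record the model's reply."""
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=messages,
    )
    answer = reply.choices[0].message.content
    messages.append({"role": "assistant", "content": answer})
    return answer

print(ask())  # the initial analysis
for question in FOLLOW_UPS:
    messages.append({"role": "user", "content": question})
    print(ask())
```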
Best practices
The above examples show that critical engagement with LLMs is a matter of using best practices: prompting for focused answers, manually annotating AI-generated summaries and running and comparing multiple analyses. Critical thinking is again dependent on the process through which a student uses GenAI, as well as the degree to which a student can avoid cognitive offloading. The role of the teacher is to clarify and validate best practices for remaining cognitively present, or perhaps to monitor or even assess them.
In the final Brachman category (Advice), the situation is more complicated. There, GenAI is tasked with critically assessing the work of the student, as if it could play a metacognitive role. The student needs to avoid taking LLM-generated advice at face value, differentiating between good and bad advice. That is where critical questioning comes into play.
References
Brachman, M., El-Ashry, A., Dugan, C., & Geyer, W. (2024, May). How knowledge workers use and want to use LLMs in an enterprise context. In Extended Abstracts of the CHI Conference on Human Factors in Computing Systems (pp. 1-8).
Trouche, E., Johansson, P., Hall, L., & Mercier, H. (2016). The selective laziness of reasoning. Cognitive Science, 40(8), 2122-2136.