
Levels of machine reasoning

Upon its release, OpenAI's GPT-4 was advertised as having better reasoning capabilities than its predecessors. In a trivial sense, this is probably true – earlier versions of the large language model (LLM) displayed apparent reasoning to some extent, and a scaled-up, better-trained version can be expected to do the same things, better.

Yet as the claims and demonstrations of these purported reasoning capabilities come in – and Microsoft has recently catalogued them with the look and feel of a research paper – it is increasingly unclear whether the distinction between apparent and actual reasoning is being drawn properly, both by the companies selling the product and by the users impressed by its performance. It doesn't help that human intuitions are at a loss here – we judge machines generating language much like we would judge humans generating the same language, regardless of what is going on under the hood. At the same time, the conclusions we draw concern underlying traits, namely whether machines are intelligent, knowledgeable, reasonable and whatnot.

This makes current discourse on machine cognition and LLMs reminiscent of the conflict between behaviourism and cognitivism that once gripped psychology. For the behaviourists, psychological phenomena are best measured as behavioural responses. Cognitivists, in turn, argue that psychology benefits from studying the mechanisms that underlie behaviour, whether in terms of information processing or neurobiological functions.

The discussion on LLMs often runs along the same lines: those taking something akin to the behaviourist position treat the language generated by the model as reasoning in itself, while the "cognitivists" look at the underlying computational mechanisms to assess whether these meet the criteria for reasoning.

Thinking in terms of behaviour versus mechanisms leads to some complications, however. Any behaviour can be part of a mechanism (e.g. if I walk to the fridge to grab a drink, that is both behaviour and a mechanism for drink-getting) and any mechanism consists of units doing stuff (e.g. the mechanism of a chemical reaction is described by the behaviour of its molecules). Although neuroscientists are often deep into cognitivism, you could by this logic also consider them behaviourists for neurons.

One way out of this confusion is to think in terms of levels of explanation. I am actually not a big fan of this – the next edition in the series on causal maps should be on levels, and I have been postponing that for a long, long time because I think 'levels' are ill-defined – but I believe that if so-called multi-level thinking is applied cautiously and for specific cases, it can be useful. Briefly put, multi-level thinking means that you consider any process as behaviour in relation to the level "below" it, but as a mechanism in relation to the level "above" it. For those who have read the causal maps series, this ties in with the notion of constitutive explanations.

In the case of LLMs, you could say that the output of linguistic utterances by a system like ChatGPT is behaviour at one level, with the underlying algorithm as its mechanism one level below. For now, let's stick to just these two levels and (temporarily) label them the utterance level and the algorithmic level. This scheme should not be confused with Marr's levels of analysis, which offer a more general multi-level view of information processing.
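
To make this concrete, here is a minimal sketch (the two systems and the question are hypothetical toys of my own, not a claim about how any real LLM works) of two machines that are indistinguishable at the utterance level but differ at the algorithmic level:

```python
def memorizing_system(question: str) -> str:
    """Utterance level: a correct answer. Algorithmic level: a lookup table."""
    memorized = {"What is 17 + 25?": "42"}
    return memorized.get(question, "I don't know.")

def computing_system(question: str) -> str:
    """Utterance level: the same answer. Algorithmic level: actual arithmetic."""
    if question.startswith("What is ") and "+" in question:
        left, right = question[len("What is "):].rstrip("?").split("+")
        return str(int(left) + int(right))
    return "I don't know."

question = "What is 17 + 25?"
# Identical behaviour at the utterance level...
assert memorizing_system(question) == computing_system(question) == "42"
# ...so observing utterances alone cannot tell the two mechanisms apart.
```

The point is that the utterance level underdetermines the algorithmic level, which is exactly where the behaviourist and cognitivist assessments of the same output come apart.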

With this framework in mind, we can define which level-specific requirements a machine should meet to count as reasoning. For example, at the utterance level, you might want the machine to give responses that correspond to correct answers, thinking steps, justifications or explanations. At the algorithmic level, you might want to see the machine model causal relations that are then used to generate an answer. I am not in a position to give an exhaustive list of requirements, but I hope that the distinction between these two levels will prove useful in future discussions about the capabilities of machines.
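
Purely as an illustration, a couple of toy checks (the trace strings and criteria below are assumptions of mine, not anything a real system exposes) show how the two kinds of requirement come apart:

```python
def utterance_level_ok(answer: str, expected: str, justification: str) -> bool:
    """Utterance level: a correct answer accompanied by some justification."""
    return answer == expected and len(justification) > 0

def algorithmic_level_ok(mechanism_trace: list) -> bool:
    """Algorithmic level: the trace should show causal relations being modelled
    and then used to produce the answer, not a bare retrieval step."""
    return ("model causal relations" in mechanism_trace
            and "generate answer from model" in mechanism_trace)

# A system can pass the behavioural check while failing the mechanistic one:
print(utterance_level_ok("42", "42", "Because 17 + 25 = 42."))  # True
print(algorithmic_level_ok(["retrieve stored string"]))         # False
print(algorithmic_level_ok(["model causal relations",
                            "generate answer from model"]))     # True
```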