
Against human gullibility

Just prior to the coronavirus pandemic, the cognitive scientist Hugo Mercier published the book Not Born Yesterday, in which he argued that people are, generally speaking, not very susceptible to false beliefs or poor reasoning (Mercier, 2020). At the time, it was hard to ignore that 5G towers were being set on fire while Mercier claimed that people who subscribe to seemingly absurd ideas hardly ever act on them. More recently, Mercier has published a paper in which he again sets out his overall argument (Mercier, 2021).

Interestingly enough, the global events of the past year have not changed Mercier’s reasoning. His position remains that we can separate beliefs into intuitive beliefs and reflective beliefs: the former are actionable and based on direct observation or inference, while the latter are metarepresentations that are insulated from the cognitive processes that cause behavior. Absurd false beliefs are reflective, he says, and because such beliefs have few practical consequences, humans never evolved critical defenses to place them under scrutiny.

I do think that Mercier is on to something with this account. Certainly, when topics are close to our daily experiences, we can talk about them while substantiating our claims, asking critical questions and imagining alternative scenarios. Whether we are picking a holiday destination or interrogating a flatmate about the last time they did the dishes, we are pretty good natural reasoners. And indeed, this can fall apart in more abstract or distant domains, such as when we explore the possible health effects of newly constructed 5G towers. To me, it remains an open question whether this is due to a scarcity of substance to feed into the reasoning process or because there is a playful disconnect between such reasoning and reality.

Yet I haven’t seen good evidence that these different forms of reasoning also lead to categorical differences in beliefs. In his recent paper, Mercier supports his claim via an example that he also used in his book: self-professed believers in the pizzagate conspiracy theory hardly ever did anything more consequential than leaving negative reviews on the Google Maps listing of Comet Ping Pong, with the rare exception being a man who carried an assault-style rifle into the parlor. To him, this illustrates that believers rarely fully subscribe to absurd ideas; to me, it mostly shows that the line between intuitive and reflective beliefs can be crossed. Similarly, setting 5G towers on fire, refusing vaccines out of fear of gene therapy or trying to vote out the reptilian presidential candidate are all actual behavioral responses to apparently false beliefs. In other words, they cast doubt on the insulation of reflective beliefs.

Mercier’s other claim about the popularity of false beliefs is that they can serve a social purpose, distinct from an epistemological one. I think this is a very important consideration: it’s not just that conspiratorial ideation can offer you a community, but that sharing information about outside threats can boost your social standing among peers. This social motive can explain the spread of rumors (including fake news) without even considering belief in the rumors. Indeed, I suspect much of the fake news cycle is driven by non- (or semi-)believers pumping around false information that then occasionally finds its way into the minds of belief-biased receivers. Mercier claims that something similar happens with justifications: pre-existing beliefs create a market for reasons, so if you can supply reasons (even if they are not great), your social standing improves.

There’s more to be said about this, since being inaccurate can damage your reputation, too — as Mercier argued in his book, people who spread rumors often label their rumor as second-hand information to limit their culpability in case the news is exposed as false — but in general I think the social motive explains a lot. Still, this motive is not a strong argument against human gullibility: it merely replaces the idea of an easily impressed mass with one in which epistemologically detached but status-hungry individuals pass on information in long chains, occasionally shaping the beliefs of particularly biased receivers. Honestly, I would still call such a population susceptible to misinformation.

In reality, human groups are fortunately not like this. Through cognitive variation, some people are more inclined to question shaky reasoning than others. These individuals — who have a so-called high need for cognition — are likely to interrupt shaky information chains. Additionally, doing so can boost their reputation, as uncovering false beliefs is a strong signal of being trustworthy. Besides individual variation, there are also social structures that support human reasoning, whether through habits of thought, a public sphere or education providing important world knowledge. These structures help to generalize the skills that we use in day-to-day reasoning, so that we can employ them in more abstract, uncertain or distant domains.

Ultimately, I share Mercier’s conclusion that human groups are not gullible masses that are easily misled. However, I do not want to dismiss the significance of prevalent false beliefs — even if they seem to be behaviorally insignificant, they can suddenly become relevant and actionable. This is why we need education, norms and a public sphere that support, promote and strengthen human reasoning.


Mercier, H. (2020). Not born yesterday: The science of who we trust and what we believe.

Mercier, H. (2021). How good are we at evaluating communicated information? Royal Institute of Philosophy Supplements, 89, 257–272.