
Federica Russo: "Resharing can be an irresponsible act"

Dr. Federica Russo is a philosopher of science, technology and information at the University of Amsterdam. She recently started working on SOLARIS, an international research project that aims to understand the impact of new media and technologies on society. More specifically, she will be exploring the spread and consequences of deepfakes, synthetic impersonations that were built using artificial neural networks.

This video by Bob de Jong (2022) shows a deepfake impersonation of Morgan Freeman.

From the perspective of human cognition, deepfakes are interesting because they carry the risk of bypassing our mechanisms for spotting deception. Humans may be 'epistemically vigilant', meaning they do not believe just anything: they are sensitive to cues that signal trustworthiness and they take the identity and expertise of speakers into account when deciding whether or not to believe a message (Mercier & Morin, 2019). Since deepfakes can mimic or synthesize such cues and identities, they might (okay, they will) prove more persuasive than earlier media forms.

The question of how the introduction, development and spread of deepfakes will influence our reasoning – both individually and collectively – is therefore a timely one. For this Reasoning Report, Federica Russo will share her ideas on this, explain what project SOLARIS is about and advocate for reintroducing the moral dimension of reasoning and argumentation.


Project SOLARIS interfaces with political science, artificial intelligence, media studies and psychology, and you yourself are a philosopher. Can you explain what the scope of this interdisciplinary project is and what you will be trying to do?

It is a genuinely interdisciplinary project, in which we combine elements from the humanities, social sciences and computer science. We are also seeking public engagement through a citizen science approach. That's because we first want to determine why and when people trust deepfakes. This trust will not follow simply from technical features of the deepfake or from demographic or psychological characteristics of users, but will rather depend on a system of such features – including, for example, whether people receive deepfakes in a private message or find them on social media. Once we know which elements are important, we can think about interventions. We expect the answer to be nuanced – it won't be as simple as digitally literate people being less receptive to deepfakes.

Secondly, we want to understand how deepfakes travel on social media. We will not be deploying them on real social media – that would be highly unethical – but we will do simulations to get an idea of when and how deepfakes are shared, for which we have good mathematical models. Connected to this we will also explore mitigation measures, for which we have partnered up with stakeholders from the political arena. If there's a viral deepfake with a politically contentious message, does it help to get on the news and declare that there's a deepfake spreading? Is that an effective measure? Will people trust that?
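The interview does not detail which mathematical models SOLARIS uses, but sharing dynamics of this kind are commonly studied with simple cascade models on a network of users. The sketch below is a minimal, hypothetical illustration of that idea in Python – the toy follower network, the `share_probability` parameter and the seed account are assumptions made for the sake of the example, not the project's actual simulation setup.

```python
import random

def simulate_cascade(followers, seeds, share_probability=0.1, rng=None):
    """Toy independent-cascade sketch: every user who shares the item
    exposes each of their followers once, and each exposed follower
    reshares with probability `share_probability`. Purely illustrative;
    not the model used in SOLARIS."""
    rng = rng or random.Random(0)
    shared = set(seeds)       # users who have reshared the item
    frontier = list(seeds)    # sharers whose followers have not yet been exposed
    while frontier:
        next_frontier = []
        for user in frontier:
            for follower in followers.get(user, []):
                if follower not in shared and rng.random() < share_probability:
                    shared.add(follower)
                    next_frontier.append(follower)
        frontier = next_frontier
    return shared

# Hypothetical network: 500 accounts, each seen by five random followers.
rng = random.Random(42)
accounts = range(500)
followers = {u: rng.sample(accounts, k=5) for u in accounts}
reach = simulate_cascade(followers, seeds=[0], share_probability=0.15, rng=rng)
print(f"The item reached {len(reach)} of {len(followers)} accounts")
```

Even a toy model like this makes the point concrete: whether an item dies out or goes viral depends heavily on the resharing probability, which is exactly the behavioural lever discussed later in the interview.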

When evaluating messages conveyed about deepfakes, one problem may be that the deepfakes themselves can be persuasive. Synthetic humans can give all sorts of cues – prestige, trustworthiness, membership of our in-group. They can also mimic specific people whom we trust. Are we even equipped to reason properly if our senses are deceived at such a deep level?

A number of people are now investigating how deepfakes are changing our testimony practices – the ways in which we learn from and believe each other. Deepfakes may really bring an epochal change in how we justify beliefs, because they will affect mutual trust. You might think that looking at contextual cues – such as the source of a deepfake video – can help, but there are reasons to be careful about these, too. If a student gains access to my digital learning environment and then posts a deepfake video of me stating that the exam has been cancelled, the source may not look suspicious, but the message is still false.

We can end up in a situation in which there is no trust at all. The real danger then is not so much that we begin to believe everything, but that we no longer believe anything at all.

Is there any sort of reasoning or critical evaluation that could save us from being duped? Could we develop new cognitive strategies to better discern fake materials?

I genuinely don't know. Clearly, human beings have managed to make it through big changes, with various degrees of success. We will make it through this. But perhaps the more interesting question is what we can do now as educators – at university, but also much earlier in the learning process – to help prepare the current generation for the coming change.

Younger generations have a much quicker entry into any new digital thing. Yet older generations have the ability, the competence and also the responsibility to take a couple of steps back and to modulate how younger people get into new technology. Young people are much quicker and better users, while older people should be wiser and certainly should exercise their sense of responsibility and duty. Perhaps the older generation can give some guidance.

What kind of guidance would help in this case, then?

I hope we can teach the younger generations to have healthy behaviours online. With regard to democracy, this includes a critical attitude towards social media. Not to believe everything, sure, but also to understand the responsibilities we have, even if we are not the ones making deepfakes. For example, if you doubt whether something is a proper video to share online, then just don't. You could be starting a process that is quickly beyond your control. You can still show deepfakes to friends and discuss them or joke about them, but making them available for resharing can be an irresponsible act.

The underlying problem is that sharing an item online is an unclear social signal. Perhaps your Twitter bio states that retweets are not endorsements, but the moment you retweet, you give the signal that some item is worthy of attention. I think we need to make sure that younger generations understand these processes early in their online activities and presence. And this goes for any piece of information in the digital sphere, not just deepfakes.

🧠
Not resharing dubious content is good moral advice for the information age. It sounds easy, but resisting the urge to share fake news also means missing opportunities to signal your group identity and, as a recent study on online sharing behaviour suggests, might lead to social penalties (Lawson et al., 2023).

That's an interesting take. The less able we are to evaluate the authenticity of specific digital items, the bigger our responsibility to share wisely, lest we deceive others. If we consider thinking to be a collective exercise, then we also need to develop the moral habits that make such thinking go well.

It really is a moral thing. It is about responsibility and about understanding our own role in the digital sphere. In offline life, the borders are very clear. If I go to the supermarket, nobody cares that I am a university professor – I just need to pay at the end. In the online world, the borders are less clear and that can lead to confusion. You need to establish borders yourself.

For example, I had to decide that my online presence is my professional presence – I try not to mix things up. I could write about sports or complain that the Oscars were not given to the right movies, but what is my authority there? I didn't study cinema or anything like that. So I limit my role, but I have to actively do so. The digital environment will not set the borders for me.

We can also influence the shape of this digital environment. You can decide to share photos of your child's birthday only with a select group of people, for example. This does require an understanding of how the digital world operates and of what your moral role in this world is.

There already seems to be a retreat to more bounded spaces, into private chats and discussion rooms. Yet in a way these spaces are more vulnerable to deepfake technology. To speculate: a real-time deepfake inserting itself into discussion rooms for political or commercial interests could sabotage well-intentioned discussion.

I would like to see some more empirical evidence about such a retreat. I am not sure it happens enough, and I would like to see prominent people be better role models with regard to understanding and setting such boundaries.

As for the bad actors, this is again about setting boundaries for yourself. What are you expecting from a discussion and what do you accept? If it becomes uncomfortable, you leave, just as you would in an offline setting. Regulating or banning online spaces is undesirable, because these spaces also bring much good to the world. Instead, we should discuss new challenges with each other and cultivate virtues for this new age.  

But let's turn the argument upside down. On the one hand, we want to teach ourselves and each other how to detect signals of potential danger. On the other hand, we can also educate people not to voluntarily harm each other, and not to insert dangerous elements into the online world. We can become much more conscious about the potential effect of seemingly innocent acts like resharing or the design of an online environment. These are things we can teach computer scientists, engineers, our kids, everyone.

Norbert Wiener, who pioneered cybernetics, considered it first and foremost a moral philosophy. He was not after developing cybernetics per se, but cybernetics for helping human beings. There is a moral dimension there that is not just about safeguards or ethics as a watchdog. Rather, the common good is the reason why you develop technology. If we can't go back to that, I don't know what we can do.

Won't there always be people who, either through psychological disposition or calculated self-interest, step outside of such moral restraints?

Yes, which is why you need systems in place at all levels. The individual, the family, the school, government and broader regulation all need to reinforce moral behaviour. To use an analogy: in pre-school, some parents teach their children to protect themselves if another child is aggressive towards them and then report it to the teachers. Others teach their children to just fight back. There will always be children who are more rowdy than others, but if a majority taught the non-violent response, you would reach a critical mass through which the overall culture changes.

Obviously, there will always be malicious or careless designers. Yet if there is a strong bottom-up willingness to improve the morality of our lives and if the right systems are in place, then we can be collectively resilient against such actors.

We keep circling back to the foundational role of morality. Yet education as it stands emphasizes the cognitive and the intellectual, at least in the later stages of learning. Critical thinking training is often about argumentation and evaluation, distinct from moral considerations. Are we still preparing our children, pupils and students for the right world?

This is the next challenge, for sure. We have disconnected the epistemic and cognitive dimension of knowledge from its moral dimension. The dominant culture in philosophy is adversarial and critical – the game is to spot weak points in texts and then destroy them. There is no space for charity, collegiality or anything like it.

Yet thinking, reasoning or producing an argument are actions and all actions carry moral values. I have taught a course on argument-checking with Jean Wagemans, which emphasized this ethical dimension of argumentation, both when analysing and when producing arguments. Producing an intentionally aggressive argument can be worse than punching someone: if I punch someone, perhaps they have five minutes of pain, but if I put an aggressive argument online, it remains there forever and can damage someone permanently.

This connects to the use of technology. Instead of letting technologies emerge and then becoming anxious about how to regulate them, we can also in an early stage ask why they are being developed in the first place. There might be technical reasons, but we should also ask for moral arguments. If something potentially harmful is developed for the purpose of amusement, that is not a very good reason!

Educators are key here – they have an important social role to play. We really need to reintroduce the moral dimension of reasoning and argumentation.

References

Lawson, M. A., Anand, S., & Kakkar, H. (2023). Tribalism and tribulations: The social costs of not sharing fake news. Journal of Experimental Psychology: General. doi:10.1037/xge0001374

Mercier, H., & Morin, O. (2019). Majority rules: How good are we at aggregating convergent opinions? Evolutionary Human Sciences, 1, E6. doi:10.1017/ehs.2019.6


Reasoning Report is a monthly newsletter, sent directly to the inboxes of subscribers. In addition, subscribers can comment on all posts on the Connecting Cells blog. If you're interested in receiving the Reasoning Report, you can subscribe using the button below.


Further reading

Interested in more by Federica Russo? The papers below connect well to the topics discussed in the interview.

Brave, R., Russo, F., & Wagemans, J. (2022). Argument-checking: a critical pedagogy approach to digital literacy.

Russo, F., Schliesser, E., & Wagemans, J. (2023). Connecting ethics and epistemology of AI. AI & SOCIETY, 1-19.