
By Rashmi Guha Ray *

‘AI regulation is going to be the most contentious topic in the coming years, as it will have a huge impact on a planetary scale’

Sahana Udupa is Professor of Media Anthropology at Ludwig-Maximilians-Universität (LMU) Munich and Principal Investigator of For Digital Dignity and the projects ONLINERPOL and AI4Dignity. She is one of the leading figures in research on hate speech in digital spaces. She visited Deeplab on 21 November 2023 to deliver a talk on ‘Templated Sexism: Digital Influencers and Gendered Extreme Speech in India’. Deeplab PhD candidate Rashmi Guha Ray spent some time with Professor Udupa talking about artificial intelligence, hate speech in disparate geographical contexts, and the beauty of ethnographic research.

Rashmi: At a time when concerns are being raised about the use of artificial intelligence (AI) in academic pursuits, how do you think AI will influence political content generation?

Sahana: AI has already begun to shape political content generation as well as academic research, the latter being a very different field of course. In the space of disinformation research, many scholars are looking into the role of AI-assisted tools in augmenting the conditions for creating and circulating disinformation and problematic content. Recent research has shown that AI-generated profile pictures are playing a very big role in disinformation campaigns. For a long time, anti-disinformation activists used reverse image searches to verify the authenticity of a sender. But now, with these AI-generated profile pictures, which look really convincing, this mechanism of verification is challenged. So inauthentic content is now delivered with seemingly authentic images, and that is one role AI is playing.
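To make the verification mechanism concrete, here is a minimal sketch of how a reverse image search can be approximated with perceptual hashing. It is an illustration only, not the pipeline of any particular fact-checking tool; the file names and the reference index are hypothetical. The point it demonstrates is that a profile picture lifted from an existing photo matches a known image, while a freshly generated face matches nothing, so the check comes back empty.

```python
# Illustrative sketch: reverse image search via perceptual hashing.
# The reference index and file paths below are hypothetical placeholders.
from PIL import Image  # pip install Pillow
import imagehash       # pip install ImageHash

# A "known images" index, as a reverse image search engine might keep:
# (perceptual hash, source URL) pairs for previously seen pictures.
known_images = [
    (imagehash.phash(Image.open("stock_photo_1.jpg")), "https://example.com/stock1"),
    (imagehash.phash(Image.open("stock_photo_2.jpg")), "https://example.com/stock2"),
]

def find_source(profile_picture_path, max_distance=8):
    """Return likely sources whose hash is within a small Hamming distance."""
    query = imagehash.phash(Image.open(profile_picture_path))
    # Subtracting two ImageHash objects yields their Hamming distance.
    return [url for h, url in known_images if query - h <= max_distance]

# A reused photo usually yields a match, exposing the account as inauthentic.
# A unique AI-generated face has no near-duplicate anywhere in the index.
print(find_source("suspicious_profile.jpg") or "no match - verification fails")
```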

With digital mediation, the sexualisation of political discourse has also become rampant. The availability of various morphing techniques and access to pornographic tropes have made it easier to produce and disseminate gendered extreme speech, especially against female politicians and public figures.

It is a given that in the wrong hands, AI could be a tool to amplify problematic content. But at the same time, in our field of work, we have engaged with machine learning models to detect hateful expressions and the different forms of extreme speech. I run a project called AI4Dignity, which is a collaboration between ethnographers, fact checkers, and AI developers. It covers four countries – Kenya, India, Germany, and Brazil – and we have tried to see if we can generate context-sensitive training datasets for AI models to pick up context-rich and relevant hate speech passages. Here we do not use the term ‘hate speech’ but ‘extreme speech’, and we have created an interface where people can input passages in English, German, Swahili, Brazilian Portuguese, and hybrid expressions. In this project, we examined the limits and potential of AI in detecting problematic content. We found vast discrepancies in corporate content moderation, although the very deployment of AI is now seen as necessary because of the volume of data generated every second. We have proposed ‘ethical scaling’ as a framework to push for people-centric processes in AI-assisted content moderation and to simultaneously question the corporate hunger for data. This framework highlights the importance of regulating AI.
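As a rough illustration of what a context-sensitive training dataset feeds into, here is a minimal sketch of a classifier trained on community-annotated passages. It is an assumption-laden toy, not AI4Dignity's actual model or label scheme: the labels and passages below are invented placeholders, and character n-grams are chosen only because they tolerate the code-mixed, hybrid expressions mentioned above better than single-language word tokens.

```python
# Minimal sketch: a context-sensitive extreme speech classifier trained on
# passages labelled by annotators who know the local context. The labels and
# example passages are invented placeholders, not AI4Dignity's actual data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Community-annotated training passages (placeholder strings).
passages = [
    "placeholder derogatory passage 1",
    "placeholder exclusionary passage 2",
    "placeholder neutral passage 1",
    "placeholder neutral passage 2",
]
labels = ["derogatory", "exclusionary", "neutral", "neutral"]

# Character n-grams cope better with code-mixed / hybrid expressions
# than word-level features tied to one language's tokenisation.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 5)),
    LogisticRegression(max_iter=1000),
)
model.fit(passages, labels)

# New passages submitted through a project interface would be classified
# here; disagreements would go back to human annotators for review.
print(model.predict(["another placeholder passage"]))
```

The design point this toy makes is the one the project emphasises: the value lies less in the model architecture than in who produces the training labels and how much context they carry.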

R: In that case, who do we trust with the regulation process?

S: AI regulation is going to be the most contentious topic in the coming years. Industry leaders are already in conversation with government leaders across the world, and global summits are being organised to this end. The EU has now promulgated the AI Act, which is going to be a path-breaking regulatory framework for AI-related issues. It is not going to be easy, because corporations are motivated to monetise AI technologies and garner maximum profit. It is also going to be a huge challenge for conscientious stakeholders to rein in these aspirations and ambitions, which are primarily driven by market interest. A sound regulatory body is the need of the hour, and the state needs to be involved. But to ensure that there is no misuse of power by the state, there should be multi-stakeholder involvement, with civil society taking an active part in these conversations. This community-centric approach is very important in our opinion. The UN should also play a major role in bringing member states together, because this technology can have a huge impact on a planetary scale.

R: You have vast research experience in both the Global North and the Global South, spanning Germany, India, and more recently Brazil. How do you think extreme speech differs in these two worlds?

S: As an anthropologist, I have to admit that there are similarities as well as differences. The differences emerge from the vastly different, historically fraught contexts that define these worlds. One cannot think of the context of extreme speech in the same way in Germany as one would for India or Brazil. A good example is the way hate speech is regulated in countries like Germany. Here, the content alone can be deemed illegal; for example, Holocaust denial is illegal speech. Regardless of what impact the speech could have, or whether it has actually incited violence, the content itself can invite legal action. This is a very different regulatory framework compared to countries where there is much more leeway in what you are allowed to say. The other important factor is the difference in political party systems – whether it is a multi-party system or whether the democratic setup has been captured by some kind of autocratic regime. These differences need to be kept in mind while tracking extreme speech and understanding its impacts.

The third important factor is the target group, and here we see both similarities and differences. In Germany, the bulk of extreme speech today is targeted against immigrants, alongside widespread Islamophobia. We have also documented gendered extreme speech. In India, religious minorities are the biggest targets, along with people who are critical of those in power. In Brazil, we have seen right-wing groups mobilising and attacking people of colour. These differences can be attributed to the disparate historical contexts in which these practices emerge.

However, what is intriguing is the commonality, something we refer to as the global conjuncture of extreme speech. We are talking about the role of social media companies and platforms that are equally available in all these countries, and this has given rise to shared templates and practices. Right-wing exclusionary discourses today bear a striking similarity: a Trump supporter will use the exact language that a Bolsonaro supporter uses. These linguistic practices, as well as shared formats, have become an important means of conveying xenophobic and exclusionary sentiments.

R: What advice would you give to the budding ethnographers at Deeplab, or to any early-career researcher embarking on ethnographic fieldwork?

S: Ethnography is an art of gaining and sustaining trustful relationships. These relationships are important not only during fieldwork but also after it. There is never a completion of your ethnographic venture. If, for instance, you were to do content analysis, you would extract a corpus, apply certain analytical tools, and arrive at a set of findings. Ethnography, however, is a continuous journey, and that has to be kept in mind when we forge relationships, because often we bank on our existing relationships to enter the field.

In digital ethnography, the continuity of this journey becomes even more pronounced because the field ‘haunts’ us no matter where we go. Unlike physical field sites, we can never exit our digital fields completely. In Digital Unsettling, a book that I have co-authored with Gabriel Dattatreyan, we talk in detail about what happens when ethnographers begin to navigate social media as a research tool and as a medium to communicate research.

Secondly, it is essential for ethnographers to make active decisions to pause fieldwork, process observations, and devote time to reflection.

The final thing about ethnography, which in my opinion makes it a beautiful methodology, is our reflexivity. It is important to acknowledge and reflect on our own positionality and not take all of our observations for granted. What we call research data is rife with human stories, and often, in our case, with agony. This needs to be explored and thought through as we analyse the lifeworlds of our research partners, entangled with our own.

*Rashmi is an ERC doctoral researcher at the UCD School of Geography. She holds an MA in Conflict, Governance and Development from the University of York, UK, and an MA in History from Jadavpur University, India.

Updated version published on 15 April 2024

Image source: LMU Newsroom – https://www.lmu.de/en/newsroom/news-overview/news/proof-of-concept-grant-for-sahana-udupa.html