By Dr Marina Frid*
‘Pessimism is a privilege for those who can afford to despair.’
Professor Payal Arora visited DeepLab on 17 April 2024 to give a distinguished lecture in the Deep Thoughts Seminar Series on Feminist Design Principles for Global Digital Work. She is a Professor of Inclusive AI Cultures at Utrecht University and Co-Founder of FemLab, a feminist future of work initiative. She sat down with DeepLab Associate Director Dr Marina Frid for an interview about her research path as a digital anthropologist, her experience working on projects with public and private sector stakeholders, and how a feminist approach contributes to understanding technology and work by shifting away from pessimistic perspectives.
Marina Frid: You define yourself as a digital anthropologist. So, I want to begin by asking you about your research path in anthropology and how you came to study digital technologies.
Payal Arora: I have spent more than two decades looking at how young people make sense of digital media in their everyday lives, and I have a track record of being part of larger practices in the making – real-life projects that demand a stakeholder consortium to enable massive changes in the local society, particularly in addressing inequality. So, for instance, my first book was built around a project in the south of India about digitizing an entire village through what they call the India Stack, which proposes to make information integration seamless. There was a lot of focus on how the digital can reshape the way these villages move from being so-called backward to leapfrogging into the future and, you know, all that policy hyperbole. The project was driven by the World Bank and Planetary, a think tank, alongside Hewlett Packard and other actors. Another project I got involved with, in the Himalayas, was on medical diagnostics – because there is a huge shortage of doctors in most villages around the world, particularly in the Global South – and it involved a combination of ministries, Hewlett Packard, and a medical diagnostic software company in the US.
So, over the years, I have worked in these kinds of arrangements of public and private sector actors. I usually function as an action researcher, acting as a liaison between them, doing knowledge capture, but often also implementing and translating insights by putting at the centre those who are most marginalized and who will be most impacted by the way these major digital projects get deployed. That has been very much my preoccupation.
Now, as you know, technologies have shifted. It used to be about computers, then there was a big era of mobile phones, and everything was ‘m-everything’ – m-health, m-learning, and so on, right? Then, it became all about the digital in terms of the internet, then big data, then algorithms, and now it is about data sets and AI. But, at the core of it, it is about understanding human behaviour and how humans meaningfully integrate technology into their everyday lives. How do they assess risks? How do they build relationships? In today’s terms, that is why I started to speak increasingly about being a digital anthropologist, because much of our lives – work, love, play, and everything in between – now plays out on platforms. And that became even more apparent during COVID times, when we really experienced [platformization]. Still, that does not mitigate the fact that these are mediating technologies. At the end of the day, it is our lived experiences with these technologies that count.
MF: You mentioned love and play. Your research and books like The Next Billion Users: Digital Life Beyond the West (Harvard University Press, 2019) discuss leisure as a significant factor in the widespread adoption of technologies such as the internet, computers, smartphones, or social media. And you also discuss how aid agencies and the Global North, broadly speaking, often fail to understand that. Why do you think that is?
PA: I think the problem is that the term leisure seems to trigger a belief that we are undermining very serious issues of poverty, exclusion, and marginalization, as if we are trivializing that state of being. But on the contrary, if you really delve into the politics of leisure – into Leisure Studies – you see that leisure is one of the most fundamental acts and spaces within which human beings have self-actualized. Leisure is unstructured time and space within which people, particularly marginalized populations, have been able to play with the rules of the game. You see that in the way people claim unmarked spaces for protest – historically, in urban parks, from the creation of Speakers’ Corner in the UK and beyond to fights for workers’ rights. Many of these major movements have taken place in public parks, public squares, unstructured spaces meant for families to stroll around. What the state wants to do is create safety valves for populations. Historically, many of these leisure spaces were created when states realized there was deep urban congestion and that they needed safety valves for the public so it would not, you know, get into a sort of mutinous state. But these spaces also became instrumentalized by those from below to speak or, for instance, in Chinese parks, to take on certain kinds of practices, religious practices forbidden by the Communist Party.
In today’s terms, the digital is a sort of new urban park. It is a kind of leisure space that is relatively unstructured. And again, the state is using it as a safety valve, like the famous cute cat theory – I think it was [Ethan] Zuckerman who said the internet is all about cats, right? It seems facetious, but, actually, if most of the time you are using these spaces for play and sociality, it is easier to infuse them with your aspirations, your concerns, your politics, and it is even more difficult for the state to censor you. Because when it censors that, it is censoring an entire lifestyle and livelihood. Basically, you are telling them, “You cannot share cats.” And that is the political line you cross that can create havoc, because, most of the time, yes, people use digital technology to build relationships and do non-political things. This complicated hybridization of play and protest gives a lot of power – safety in numbers and safety in obfuscation. So, I have been pushing forward the idea that [acknowledging digital leisure], on the contrary, is not undermining poverty but creating a space within which the global poor can effectively exercise their rights, mobilize, and potentially create real social change from within and from below.
MF: How do you think this lack of understanding on the part of companies or aid agencies can be detrimental to tech design? Or, have you seen it as harmful to tech design in your work?
PA: Firstly, I have been working with stakeholders for so long, for decades, and you need to build a little empathy for why they think in those terms, right? If you are in the aid agency sector, you must show concrete socio-economic deliverables. So, it must be very utility-driven, and usually, funders are asking, “How is X technological intervention enabling jobs, education, or health care?” So, the questions are fundamentally not about what people are doing with these tools but “tell me exactly how this tool is going to cause this.” What we found is that, oftentimes, [aid agencies] have little choice but to report towards the funders’ particular request, even though much of the data points towards other practices. For instance, that project in the south of India, which I was involved in alongside the World Bank, Hewlett Packard, etc., was trying to capture how the community could leapfrog into the modern era through jobs, education, and so on. But most of the time, people were using [the project’s] computers as a television, and kids were using the kiosks for video games. In fact, they literally would call the project’s vans, which we used to move from village to village, movie vans. They thought it was entertainment. However, that did not get captured [in the report] because aid agencies get worried that they will not receive further funding if they seem to be wasting the funder’s money, right? So, what they do is create a fictionalized report, which focuses only on the deliverable and further exoticizes these populations as, you know, people who are ‘virtuous poor,’ as I say. They, somehow, unlike our kids, will mainly learn chemistry, biology, or mathematics. But the average everyday Global North kid will not, right? Anyone who has a child knows that, if you give a kid a mobile phone, they are not looking at biology and chemistry. They are playing games and socializing with their friends. Most of the time, they are engaged in those kinds of so-called non-instrumental usages. But, on the other hand, we also know today that much of this non-instrumentality is easily repurposed for instrumental purposes because you learn how to navigate technologies, you start to build relationships, and you may learn about and get inspired by new forms of work. We now have massive amounts of evidence about that, including in gaming communities, which require a lot of team-building and organizational work, tactics, strategies, and the like. So, multiple digital literacies get built, recirculated, and repurposed.
On the other hand, with tech companies, it is a different ballgame. They are very much tuned into the fact that most of what people do is oriented toward digital leisure because they have the numbers, right? So, you have, for example, Jio, which is a multinational telecom service backed by Reliance in India, and their main marketing strategy is the ABCD strategy. They saw the data and realized much of what Indian people do with internet data is pivoted towards four directions – Astrology, Bollywood, Cricket, and Devotion content. So, they were like, “Okay. So, this is how we get them to use the internet and stay on the internet.” Hence, companies are much more open because they want to make a profit. They are not as ideological as, say, think tanks or even academics in, for example, development studies, who are deeply ideological about it. I think, when you bring them together, which does not happen as often, you can come up with some real ground-breaking ways in which we can move ahead together.
MF: Shifting a bit, you are a co-founder of FemLab. Could you tell me a little bit about FemLab and why the feminist approach to work and technology?
PA: Usha Raman, who is at the University of Hyderabad, and I co-founded FemLab in 2020. We received generous funding from the International Development Research Centre (IDRC) just in time for the pandemic! It was a very interesting opportunity to be able to hire a wonderful team in India and Bangladesh to examine six different sectors – among them salon services, sanitation, construction, gig ride-hailing, and artisan services – and see how they were all being platformed. We focus particularly on women workers because, at this point in time, they are going through a double marginalization. Since 2018, there has been a decline in women’s workforce participation, particularly in India. And, as you can imagine, the pandemic amplified that statistic. As work gets increasingly ‘platformized,’ we know there is a significant difference in access and usage between men and women, driven not so much by affordability as by cultural norms that now dictate access and usage. In the last decade, things have changed quite remarkably in terms of the economics of access. Data plans and mobile phones have become increasingly cheap. But culture is a sticky factor, and it is usually very patriarchal. So, for women, being online may taint their reputation if they put their profile up. You could be considered a “loose character”; it can impact your marriage prospects. People will start to gossip about what you are doing. If you are in the ride-hailing sector, you could be interacting with strangers. And there is a whole feminist literature on women engaging with strangers in public spaces, particularly in the Global South. So, what FemLab does, intrinsically, is focus mainly on women workers who are at the margins and on other marginalized groups. We, of course, look at the intersectionality of their identities and their positionality and how they engage with digital tools to give themselves a little more freedom to self-actualize, build livelihood opportunities, collectivise for better work conditions, and other such foci for their well-being.
It is very important not just to focus on the concerns and the risks. We want to move in a much more positive direction regarding what these women are doing that is giving them agency, and what kinds of design interventions and policies work for them, so we can also focus on what is working. Because academia tends to be so focused on what is wrong and what is breaking. We have invested so much in the critique and far less in identifying what works and how we institutionalize it as a default mechanism in design, so we do not have to reinvent the wheel. And how do we globalize it, right? So, FemLab is, basically, about putting the insights of marginalized women workers at the centre and translating these insights into actionable inclusive design interventions and policies. We work closely with interdisciplinary groups from many parts of the world, like Brazil, Kenya, Bangladesh, and India. We have lawyers, artists, activists, feminists, and anthropologists. That makes for a very interesting group that takes on board and attracts different kinds of funders, partners, and affiliates, as you will see on our website. I think that makes FemLab stronger and reflects how I have always operated. If you are not working in a sort of stakeholder formation, they will not buy into it. At the end of the day, we need to sit with people we do not quite like. We must work with tech companies and, in particular, identify the critical change-makers from within, because there are still good people in bad systems. And I think, in academia, we often clump people together and make them into monoliths – ‘the state,’ for example, is a fascist state or an authoritarian state. No, actually, there are many good people within governments at diverse levels, from women city councillors to village officials, who do not reflect the ideologies of, say, the national federal government. And, if we do not recognize that, it means we are very naive and have not done any action-oriented work at the ground level. Nobody who has done that kind of work can look me in the eye and say that all government officials in even the most authoritarian context are deeply authoritarian. Right? Not all people in technology companies want to oppress and control the world and extract all the data for their own profiteering. We need to have a certain kind of moral honesty and humility in academia to recognize that these are complicated scenarios. We need to be more political, in a sense, deploy soft diplomacy, and take the leadership, because we are one of the few stakeholders that does not have a vested interest. That is also why we set up and nurture FemLab, because we think this is part of our civic duty. We are part of a public university system funded by taxpayer money. We need to take the lead rather than come up with a doomsday scenario, as I discuss in my next book.
MF: You gave me a good cue. Your next book has an intriguing title, From Pessimism to Promise (MIT Press, September 2024). What can you tell us about it?
PA: The book is about how we must take lessons from the Global South on building inclusive tech. I am criticizing those who are deeply vested in perpetuating a very singular, binary narrative that says technologies are intrinsically racist, oppressive, sexist, extractive, or colonialist – a narrative that demands we break down and control tech, and suggests that agency means being against the machine and against the state that extracts from us. As you can imagine from what I have just said, that is not how I see the world. On the contrary, we need to engage with them [tech companies and governments]. That is how you see possibilities of self-actualization. Much of the critique comes from the Global North, particularly in academia, and more so in the humanities and social sciences. We have a binary way of thinking within academia that is a significant obstacle to the interdisciplinary work that is in demand right now. So, that is one layer.
Second, that binary does not allow us to see that technologies are also instruments that enable us to fulfil our aspirations. It is hypocrisy to complain about it on social media with a massive follower count, as we wear our Apple Watches, unable to live without the internet even for an hour. Right? You must get online. And then we expect the Global South not to have the same kinds of aspirations. So, what I argue is that pessimism is a privilege for those who can afford to despair. Optimism is the only choice for the rest of the world, who have a deep and desperate longing for an alternative future here and now – not for their children, not for their grandchildren, but for themselves. And we see that reflected in the way young people see digital tools. Anyone who goes to the Global South sees that they are excited [about digital technologies]. And it is not because they are naive about the risks and harms. Many of them understand them. But it is always weighed in relation to their current socio-economic situations, their materialities, positionalities, options, and choices. They see the risks they are taking as a trade-off but also as a way of demanding better systems. These instruments, if used well, can empower us. That is why there is little room for the current pessimistic discourse, and that is why the feminist approach: it demands this shift away from binary thinking. “Are you an optimist or a pessimist?” is an irrelevant question. So is “Are you for tech, or are you against it? Which camp do you belong to?” The good thing about the feminist approach is that it puts at the centre values such as care and collective and creative agency, not just bottom-line profit. We demand, and we see, that the only way we can move towards sustainable and nurturing well-being for all of us and the planet is through collective action, including with various stakeholders.
MF: Finally, you were talking about interdisciplinarity. What can anthropology offer in this scenario of the increasing pervasiveness of digital technologies in our lives?
PA: Anthropology is intrinsically about understanding cultures of practice. And within a culture of practice, we get to understand why people do what they do. Other kinds of data collection and methodologies tell us what is happening. And then it may seem like a pessimism paradox: how come technologies are so oppressive and colonialist, and yet, what is wrong with these people? Why are they still using them? Why did they not get off Facebook after the Cambridge Analytica scandal? Since Snowden’s revelations became public, why are people still using mobile phones? Why not give them up? What happens is that these hanging questions, these paradoxes, linger, as they are the most complex, confounding issues. And anthropology unpacks them by understanding that people make their decisions on an everyday basis for certain kinds of reasons, which are tied very much to a sort of everyday rationalism, a rational optimism, which gets manifested in decisions like: well, I will allow my child to play a game; I am going to put up my profile photo, even though I am going to get misogynistic comments, because I really want to share these things and I know there will be groups that will value them. We are constantly making these sorts of decisions and assessments of risks and opportunities. And, if you do not build that empathy by understanding why people do what they do, then we replace it with convenient narratives, which get perpetuated, particularly on a very condescending level about the majority world. These narratives say, first, “They are surely not literate.” “They do not know.” “They have no clue.” So, “these naive, illiterate, poor people have no idea they are being subjected to surveillance machines and being oppressed, right?” That is a very singular and popular narrative, even amongst academics. Then, it becomes all about literacies: if only we provide literacy to all of them, then they can finally see how oppressed they are and they will give up technologies – while, of course, we continue using them.
Then, there is a second view: it is considered a state of learned helplessness. “They are stuck.” “They have no choice but to be stuck on a platform.” In fact, that is the title of a book by a self-proclaimed pessimist, Geert Lovink, of the Institute of Network Cultures. He has written a lot of these books saying, “Well, they literally are crying, but they have no choice” because – and this is a full body of scholarship – the state demands that you are online. And it is true that being offline is increasingly not an option. But, on the other hand, to say that people are completely paralyzed and helpless because they are trapped in this digital identity system? Excuse me, we all have digital identity systems, and there is a reason for that. Yes, we are visible to the state. But we also get to ask the state to be accountable to us. Through that identity, we also get access to rights and benefits. It is a two-way path. I think these are the kinds of [pessimistic] narratives that get perpetuated when we speak in binary terms.
*Dr Marina Frid is a UCD Research Fellow in the School of Geography at University College Dublin, acting as Associate Director of the Digital Economy and Extreme Politics Lab (DeepLab) and Co-Coordinator of the ERC-funded WorkPoliticsBIP project.