AI as colonial knowledge production: The resistance begins here
In recent months, both the tech press and general media have been full of hype about artificial intelligence (AI), and specifically ChatGPT and rival advanced AI programmes. These promise to produce knowledge in radically new ways that threaten to bypass human agency, while demanding our attention.
There are good reasons to fear AI’s consequences for institutions such as universities, whose main asset is their ability not only to disseminate knowledge, but also to produce new knowledge in the form of research.
But universities can also be a key site of resistance to the imposition of AI across society. Indeed, unless universities take seriously their responsibility to resist the uncontrolled rise of AI, the prospects are bleak for the idea of socially produced knowledge on which they, as institutions, depend.
That AI has a major role to play in science, for example, in sequencing the genetic code of dangerous viruses or finding patterns in vast sets of environmental measurements, is beyond question. What is at issue is whether humanity should allow AI to be pushed at us as if it were a magic solution to all our needs and problems.
AI is a form of media – a technologically based way of mediating our relations with the world – and myth-making about new media is certainly nothing new. But AI is a complex case, because of the hype encoded in its very name and conception. AI, as Evgeny Morozov recently argued, is neither fully artificial nor reliably intelligent.
Transpose this point into the much longer debate about knowledge as a tool of global power since the beginning of historic colonialism just over five centuries ago, and the discourse around AI by business, including the businesses that are universities, takes on another, more disturbing, aspect.
Data colonialism
I approach this question through the framework of data colonialism, which I have developed over recent years with Ulises Mejias. There is no space here to outline this theory in detail.
Suffice it to say, our core idea is that the drive in recent decades to extract data continuously from everything, including every dimension of social and natural life, is a feature not just of contemporary capitalism, but a new stage in the evolution of colonialism.
Whereas historical colonialism seized land and the resources (human or otherwise) needed to exploit it, the new data colonialism takes life itself, extracting value from it in the form of data that can be sold by, or just stored within, corporations and governments.
That colonialism should, in the 21st century, take on this new form as part of capitalism’s continuing expansion, seems less strange when we remember that capitalism itself first emerged in the 18th century from the profits of historical colonialism’s huge asset grab two centuries earlier.
For sure, the concept of data colonialism remains controversial, but suppose you grant it as a possibility. Then AI and the discourse of Big Data appear in a very different light. They can be seen as an account of knowledge that justifies and legitimates the endless extraction of data from life by business and government – power that is very largely located in key centres in the Global North.
Once again, we are not objecting to the use of AI tools to solve specific problems within clear parameters that are set and monitored by actual social communities. We are objecting to the rhetoric and expansionist practice of offering AI as the solution for everything, a solution whose inevitable precondition is humanity offering up its lives for data extraction.
Indeed, fashionable AI projects like ChatGPT can be understood as directly colonial, because they depend on treating the whole of humanity’s cultural production to date as their free input, as author and artist James Bridle has recently argued.
Universities can fight back
Grant this possibility, and it is clear that universities, which until now have depended on a different human-led model of knowledge, can become important sites of resistance to this next colonial phase of knowledge production.
While Big Tech companies stand to benefit from AI, the university culture of face-to-face knowledge production is potentially a big loser, except for members of the coding elite who write and implement AI programmes.
Staying loyal to this possibility of resistance within the university, Ulises Mejias and I, with our Mexican colleague Paola Ricaurte, founded a network of activists and scholars called Tierra Común nearly three years ago. It operates in three languages – English, Spanish and Portuguese – with a special but not exclusive focus on Latin America.
Its goal is to further resistance to data colonialism by supporting community-led practices of resistance based on alternative visions of knowledge production in society.
Although the pandemic interrupted our work, we met physically for the first time in Mexico City in December 2022. Our goal is to build bridges between academic institutions and activist practice, listening closely to activist agendas and frameworks, and sharing our own as freely and openly as possible.
This is not, of course, the first time that universities have opened up their work to wider audiences. We follow in a long tradition of similar work, not least in Latin America by philosophers Paulo Freire, Ivan Illich and others.
But it seems particularly important to renew this tradition at a time when a very different model of information and knowledge – artificial intelligence – is vying for dominance. In this context, for sure, writing in academia’s traditional formats is not enough.
Nick Couldry is professor of media, communications and social theory in the department of media and communications at the London School of Economics and Political Science in the United Kingdom.