I have just returned from a wonderful trip to Argentina, where I was invited by the Institut Français to take part in several events.
On 12 May, I participated in the conference “Information Manipulation and Foreign Interference: Global Challenges and Democratic Responses”, organised by the Delegation of the European Union in Argentina together with several embassies (including that of France). In the panel “How to counter disinformation while respecting freedom of expression and the right to information”, I discussed how disinformation is funded through the advertising market that sustains the entire web, and the need to regulate this market. I also stressed the importance of strengthening scientific research on this topic – as in the AI4TRUST project, funded by the EU itself.
The following day, I held two meetings with journalism students, at the Universidad Nacional de Avellaneda and at the Universidad Abierta Interamericana, also on topics related to online disinformation. It was an honour and a pleasure to see, in both cases, a full room and a great deal of interest. The students asked many questions and showed a strong willingness to learn and progress.
On 16 and 17 May I took part in three panels organised as part of the Night of Ideas (Noche de las Ideas), a yearly initiative of the Institut Français, which this time took place in the famous Teatro Colón in Buenos Aires. I spoke at the opening session on this year’s theme, “The power to act”. I also took part as a speaker in a very interesting debate on work on digital platforms, provocatively titled “New voluntary servitudes? Youth and precarity”, and in another on artificial intelligence, “In A.I. we trust? Acting with and against new technologies”.
On 20 May, I gave a talk on “The future of work and AI” as part of the UBA Digital series at the Universidad de Buenos Aires. I presented some results of my research on digital labour and its role in the production of AI, carried out within the DiPLab research programme. Once again, I was delighted to see many participants with very interesting questions. We were hosted by the Faculty of Dentistry and also had the extraordinary opportunity to visit its clinic.
Workers in Venezuela are powering AI production, often under tough conditions. Sanctions and a deep political-economic crisis have pushed them to work for platforms that pay in US dollars, albeit at low rates. They constitute a large reservoir for technology producers from rich countries. But they are not passive players.
They build resilience, rework their environment, and sometimes engage in acts of resistance, with support from different segments of their personal networks. From strong local ties to loose online connections, these informal webs help them cope, adapt, and occasionally push back. Their diversified relationships comprise an unofficial and often hidden, albeit largely digitised relational infrastructure that sustains their work and shapes collective action.
These findings invite us to rethink agency as embedded in workers’ personal networks. To respond to adversity, workers must liaise with equally affected peers, with family and friends who offer support, and so on. Social ties ultimately determine who is enabled to respond, and who is not; whether benefits and costs are shared, and with whom; and whether a solution will be conflictual or peaceful. Social networks are not accessory: they constitute the very channel through which Venezuelan data workers cope with hardship.
Not all relationships play the same role, though. Venezuelans discover online data work through their strong ties with family, close friends, and neighbours. To convert their online earnings into local currency, they rely on their broader social networks of relatives and friends living abroad and indirect relationships with intermediaries. For managing their day-to-day activities, Venezuelans expand their social networks through online services like Facebook, WhatsApp, and Telegram, connecting with diverse and less-close peers within and outside the country. Different social ties affect the various stages of the data working experience.
Overall, no Venezuelan could work alone – and the networked interactions that sustain each of them against hardship have made them massively present, as ‘uninvited protagonists,’ on international platforms. Their massive presence in the planetary data-tasking market is a supply-driven rather than demand-driven phenomenon.
This analysis also sheds light on the reasons why mobilisation is uncommon among platform data workers. Other studies noted diverging orientations of workers, unclear goals, lack of focus, and insufficient leadership. Another powerful reason hinges upon the predominance of weak ties in building up online group membership: indeed, distant acquaintances are insufficient to prompt people to action if their intrinsic motivations are low.
The conference reaches Italy this year. It will take place at the oldest university in the Western world, Bologna, on 10-12 September 2025.
The overarching topic of this year’s conference is ‘Contesting Digital Labor: Resistance, counteruses, and new directions for research’. The goal is to explore how platform workers navigate, challenge, and reshape algorithmic management systems while forging innovative forms of solidarity and collective action. We also aim to explore the perspectives that technological developments open for workers in order to escape everyday surveillance, to resist top-down control and to organise to defend their rights.
In addition to presentations that directly address these questions, we welcome proposals that analyse a broader range of issues related to digital labour.
Another article has just been published! It stems from a DiPLab group collaboration (with A.A. Casilli, M. Fernández Massi, J. Longo, J. Torres Cierpe and M. Viana Braz) and uses data from multiple countries. It is entitled ‘The digital labour of artificial intelligence in Latin America: a comparison of Argentina, Brazil, and Venezuela’ and is part of a special issue of Globalizations on ‘The Political Economy of AI in Latin America’. This article lifts the veil on the precarious and low-paid data workers who, from Latin America, engage in AI preparation, verification, and impersonation, often for foreign technology producers. Focusing on three countries (Argentina, Brazil, and Venezuela), we use original mixed-method data to compare and contrast these cases, in order to reveal common patterns and expose the specificities that distinguish the region.
The analysis unveils the central place of Latin America in the provision of data work. To bring costs down, AI production thrives on countries’ economic hardship and inequalities. In Venezuela and to a lesser extent Argentina, acute economic crisis fuels competition and favours the emergence of ‘elite’ (young and STEM-educated) data workers, while in more stable but very unequal Brazil, this activity is left to relatively underprivileged segments of the workforce. AI data work also redefines these inequalities insofar as, in all three countries, it blends with the historically prevalent informal economy, with workers frequently shifting between the two. There are spillovers into other sectors, with variations depending on country and context, which tie informality to inequality.
Our study has policy implications at global and local levels. Globally, it calls for more attention to the conditions of AI production, especially workers’ rights and pay. Locally, it advocates solutions for the recognition of skills and experience of data workers, in ways that may support their further professional development and trajectories, possibly also facilitating some initial forms of worker organization.
The version of record is here, while an open-access preprint is available here.
I am thrilled to announce that an important article has just seen the light. Entitled ‘Where does AI come from? A global case study across Europe, Africa, and Latin America’, it is part of a special issue of New Political Economy on ‘Power relations in the digital economy‘. It is the result of joint work that I have done with members of the DiPLab team (A.A. Casilli, M. Cornet, C. Le Ludec and J. Torres Cierpe) on the organisational and geographical forces underpinning the supply chains of artificial intelligence (AI). Where and how do AI producers recruit workers to perform data annotation and other essential, albeit lower-level, supporting tasks to feed machine-learning algorithms? The literature reports a variety of organisational forms, but the reasons for these differences, and the ways data work dovetails with local economies, have long remained under-researched. This article fills this gap, clarifying the structure and organisation of these supply chains and highlighting their impacts on labour conditions and remunerations.
Framing AI as an instance of the outsourcing and offshoring trends already observed in other globalised industries, we conduct a global case study of the digitally enabled organisation of data work in France, Madagascar, and Venezuela. We show that the AI supply chains procure data work via a mix of arm’s length contracts through marketplace-like platforms, and of embedded firm-like structures that offer greater stability but less flexibility, with multiple intermediate arrangements that give different roles to platforms. Each solution suits specific types and purposes of data work in AI preparation, verification, and impersonation. While all forms reproduce well-known patterns of exclusion that harm externalised workers especially in the Global South, disadvantage manifests unevenly depending on the structure of the supply chains, with repercussions on remunerations, job security, and working conditions.
Marketplace- and firm-like platforms in the supply chains for data work in Europe, Africa, and Latin America. Dark grey countries: main case studies; light grey countries: comparison cases. Organisational modes range from almost totally marketplace-oriented (darker rectangle, Venezuela) to almost entirely firm-oriented (lighter rectangle, Madagascar). AI preparation (darker circle) is ubiquitous, but AI verification (darker triangle) and AI impersonation (darker star) tend to happen in ‘deep labour’ and firm-like organisations where embeddedness is higher.
We conclude that responses based only on worker reclassification, as attempted in some countries especially in the Global North, are insufficient. Rather, we advocate a policy mix at both national and supra-national levels, also including appropriate regulation of technology and innovation, and promotion of suitable strategies for economic development.
The version of record is here, while an open-access preprint is available here.
My great regret is that I always have very little time to write posts, and the emptiness of this blog does not reflect the many great and stimulating scientific events and opportunities that I enjoyed throughout 2024. As a last-minute remedy (with a promise to do better next year…hopefully), I will try to summarise the landmarks here, month by month.
In January, I launched the Voices from Online Labour (VOLI) project, which I coordinate with a grant of about €570,000 from the French National Agency for Research. This four-year initiative brings together expertise from sociology, linguistics, and AI technology across multiple institutions, including four French research centres, a speech technology company, and three international partners.
In February, with the DiPLab team, I spent two exciting days at the European Parliament in Brussels, engaging in profound discussions with and about platform workers as part of the 4th edition of the Transnational Forum on Alternatives to Uberization. I chaired a panel with data workers and content moderators from Europe and beyond, aiming to raise awareness about the difficult working conditions of those who fuel artificial intelligence and ensure safe participation in social media.
In March, three publications saw the light. One is a solo-authored chapter, in French, on ‘Algorithmes, inégalités, et les humains dans la boucle‘ (Algorithms, inequalities, and the humans in the loop) in a collective book entitled ‘Ce qui échappe à l’intelligence artificielle‘ (What AI cannot do). The other two are journal articles that may seem a little less close to my ‘usual’ topics, but they are important because they constitute experiments in research-informed teaching. One is a study of the 15-minute city concept applied to Paris, carried out in collaboration with a colleague, S. Berkemer of Ecole Polytechnique, and a team of brilliant ENSAE students. The other is an analysis of the penetration of AI into a specific field of research, neuroscience, showing that for all its alleged potential, it created a confined subfield but did not entirely disrupt the discipline. The study, part of a larger project on AI in science, was part of the PhD research of S. Fontaine (who has now got his degree!), and is also co-authored with his co-supervisors F. Gargiulo and M. Dubois.
In April, I co-published the final report from the study realized for the European Parliament, ‘Who Trains the Data for European Artificial Intelligence?‘. Despite massive offshoring of data tasks to lower-income countries in the Global South, we find that there are still data workers in Europe. They often live in countries where standard labour markets are weaker, like Portugal, Italy and Spain; in more dynamic countries like Germany and France, they are often immigrants. They do data work because they lack sufficiently good alternative opportunities, although most of them are young and highly educated.
I then attended two very relevant events. On 30 April-1 May, I was at a Workshop on Driving Adoption of Worker-Centric Data Enrichment Guidelines and Principles, organised by Partnership on AI (PAI) and Fairwork in New York City to bring together representatives of AI companies, data vendors and platforms, and researchers. The goal was to discuss options to improve working conditions on the side of employers and intermediaries. On 28 May, I was in Cairo, Egypt, to attend the very first conference of the Middle East and Africa chapter of INDL (International Network on Digital Labour), the research network I co-founded. It was a fantastic opportunity to start opening the network to countries that were less present before, and whose voices we would like to hear more.
August is a quieter month (but I greatly enjoyed a session at the Paralympics in Paris!), so I’ll jump to September. Lots of activities: a trip to Cambridge, UK, and a workshop on disinformation at the Minderoo Centre for Technology and Democracy; a workshop on Invisible Labour at Copenhagen Business School in Denmark; and a one-day conference on gender in the platform economy in Paris. Another publication came out: a journal article, in Spanish, on Argentinean platform data workers.
At the end of October, and until mid-November, I travelled to Chile for the seventh conference of the International Network on Digital Labour (INDL-7), which I co-organised. It was an immensely rewarding experience, and I took the opportunity to strengthen my linkages and collaborations with colleagues there. It was a very intense, and super-exciting, time: after INDL-7 (28-30 October), I spent a week in Buenos Aires, Argentina, where I co-presented work in progress at the XV Jornadas de Estudios Sociales de la Economía, UNSAM. I then returned to Chile, where I gave a keynote at the XI COES International Conference in Viña del Mar on 8 November, and another at the ENEFA conference in Valdivia on 14 November. I also gave a talk in the ChiSocNet seminar series in Santiago on 11 November.
Within the Horizon-Europe project AI4TRUST, we published a first report presenting the state of the art in the socio-contextual basis for disinformation, relying on a broad review of extant literature, of which the below is a synthesis.
What is disinformation?
Recent literature distinguishes three forms:
‘misinformation’ (inaccurate information unwittingly produced or reproduced)
‘disinformation’ (erroneous, fabricated, or misleading information that is intentionally shared and may cause individual or social harm)
‘malinformation’ (accurate information deliberately misused with malicious or harmful intent).
Two consequences derive from this insight. First, the expression ‘fake news’ is unhelpful: problematic contents are not just news, and are not always false. Second, research efforts limited to identifying incorrect information alone, without capturing intent, may miss some of the key social processes surrounding the emergence and spread of problematic contents.
How does mis/dis/malinformation spread?
Recent literature often describes the characteristics of the process of diffusion of mis/dis/malinformation in terms of ‘cascades’, that is, the iterative propagation of content from one actor to others in a tree-like fashion, sometimes with consideration of temporality and geographical reach. There is evidence that network structures may facilitate or hinder propagation, regardless of the characteristics of individuals: therefore, relationships and interactions constitute an essential object of study to understand how problematic contents spread. By contrast, the actual offline impact of online disinformation (for example, the extent to which online campaigns may have influenced electoral outcomes) is disputed. Likewise, evidence on the capacity of mis/dis/malinformation to spread across countries is mixed. A promising perspective to move forward relies on hybrid approaches mixing network and content analysis (‘socio-semantic networks’).
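The tree-like cascades described above can be made concrete with a toy simulation: content spreads breadth-first over a follower network, and each user re-shares it upon first exposure. The sketch below is purely illustrative (the network and user names are made up, not empirical data), and records the cascade’s size and depth:

```python
from collections import deque

# Toy follower network: who sees a re-share from whom (illustrative only)
followers = {
    "seed": ["a", "b"],
    "a": ["c", "d"],
    "b": ["d", "e"],
    "d": ["f"],
}

def cascade(followers, source):
    """Breadth-first propagation: each reached user re-shares once,
    exposing all of their followers. Returns depth of first exposure."""
    depth = {source: 0}
    queue = deque([source])
    while queue:
        user = queue.popleft()
        for follower in followers.get(user, []):
            if follower not in depth:  # count first exposure only
                depth[follower] = depth[user] + 1
                queue.append(follower)
    return depth

reach = cascade(followers, "seed")
print(len(reach) - 1)        # users reached beyond the source: 6
print(max(reach.values()))   # cascade depth: 3
```

Recording each user’s depth of first exposure is what yields the tree-like picture: size (how many were reached) and depth (how far from the source) are the two measures most often reported in cascade studies.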
What incentivizes mis/dis/malinformation?
Mis/dis/malinformation campaigns are not always driven solely by political tensions and may also be the product of economic interest. There may be incentives to produce or share problematic information, insofar as the business model of the internet confers value upon contents that attract attention, regardless of their veracity or quality. A growing shadow market of paid ‘likes’, ‘shares’ and ‘follows’ inflates the rankings and reputation scores of web pages and social media profiles, and may ultimately mislead search engines. Thus, online metrics derived from users’ ratings should be interpreted with caution. Research should also be mindful that high-profile disinformation campaigns are only the tip of the iceberg: lower-stakes cases are far more frequent and harder to detect.
Who spreads mis/dis/malinformation?
Spreaders of mis/dis/malinformation may be bots or human users, the former being increasingly controlled by social media companies. Not all humans are equally likely to play this role, though, and the literature highlights ‘super-spreaders’, particularly successful at sharing popular albeit implausible contents, and clusters of spreaders – both detectable in data with social network analysis techniques.
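As a minimal illustration of how super-spreaders can be flagged in share data, the sketch below counts problematic shares per user and flags those whose volume sits well above the mean. The share log and the threshold are illustrative assumptions, not findings from the report:

```python
from collections import Counter

# Hypothetical log of (user, item) shares flagged as problematic content
shares = [
    ("u1", "story_a"), ("u1", "story_b"), ("u1", "story_c"), ("u1", "story_d"),
    ("u2", "story_a"),
    ("u3", "story_b"), ("u3", "story_c"),
    ("u4", "story_a"),
]

# Count how many flagged items each user shared
share_counts = Counter(user for user, _ in shares)
mean = sum(share_counts.values()) / len(share_counts)

# Flag users far above the mean (the 2x cutoff is an arbitrary assumption)
super_spreaders = [u for u, n in share_counts.items() if n >= 2 * mean]
print(super_spreaders)  # ['u1']
```

In actual studies the cutoff is chosen empirically (for example, the top percentile of sharers), and network position, not just volume, enters the definition; this sketch only shows the counting step.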
How is mis/dis/malinformation adopted?
Adoption of mis/dis/malinformation should not be taken for granted and depends on cognitive and psychological factors at individual and group levels, as well as on network structures. Actors use ‘appropriateness judgments’ to give meaning to information and elaborate it interactively with their networks. Judgments depend on people’s identification with reference groups, recognition of authorities, and alignment with priority norms. Adoption can thus be hypothesised to increase when judgments are similar and signalled as such in communication networks. Future research could target such signals to help users in their contextualization and interpretation of the phenomena described.
Multiple examples of research in social network analysis can help develop a model of the emergence and development of appropriateness judgements. Homophily and social influence theories help conceptualise the role of inter-individual similarities, the dynamics of diffusion in networks sheds light on temporal patterns, and analyses of heterogeneous networks illuminate our understanding of interactions. Overall, social network analysis combined with content analysis can help research identify indicators of coordinated malicious behaviour, either structural or dynamic.
I had the privilege and pleasure to visit Madagascar in the last two weeks. I had an invitation from Institut Français where I participated in a very interesting panel on “How can Madagascar help us rethink artificial intelligence more ethically?”, with Antonio A. Casilli, Jeremy Ranjatoelina et Manovosoa Rakotovao. I also conducted exploratory fieldwork by visiting a sample of technology companies, as well as journalists and associations interested in the topic.
A former French colony, Madagascar participates in the global trend toward outsourcing/offshoring which has shaped the world economy in the past two decades. The country harnesses its cultural and linguistic heritage (about one quarter of the population still speak French, often as a second language) to develop services for clients mostly based in France. In particular, it is a net exporter of computing services – still a small-sized sector, but with growing economic value.
Last year, a team of colleagues had already conducted extensive research with Madagascan companies that provide micro-work and data annotation services for French producers of artificial intelligence (and of other digital services). Some interesting results of their research are available here. This time, we are trying to take a broader look at the sector and include a wider variety of computing services, also trying to trace higher-value-added activities (like computer programming, website design, and even AI development).
It is too early to present any results, but the big question so far is the sustainability of this model and the extent to which it can push Madagascar higher up in the global technology value chain. Annotation and other lower-level services create much-needed jobs in a sluggish economy with widespread poverty and a lot of informality; however, these jobs attract low recognition and comparatively low pay, and have failed so far to offer bridges toward more stable or rewarding career paths. More qualified computing jobs are better paid and protected, but turnover is high and (national and international) competition is tough.
At policy level, more attention should be brought to the quality of these jobs and their longer-term stability, while client tech companies in France and other Global North countries should take more responsibility over working conditions throughout their international supply chains.
Most of my current research aims to unpack artificial intelligence (AI) from the viewpoint of its commercial production, looking in particular at the human resources needed to prepare the data it needs – whence my studies on the data work and annotation market. However, for once, I am focusing on AI as a set of scientific theories and tools, regardless of their market positioning; indeed, I have joined a team of science-of-science specialists to study the disciplinary origins and subsequent spread of AI over time.
In a newly published, open-access article, we unveil the disciplinary composition of AI, and the links between its various sub-fields. We question a common distinction between ‘native’ and ‘applicative’ disciplines, whereby only the former (typically confined to statistics, mathematics, and computer science) produce foundational algorithms and theorems for AI. In fact, we find that the origins of the field are rather multi-disciplinary and benefit, among others, from insights from cognitive science, psychology, and philosophy. These intersecting contributions were most evident in the historical practices commonly known as ‘symbolic systems’. Later, different scientific fields have become, in turn, the central originating domains and applicators of AI knowledge – for example operations research, which was for a long time one of the core actors of AI applications related to expert systems.
While the notion of statistics, mathematics and computer science as native disciplines has become more relevant in recent times, the spread of AI throughout the scientific ecosystem is uneven. In particular, only a small number of AI tools, such as dimensionality reduction techniques, are widely adopted (for example, variants of these techniques have been in use in sociology for decades). Yet although the transfer of AI is largely ascribable to multi-disciplinary interactions, very few such interactions exist. We observe very limited collaboration between researchers in disciplines that create AI and researchers in disciplines that only (or mainly) apply it. A small core of multi-disciplinary champions who interact with both sides, together with a few multi-disciplinary journals, sustains the whole system.
Inter- and multi-disciplinary interactions are essential for AI to thrive and to adequately support scientific research in all fields, but disciplinary boundaries are notoriously hard to break. Strategies to better reward inter-disciplinary training, publications, and careers, are thus essential. Of course the potential for AI to significantly advance knowledge is still (largely) to be proven, and there have been disappointing experiences with, for example, the comparatively limited effectiveness of these tools in research on Covid-19. In all cases, the status quo is not ideal, and important steps forward are now needed.
We establish these results by analyzing a large corpus of scientific papers published between 1970 and 2017, extracted from Microsoft Academic Graph through the AI keywords used by the authors, and explored with different relational structures among the scientometric data (keyword co-occurrence network, authors’ collaboration network).
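A keyword co-occurrence network of the kind mentioned above can be sketched in a few lines: nodes are keywords, and edge weights count the papers in which two keywords appear together. The papers below are made-up examples, not records from the actual Microsoft Academic Graph corpus:

```python
from itertools import combinations
from collections import Counter

# Toy corpus: the set of AI-related keywords attached to each paper
papers = [
    {"neural network", "computer vision"},
    {"neural network", "expert system"},
    {"expert system", "operations research"},
    {"neural network", "computer vision", "deep learning"},
]

# Edge weight = number of papers in which a pair of keywords co-occurs;
# sorting each pair gives a canonical (undirected) edge key
cooccurrence = Counter()
for keywords in papers:
    for pair in combinations(sorted(keywords), 2):
        cooccurrence[pair] += 1

print(cooccurrence[("computer vision", "neural network")])  # 2
```

The authors’ collaboration network is built the same way, with co-authors in place of keywords; community detection on such weighted networks is what reveals the meso-scale structure the article maps.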
Full citation: Floriana Gargiulo, Sylvain Fontaine, Michel Dubois, Paola Tubaro. A meso-scale cartography of the AI ecosystem. Quantitative Science Studies, 2023; doi: https://doi.org/10.1162/qss_a_00267
AI is not just a Silicon Valley dream. It relies, among other things, on inputs from human workers who generate and annotate data for machine learning. They record their voice to augment speech datasets, transcribe receipts to provide examples to OCR software, tag objects in photographs to train computer vision algorithms, and so on. They also check algorithmic outputs, for example by noting whether the results of a search engine meet users’ queries. Occasionally, they take the place of failing automation, for example when content moderation software is not subtle enough to distinguish whether some image or video is appropriate. AI producers outsource these so-called “micro-tasks” via international digital labor platforms, which often recruit workers in Global-South countries, where labor costs are lower. Pay is by piecework, without any long-term commitment and without any social-security scheme or labor protection.
In a just-published report co-authored with Matheus Viana Braz and Antonio A. Casilli, as part of the research program DiPlab, we lifted the curtain on micro-workers in Brazil, a country with a huge, growing, and yet largely unexplored reservoir of AI workers.
We found among other things that:
Three out of five Brazilian data workers are women, while in most other previously-surveyed countries, women are a minority (one in three or less in ILO data).
9 reais (1.73 euros) per hour is the average amount earned on platforms.
There are at least 54 micro-working platforms operating in Brazil.
One third of Brazilian micro-workers have no other source of income, and depend on microworking platforms for subsistence.
Two out of five Brazilian data workers are (apart from this activity) unemployed, without professional activity, or in informality. In Brazil, platform microwork arises out of widespread unemployment and informalization of work.
Three out of five of data workers have completed undergraduate education, although they mostly do repetitive and unchallenging online data tasks, suggesting some form of skill mismatch.
The worst microtasks involve moderation of violent and pornographic contents on social media, as well as data-training tasks that workers may find uncomfortable or weird, such as taking pictures of dog poop in domestic environments to produce training data for robot vacuums.
Workers’ main grievances are linked to uncertainty, lack of transparency, job insecurity, fatigue and lack of social interaction on platforms.