Earlier this year, Dr Paul Bouchaud was digging for leads online – taking “a random digital walk”, as they put it. Bouchaud is a researcher at Civitates’ grantee partner AI Forensics, an organisation that describes itself as “digital detectives who shine a light on hidden algorithmic injustices, and [who] work to bring accountability and transparency to the tech industry.”
In 2024, a similar exploratory exercise led Bouchaud to uncover how Russian propaganda was flooding Europe’s social media networks. This time Bouchaud’s online foray unearthed a different – though equally troubling – way in which major online platforms allow people’s lives to be invaded in an attempt to shape their views and choices.
In the EU, the Digital Services Act (DSA) gives the public the right to view details of advertisers who target them online. Bouchaud decided to exercise this right by exporting their personal data from X (formerly Twitter).
Alarm bells soon rang.
“I found that I was targeted by an ad which inferred I was far right. I thought, ‘How’s this possible?’” Bouchaud, who had been researching political extremism as part of their work, discovered that X allowed advertisers to target users based on sensitive personal data: “This is a clear violation of the DSA and the GDPR [the EU’s General Data Protection Regulation]. You cannot target advertising based on sensitive data like people’s health conditions, sexual orientation or political opinions.”
Unsettling findings
Deeper research revealed the scale of the problem: there were countless disturbing examples among the thousands of adverts that AI Forensics found had illegally targeted users.
For example, the Saudi Arabian Public Investment Fund, controlled by the country’s de facto ruler Mohammed bin Salman, excluded users based on ethnic origin, faith and sexual orientation. Fossil fuel giant TotalEnergies deployed a vast array of keywords to omit users who had engaged with environmentalists and ecological organisations. People adjudged to be interested in ‘Nazismus’ [Nazism] and the word ‘#lesbisch’ [lesbian] were among those whom the multinational Dell Technologies chose to exclude from seeing its ads. And global fast-food chain McDonald’s ran adverts excluding X users who had used keywords related to antidepressants and suicide. The list of unsettling examples went on and on.
AI Forensics published their findings and launched a tool enabling the public to discover whether they had been targeted in this way.
Then, along with fellow Civitates’ grantee partners European Digital Rights (EDRi), Panoptykon Foundation, Stichting Bits of Freedom and VoxPublic, as well as four other civil society organisations, AI Forensics lodged a formal complaint with the European Commission and the relevant national Digital Services Coordinators responsible for enforcing the DSA. They called on the regulators to investigate X for breaching the DSA. If found to have done so, the company could face a significant fine of up to 6% of its global turnover and an order to take measures to address the breach by a specific deadline.
Their accompanying statement alleged that the “discriminatory or exploitative profiling” which AI Forensics had discovered “opens the door for a myriad of abuses at scale”.
Fuelling conflict
“The complaint strikes at the heart of the problem of social media today,” says Erika Campelo, national delegate at VoxPublic, the French civil society organisation supporting citizens’ initiatives against discrimination, corruption and injustice.
“We’ve seen the role of social media networks change: witnessing the rise of racism, homophobia, online hate and general discrimination on platforms. AI Forensics’ research shows very personal data being used for deep profiling, which can foster more discrimination and is very dangerous.”
In France, the complaint will be one of the first to be brought before the country’s Digital Services Coordinator (Arcom). Online platforms are a major frontier in the battle for human rights and democracy: this case has implications for both, says Campelo’s colleague Thomas Renaux.
“X and other social media platforms’ economic models rely on clicks – and the more violent, controversial and conflictual content gets more engagement and makes more money. It’s a problem for democracy, as it promotes regressive political forces pushing for more discriminatory policies,” he says.
VoxPublic is working closely with digital specialists such as AI Forensics to try to force social media companies to tackle the abuses their platforms fuel.
The case against X for exploiting its users’ personal data is a striking example of how different civil society organisations – from those specialising in human rights and democracy to those with tech expertise – working at national and EU level are building alliances to hold the powerful to account.