“The big question – regarding misinformation – is whether the actions taken by social media companies are working.” – Lisa-Maria Neudert, Co-Principal Investigator at the Computational Propaganda project, Oxford Internet Institute.
The phenomenon of junk news and its dissemination over social media platforms has transformed political debates in Europe. Civitates supports the Oxford Internet Institute, which is currently producing a series of research studies related to misinformation about the Covid-19 pandemic.
What is the aim of this series?
We have been researching misinformation for some time now, and the moment Covid-19 entered our lives, we saw a huge amount of misinformation popping up. Communities and actors that often push junk news and conspiracy theories online became more active. The anti-vax communities, for instance, but also state-sponsored media outlets from China, Iran, Russia and Turkey are pushing content over the internet. In addition, a lot of prominent conspiracy theorists on YouTube have become more active. At the Oxford Internet Institute, we are currently conducting a couple of studies about what misinformation is out there, how prominent it is and how many users engage with it. The studies serve as a proxy for what kind of impact misinformation has and how it resonates.
How big is this problem exactly?
There is a publication on the topic from our partner, the Reuters Institute for the Study of Journalism, to which one of our researchers contributed as well. The publication states that more than half (59%) of the Twitter posts rated false about the coronavirus pandemic remain on Twitter without any warning label. Similarly, 27% of such posts remained on YouTube and 24% on Facebook.
Do you think there is more that Twitter, Facebook and YouTube could do to prevent misinformation from spreading on their channels?
I think the question is really how effective what they are doing right now actually is: many of the measures that Facebook, Twitter and Google are taking are the right measures. We have seen more courage from these companies, in terms of what they are taking down and flagging, than ever before. Usually they take a very careful approach to content moderation, but when it comes to Covid-19, we have seen these companies react more quickly and more forcefully than on any other topic in the past. But the big question remains whether what they are doing is working. For instance, when it comes to political advertising, or issue advertising in this case, Facebook is blocking all sorts of ads that have to do with the sale of masks for commercial purposes, but many of these ads are still showing up. When asked for comment, Facebook said that they are working with automated systems to catch those ads, but that the systems are not perfect yet and some content falls through the cracks. So, I think it’s a question of having the right measures and of whether these are being enforced effectively.
What are some of the findings from your research?
In one of the studies we carried out, we published a memo that lifted the lid on the actions of English-language state-backed media in Russia, China, Iran and Turkey during the coronavirus pandemic. While these are English-language media, they are really targeting a global audience. For example, the Chinese pages that we checked are not domestic in any way, but meant to spread misinformation internationally. They have audiences numbering in the hundreds of millions and are performing really well on social media, pushing positive messages about their governments’ handling of the crisis while being highly critical of countries in the West. These social media posts can sometimes be far more effective at engaging online users than those of more traditional news outlets.
The question is: can we do something about it? There is everything from fact-checking to taking down misinformation. The problem is how these stories are presented. They may be highly biased and highly suggestive, but they are, for instance, presented as opinions, and this is protected under freedom of speech.
Why do conspiracies about the virus spread so easily?
Currently there are still a lot of open questions about the virus: how it spreads and where it came from. There is a pressing need for answers. Reputable and credible experts are holding back, saying that we don’t have all the facts yet and that answering these questions will take time and effort. Then there are publications from conspiracy theorists and junk news outlets, which try to provide answers to these questions that are not evidence-based. As there is such great public urgency for answers, these conspiracy theories are doing really well right now.
How do you see the future of information about Covid-19 in the online space?
I think that our research has shown how widespread distrust in the mainstream media is. It’s not just a couple of people who are not very media literate who believe these conspiracy theories. The pandemic is a very scary public health emergency, and everyone has seen its impact on their lives. It is clear that people are struggling to navigate the information about it. There is much work to be done, not just in terms of combating misinformation, but also in terms of enhancing media trust and finding ways to share expert information and complicated science in an engaging way.
The Computational Propaganda Project (COMPROP), based at the Oxford Internet Institute, University of Oxford, involves an interdisciplinary team of social and information scientists researching how political actors manipulate public opinion over social networks. This work includes analysing how the interaction of algorithms, automation, politics, and social media amplifies or represses political content, misinformation, hate speech, and junk news.