EC’s Guidance on Strengthening the Code of Practice on Disinformation: A Mis-take with Mis-information?

By Iva Nenadić, Research Fellow at the Centre for Media Pluralism and Media Freedom

At the European Union policy level, disinformation has been widely discussed and is considered one of the prime problems facing European democracies. Currently, the key instrument of the EU approach to tackling this problem is the Code of Practice on Disinformation – presented as self-regulation by the leading online platforms, advertisers, and the advertising industry, developed in a process initiated and guided by the European Commission.

The Code was first brought to light in autumn 2018 as a unique mechanism with the potential to expand globally and to include other relevant actors. However, after the initial period of its implementation, it became clear that the form, scope, and implementation regime of the Code needed to be revised to allow for appropriate monitoring of its effectiveness. The reviews of the initial Code, conducted by the European Regulators Group for Audiovisual Media Services (ERGA, 2020) and by the Commission itself (Staff Working Document SWD(2020) 180), highlighted a set of significant deficiencies: a lack of clear and common definitions of the key concepts; difficulties in understanding the activities implemented and their potential impact, especially as platforms did not provide functional data access; and the absence of key performance indicators adequate to gauge achievements.

Now, the EC is calling on the signatories to “reinforce” the Code by strengthening it along several dimensions, and has announced its evolution towards a co-regulatory instrument in line with the proposed Digital Services Act.

With this post, I would like to share some initial thoughts on one of the suggestions featured in the 2021 Commission Guidance on Strengthening the Code of Practice on Disinformation: specifically, expanding the scope of commitments to include actions to reduce the risks of misinformation, in addition to those posed by disinformation.

The Guidance calls for “stronger and more specific commitments” in all areas of the Code and suggests that the signatories should set up a mechanism for its regular adaptation, to keep the Code a “living instrument” able to respond to new and emerging risks. Following the experience with misleading and harmful information around the COVID-19 pandemic, the Commission suggests that, in addition to disinformation in the narrow sense (false or misleading content spread with an intention to deceive or to secure economic or political gain, and which may cause public harm), the Code should also tackle misinformation (false or misleading information spread without malicious intent, though its effects can still be harmful) “when there is a significant public harm dimension”.

While misinformation can indeed be problematic, especially when it goes viral or circulates during high-intensity events, it is not clear how such an expansion of the Code’s scope could be developed without posing threats to freedom of expression and information pluralism. It is already difficult to reach agreement (even just within the EU) over what kinds of content and behaviour constitute disinformation, as interpretations of what content is problematic, harmful, unacceptable, or even illegal depend largely on the specific political and cultural contexts and legal traditions of different countries. To date, the experience with the Code of Practice has also shown that the signatories themselves (here referring mainly to the major platforms) have been neither able nor particularly interested in reaching a common understanding of disinformation – for different reasons, including the variety of their services and business models.

Misinformation can often overlap with disinformation in terms of content, and the two can build on each other. The key distinction lies in the existence of intent, but it is hard to imagine how platforms and other Code signatories could determine, in all possible cases, whether or not there is an intention to deceive or to cause any other harm in the spread of content that is false or misleading. Furthermore, it is not clear who should establish, and how, whether “there is a significant public harm dimension”.

Individuals may share misleading or false content believing it to be true, not realising that they themselves have become victims or agents of disinformation campaigns. And sharing is not the only action at their disposal to (un)intentionally increase the visibility and reach of mis/disinformation: they can also do so by liking, commenting, or simply spending time on such content, since engagement signals typically feed the algorithms that rank and recommend it.

When suggesting that, in the revised Code, signatories should commit to “take proportionate actions to mitigate the risks posed by misinformation”, the Commission also asks for users to be “empowered to contrast this information with authoritative sources and be informed where the information they are seeing is verifiably false”. While during the pandemic it was somewhat logical to establish the World Health Organisation and national health authorities as “authoritative sources” of information around COVID-19, the question arises: who should decide, and based on what criteria, which sources qualify (and this across all countries, including at their local levels) when public interest is not concentrated on a single topic but dispersed across countless ones?

Another problem with asking the signatories of the Code (again referring primarily to the platforms) “to have in place appropriate policies and take proportionate actions to mitigate the risks posed by misinformation” is that this may, even if only in a few cases, result in platforms deciding the boundaries of media freedom – platforms regulating media and journalistic mistakes instead of media and journalists setting the record straight with their audiences. Even if “it is not an aim of the strengthened Code to evaluate the veracity of editorial content”, this may still happen if such measures are implemented under the pillars of the Code, especially considering how broad the concept of “editorial content” is and our inability to define “media” in the digital age. Journalists work under enormous time pressure and, being human, can make mistakes and get things wrong. As their work easily (and intentionally) gets shared on online platforms, and as legacy media brands are still more trusted than other, emerging sources (Standard Eurobarometer 94, Media use), they can misinform large audiences. But is it a good idea to have platforms (even indirectly) acting on media errors, poor journalism, factual mistakes in journalistic pieces, oversimplified stories, and misleading or clickbait headlines (all of which may be characterised as misinformation)?

Platforms have already censored journalistic content for violating their standards: for example, removing the iconic, Pulitzer-winning photograph of a naked nine-year-old girl fleeing napalm bombs during the Vietnam War as nudity and child pornography; and blocking various anti-fascist Facebook pages with “editorial content” in the ex-Yugoslav region for including swastikas or the Nazi salute, without understanding (due to the local languages) that the text accompanying such photos actually warned of the dangers of raging nationalism in the region. These are just a few examples of platforms getting things wrong.

As Tsfati et al. (2020) show in their review and synthesis of the literature on the role of mainstream news media in the dissemination of inaccurate and misleading information, news media can in fact play a significant role in spreading misinformation. Beyond unintentional mistakes, clickbait, and poor journalism, circumstantial evidence presented by Tsfati et al. (2020) suggests that more people learn about fake news stories from mainstream media than from social media. This is in line with Claire Wardle’s Trumpet of Amplification, which traces the journey disinformation often takes – from the anonymous web and closed groups to the professional media, which give it the most oxygen in terms of reach and impact. Deliberate falsehoods and conspiracies can sneak into the mainstream media because of a lack of verification; but even when the media cover such stories in order to debunk them, these studies suggest, parts of the audience can still retain the wrong information due to selective exposure. In any case, as journalists use social media both as sources of information and as platforms for distributing or marketing their work, acting on misinformation is particularly sensitive, and it should not be done by platforms alone. Yet media organisations and associations of journalists are not part of the current Code of Practice (as signatories or in any other active form), nor, at the moment, is such a role for them envisaged in the proposal that seeks to strengthen this instrument in tackling misleading and harmful content online.