Countering “fake news”

Parliaments around the world are looking into the problem of “fake news”, including the UK Parliament1 (Green Paper: Internet Safety Strategy and Fake News Inquiry), the US Senate2 and the Singaporean Parliament3 (Green Paper: Deliberate Online Falsehoods). In early January 2018, the French President Macron said he would present a new law to fight the spread of “fake news”, which he said threatened democracies4. The Canadian Communications Security Establishment (CSE), in its report Cyber Threats to Canada´s Democratic Process (described by the Canadian Minister of Democratic Institutions as the first threat assessment of this kind in the world to be shared with the public5), states that:

The Internet age has ushered in new threats to the democratic process. Most social discourse related to the democratic process now occurs online. This includes email, tweets, websites, databases, computer networks, and many other information technologies used by voters, electoral bodies, political parties and politicians, and the media. Canada is among a large and growing group of states that must defend against adversaries using cyber capabilities to covertly influence all three aspects of the democratic process.6

The European Union´s (EU) first High-Level Expert Group on fake news and online disinformation held its inaugural meeting in January 2018, with a mandate to contribute to the development of an EU-level strategy on how to tackle the spreading of “fake news” and disinformation7. The EU also held a public consultation on fake news and online disinformation. This howtoregulate article focuses on how the regulator could approach the “fake news” problem using the methodology outlined in the Handbook: How To Regulate?.

From these various assessments of supra-national institutions, parliaments and government agencies, we understand that the regulatory task is mainly to regulate online communications of “fake news” because they harm democratic institutions. However, we also draw from the public debate that there is a secondary goal, the protection of individuals targeted by “fake news”. We will see from some examples in this article how individuals can become victims of “fake news” as well. More concretely, regulation could pursue the following objectives:

  • To delimit clearly what information is “fake news” or otherwise illicit, so as to create a relatively high level of legal certainty;

  • To detect quickly “fake news” or otherwise illicit information;

  • To provide for fast and efficient counter-measures;

  • To sanction perpetrators, both to deter them and to create a climate of state responsiveness;

  • To create positive incentives for compliance (beyond sanctions);

  • To create a good basis for compliance by informing operators and the general public about the developed measures.

It goes without saying that at least the last objective can best be pursued by measures other than the regulation itself. The same is true for some of the other objectives: positive incentives such as labelling, for example, can often be realised more quickly outside of regulation, regulation being rather slow. Hence the regulation needs to be embedded in further measures. Some of them are listed in Part D of this howtoregulate article.

A. Questions of substance for regulating “fake news”

I. What is “fake news”?

1. The regulator´s first step is to understand what is meant by “fake news”. The plain language of the term suggests news that is not true and information that is false. In researching the regulatory response to “fake news”, we find that the term has come to be used broadly to include news that is not true, conspiracy theories, fake studies, unjustified opinions and even news that is actually true but disagreeable to the reader. The Council of Europe (CoE) Report Information Disorder: Toward an interdisciplinary framework for research and policy making specifically refrains from using the term “fake news”. The Report cites two reasons: “fake news” (1) is inadequate to describe the complex phenomena of information pollution; and (2) has come to be used by politicians around the world to describe news organisations whose coverage they find disagreeable. The Report introduces the concept of information disorder to describe the following three problems:

  • Mis-information: when false information is shared, but no harm is meant. Operators of mis-information might be consumers who read information on their Facebook or Twitter feed and share it among their online community; their intent is to share information, not necessarily to check it for accuracy.

  • Dis-information: when false information is knowingly shared to cause harm. These operators may have a profit motive (such as advertisers) or a political one (politicians seeking power or a nation state seeking to disrupt the affairs of another nation state). The CSE report Cyber Threats to Canada´s Democratic Process provides a useful outline (pages 12 and 13 in particular) of the kinds of cyber threats to democratic processes and their motivations. An interesting example was the evolution of the conspiracy theory known as “Pizzagate”, involving 2016 presidential candidate Hillary Clinton: started by fringe online groups with limited followers, it was amplified by a variety of operators within 96 hours of the first Facebook post and carried into mainstream news outlets with thousands of followers8. “Fake news” of this nature could be countered by fact checkers, and the fact-checked information could be amplified, for example through compulsory public service announcements on Facebook.

  • Mal-information: when genuine information is shared to cause harm, often by moving information designed to stay private into the public sphere. Operators in this category usually have political motives. An example of mal-information was the hacking, and release to Wikileaks, of emails of key political staff working on the Democratic Party campaign during the 2016 US presidential election9. Often the best measure is vigilance: good internet and email security protocols can help to reduce the risk of being easily hacked.

This approach to “fake news” or information disorder focuses on the intent of the source of the information, which is important to understand in order to regulate the behaviours of such actors. This howtoregulate article will continue to use the term “fake news”, noting its common use, even though it lacks a precise definition.

2. News and information can take the form of facts, or of opinions based on facts and events. Facts and events can either be true or false: an event either happened or not and had certain characteristics that can be described. Without going into a deep philosophical debate about truth, the truth of an event or fact can generally be determined by direct observation, through a credible witness or through research from credible sources. Opinions, however, are very difficult to judge. The true/false dichotomy cannot be applied to opinions. Opinions can be characterised as justified or not based on the conclusions drawn, but their truth cannot be proved. For example, the opinion that a politician is corrupt might rest on a value judgement based on other reports, whereas a politician is corrupt as a fact only if they have been convicted in court for corruption or other proof of corrupt activity can be shown10. From a regulatory perspective it is probably straightforward to develop regulations about the truth of facts, using labels that weigh truthfulness based on research, the credibility of witnesses and fact-checking websites. The US “Pizzagate” example illustrates how easy it would have been to check the facts around this “fake news” about a pizza restaurant harming children, which culminated in a man being arrested with an assault rifle at the restaurant because he was investigating what he had read11. Careful thought will need to be given to how to regulate opinions based on “fake news”, in light of states´ obligations around freedom of opinion and expression.

II. What is new about “fake news” online that existing regulations dealing with “fake news” in print cannot resolve?

3. Regulators need to understand how “fake news” operates online versus offline in print or television media. Defamation and intellectual property laws have regulated content in print and on television for some time, and clearly the message from parliaments and governments of the world is that these regulations do not seem to be effective online. This howtoregulate article examines some of these regulations in Part C. There are of course significant differences of scale and amplification online, particularly on social media platforms such as Facebook, Twitter and YouTube, where one post or one tweet can reach millions in seconds. It is easy to see how a post or tweet shared hundreds, sometimes thousands, of times among our friends, by commentators whose opinions we trust and by leaders could make one think that a “fake news” story is actually true. More and more citizens are going online to read their news, and this has had a significant effect on the print and broadcast media, particularly on the flow of advertising money. The well-established professional rules and codes of ethics used by journalists helped to make print and broadcast news credible. Journalists online still adhere to such rules, but producing news online is not the sole domain of journalists. Instagramers, Tweeters, YouTubers and bloggers also generate news that carries influence among their readership, and there is no professional code of ethics for such news producers beyond the rules of the social media platform.

III. Which measures are taken by telecommunication and other internet companies?

4. To some degree or another, online social media companies have always engaged in some form of regulation of online communication. Both Facebook and Twitter have publicly acknowledged that their platforms are a place for authentic dialogue and that the open exchange of information can have a positive impact on the world, which is why they require people to use the names they are known by and monitor and remove inauthentic accounts12. Facebook´s Community Standards state that content can be removed, audiences restricted and accounts disabled where content presents a genuine risk of physical harm or a direct threat to public safety, as well as for certain kinds of sensitive content and content containing the personal information of others without their consent13. Other online social media have similar policies that regulate their users´ content: Twitter has the Twitter Rules, YouTube its Community Guidelines, and Google a User Content and Conduct Policy. It is right and proper that a company is clear about its policies and terms of service, but when social media companies play such an influential role in how we send and receive information, is it appropriate that they are self-monitoring this information14? It is neither appropriate nor feasible for the state to be responsible for monitoring online communication. And yet, considering the scale of information to be monitored, there is probably no one better qualified than the company that knows its social network platform well. Each country should make a careful assessment of the appropriateness of social media companies self-monitoring in their respective jurisdictions.

5. Telecommunications companies such as Vodafone Group plc have blocked advertising on hate speech and “fake news” outlets worldwide:

Vodafone, third parties acting on its behalf and its advertising platform suppliers (including, but not limited to, Google and Facebook) must take all measures necessary to ensure that Vodafone advertising does not appear within hate speech and fake news outlets. We define these as outlets whose predominant purpose is the dissemination of content that is:

  • deliberately intended to degrade women or vulnerable minorities (“hate speech”) or

  • presented as fact-based news (as opposed to satire or opinion) that has no credible primary source (or relies on fraudulent attribution to a primary source) with what a reasonable person would conclude is the deliberate intention to mislead (“fake news”).

    Note that:

  • the term “outlet” encompasses all social media, digital, print and broadcast channels, sites, apps, programmes and publications;

  • the term “advertising” encompasses all forms of brand promotion including advertising, advertorial, sponsorship and co-marketing arrangements; and

  • these mandatory rules apply to all Vodafone brands, subsidiary brands, joint venture brands and sub-brands.

    The hate speech and fake news definitions, above, apply to an outlet as a whole. The test is whether or not the predominant purpose of the entire outlet is to communicate and share this kind of harmful material. An outlet that carries some hate speech or fake news content – but where the majority of content disseminated would not meet the tests above – must not be categorised as warranting exclusion from advertising whitelists on hate speech/fake news grounds.15

Vodafone also has a policy on Freedom of Expression and Network Censorship, as do other telecommunications companies16. Freedom-of-expression not-for-profit groups such as the Global Network Initiative and industry groups such as the Telecommunications Industry Dialogue work with telecommunications companies to address freedom of expression and privacy rights in the sector according to international human rights standards.

6. However, none of the rules that these social media and telecommunications companies apply to online communications concerns the veracity of the communication. Generally, the companies require users to register with their authentic email or phone number and name (although this is easily manipulated, as we have seen from the recent US indictment of 13 Russians and 3 Russian entities17), because they believe this causes users to think more carefully about their communications18. In theory, a user could complain to the social media platform about the truth of a particular story and, given social media´s commitment to authentic communication, the story might be taken down; but none of the social media platforms explicitly has a policy about “fake news”, although it is against Facebook´s terms for Pages to contain false, misleading, fraudulent, or deceptive claims or content19.

IV. What are the behaviours and intentions of those dealing with “fake news”?

7. In order to regulate the online communications of “fake news” we need to describe and understand the landscape of actors (see in particular page 13 of the Handbook: How To Regulate?). Some thought should be given to the following actors that have some control or influence in online social communication:

  • Online social media companies: Facebook, Twitter and Google want to ensure that their users have a positive experience but also that advertisers continue to use their platforms.

  • Online commentators: journalists, bloggers and social commentators can reach more people via social media. Regulations could, for example, create a duty on the social media companies hosting their content to be responsible for unlawful content on their sites.

  • Telecommunications companies: for Verizon Communications Inc, AT&T Inc and Vodafone Group plc, the popularity of social media means a strong market for internet subscriptions and mobile phone use. Regulations could, for example, prohibit company advertising on “fake news” sites.

  • Digital advertisers: these include those trying to market or advertise their products and the third parties that make money from advertising by volume of clicks on the sites they manage. The above example of regulations prohibiting company advertising on “fake news” sites could apply to the third-party advertisers who scout advertising opportunities based on high-volume sites.

  • Consumers: ordinary citizens who consume online communication and share what they read. Consumers could, for example, be educated about the characteristics of “fake news” and come to understand their role and responsibility in sharing, often unwittingly, “fake news”.

  • Politicians: given the influence of the internet and online social media, it is natural that politicians want to reach out to their constituents via these means. During elections, when “fake news” can often be rife, candidates could agree not to use social media bots, as the major parties did in the 2017 German elections; the use of such bots was a factor in the volume of “fake news” shared about the candidates in the 2016 US presidential election.

  • Nation states: nation states can sometimes be the promulgators of “fake news” or disinformation for a variety of strategic reasons. An example of an EU approach to combating disinformation is the website EU vs Disinfo, created by the European External Action Service East Stratcom Task Force, which aims to better forecast, address and respond to pro-Kremlin disinformation. EU vs Disinfo also maintains a publicly accessible database of over 3,500 disinformation cases since September 2015, updated weekly.

8. At a minimum, any communication involves two stages: the first is the sender or operator of the “fake news”, who may not necessarily be the author, and the second is the receiver or consumer of the “fake news”. Regulators should of course look at the behaviours of those at the first stage of the “fake news” communication and find ways of reducing or stopping the communication from occurring in the first place. Given the millions of communications that occur, it would not be possible to capture them all, and so the second stage, the receiver of information, requires regulatory attention too. Why does the receiver of a “fake news” communication internalise the message rather than check its veracity before sharing it? And what causes the receiver to act on the communication? For example, there was a case in Germany in which a missing girl (it was later revealed she was at a friend´s house) was turned into “fake news” about a girl sexually assaulted by immigrants, and operators of the “fake news” encouraged people via Facebook to join anti-immigration protests, which many did20. So “fake news” can involve mis-information and dis-information but can also be used to incite action. Some countries try to regulate peaceful protest by delimiting areas for various groups, particularly those whose views are diametrically opposed, so that they do not protest beside each other. However, these measures require groups to inform the public authority of the day and time of the protest; pop-up protests advertised on Facebook or on closed messaging apps like WhatsApp are unlikely to follow any such rules that might exist. Regulations aimed at those inciting pop-up protests are not easily designed and require careful thought. But regulations could provide legal empowerments for police and other authorities to counter, by public announcements, “fake news” targeting individuals to act or behave in a certain way. The recently announced UK dedicated national security unit to tackle fake news and disinformation by state actors and others is an example of such an empowerment21. Other regulatory measures are expanded on in Part D.

B. International and supranational regulatory framework

State operations against “fake news” are to be measured against legal obligations deriving from international human rights instruments, as freedom of expression might be affected.

I. International Covenant on Civil and Political Rights (ICCPR)

1. Any regulation designed to regulate online communications of “fake news” should be consistent with States parties´ obligations under Article 19 of the International Covenant on Civil and Political Rights (ICCPR):

1. Everyone shall have the right to hold opinions without interference.

2. Everyone shall have the right to freedom of expression; this right shall include freedom to seek, receive and impart information and ideas of all kinds, regardless of frontiers, either orally, in writing or in print, in the form of art, or through any other media of his choice.

3. The exercise of the rights provided for in paragraph 2 of this article carries with it special duties and responsibilities. It may therefore be subject to certain restrictions, but these shall only be such as are provided by law and are necessary:

(a) For respect of the rights or reputations of others;

(b) For the protection of national security or of public order (ordre public), or of public health or morals.22

2. Similar obligations are contained in Article 10 of the European Convention on Human Rights, Article 9 of the African Charter on Human and Peoples´ Rights and Article 23 of the Association of Southeast Asian Nations Human Rights Declaration, to name a few. The right to freedom of opinion and expression is not absolute and excludes “advocacy of national, racial or religious hatred that constitutes incitement to discrimination, hostility or violence”23 and “arbitrary or unlawful interference with privacy, family, home or correspondence … unlawful attacks on [a person´s] honour and reputation”24. However, restrictions to freedom of opinion and expression must be sufficiently clear, accessible and predictable, as well as proportional and laid down in national legislation25. A proportional restriction must “target a specific objective and [not] unduly intrude upon the rights of targeted persons”26; the intrusion must be limited and justified by the interest it supports. Regulatory measures aiming to restrict the content of “fake news” must be “the least intrusive instrument among those which might achieve the desired result”27.

3. Freedom of opinion and expression is an important part of democracy because a marketplace of ideas, where genuinely held opinions and expressions are shared freely, actually strengthens democracy. However, the opinions and expressions that are shared are often not genuinely held. Often the intent of sharing information is not to participate in a marketplace of ideas but to stifle debate: if not to silence dissent, then to drown it out by flooding the marketplace with the same idea through automated bots using fake accounts. This may mean that some regulatory measures to restrict information disorder that could harm democratic institutions should not be chosen because they erode freedom of opinion and expression too much.

II. European Convention on Human Rights / Council of Europe

4. The Council of Europe uses criteria to measure its member states´ implementation of freedom of expression on the internet, which provide a useful checklist for analysing appropriate regulatory measures. The criteria include:

  • Restrictions of internet content are prescribed by law, pursue the legitimate aims set out in Article 10 of the Convention and are necessary in a democratic society. The law provides for sufficient safeguards against abuse, including control over the scope of restriction and effective judicial review.

  • The scope of any measure to block or filter internet content is determined by a judicial authority or an independent body having due regard to its proportionality.

  • Internet intermediaries do not monitor their users, whether for commercial, political or any other purposes.

  • Internet intermediaries are not held responsible for the information disseminated via the technology they supply except when they have knowledge of illegal content and activity and do not act expeditiously to remove it.

  • Internet intermediaries do not censor content generated or transmitted by internet users.

  • There is no surveillance of internet users’ communication and activity on the internet except when this is strictly in compliance with Article 8 of the Convention.

  • Educational policies are in place to further media and information literacy and improve users’ skills, knowledge and critical understanding of content online.

C. National regulations

As only one jurisdiction has adopted explicit legislation against “fake news”, we present in this Part C that legislation, followed by overviews of regulation on topics connected to “fake news”. It is worth mentioning that, although only one jurisdiction has adopted explicit legislation against “fake news”, the French are expected to introduce a draft law in the “coming days” that focuses on the `confidence and reliability of information´ in the lead-up to elections and will also reform the Bichet law that governs the distribution of the press28. The UK has not ruled out introducing legislation to tackle “fake news”, for example by reforming the UK legal definition of publishers to include Facebook and Google so that they bear responsibility for content on their sites, but its preference is to work with companies and industry to deliver a safe internet for all29. The UK has so far introduced a voluntary levy to pay for a range of measures to combat and raise awareness about online bullying and other web dangers, and proposes to develop a social media code of practice to boost efforts to combat online bullying, intimidation or humiliation, as well as an annual `internet safety transparency report´30.

I. Germany: currently the first and only “fake news” legislation

1. Responding to the increasing spread of hate crime, defamation and “fake news” on the internet (the most famous example being the “Lisa case”31), the German parliament approved the Netzwerkdurchsetzungsgesetz (NetzDG) on 30 June 2017 (in full effect since 1 January 2018). Germany is the only jurisdiction to have passed legislation dealing with “fake news” as it relates to hate speech and other unlawful content on social networks such as Facebook, Twitter and YouTube. The German regulatory approach focuses on enforcing existing German criminal law online by setting standards for the complaints mechanisms of social network platforms and for reporting on unlawful content. The Network Enforcement Act (NetzDG) requires telemedia service providers which, for profit-making purposes, operate internet platforms designed as social networks to remove unlawful content (Section 1 Scope), defined as that:

[w]hich fulfils the requirements of the offences described in Sections 86 [Dissemination of Means of Propaganda of Unconstitutional Organisations], 86a, 89a, 91, 100a [Treasonous Falsification], 111 [Public Incitement to Crime], 126, 129 to 129b [Formation of Criminal/Terrorist Organisations], 130, 131 [Representation of Violence], 140, 166 [Insulting of Faiths, Religious Societies and Organisations Dedicated to a Philosophy of Life], 184b in connection with 184d, 185 to 187 [Insult/Malicious Gossip/Defamation], 241 or 269 of the Criminal Code and which is not justified.32

2. The NetzDG avoids the tricky problem of defining “fake news” by focusing on unlawful content, unlawful as defined in the German Criminal Code and its associated case law. This includes defamatory views or statements against natural or legal persons (Insult/Malicious Gossip/Defamation), which is why the majority of “fake news” is covered by the NetzDG. The NetzDG requires telemedia service providers to remove manifestly unlawful content within 24 hours or face fines of up to €50 million, and to provide German users with an easily accessible online system for making complaints. Other unlawful content has to be taken down or blocked without delay, in general within 7 days, though this can take longer depending on the factual circumstances. If the complaint is referred to the independent self-regulatory body, the 7-day deadline does not apply. The NetzDG requires the telemedia service provider itself to assess whether content is unlawful, but also allows for the creation of an independent self-regulatory body, funded by telemedia service providers, to assess unlawful content33. The creation of an independent self-regulatory body addresses issues around `who is judging´ the content as unlawful and concerns around freedom of opinion and expression.

3. Telemedia service providers (with more than 2 million registered users in Germany) that receive more than 100 complaints per calendar year about unlawful content are obliged to publish a report covering the points at Section 2, subsection 2 of the NetzDG. These points include the provider´s efforts to eliminate criminally punishable activity on the platform; the mechanism for submitting complaints and the criteria applied in deciding whether to delete or block unlawful content; the organisation, personnel resources, and specialist and linguistic expertise of the units responsible for processing complaints, including training; and the number of complaints for which an external body was consulted in preparation for making the decision, among others34. Transparency around the criteria applied in deciding on unlawful content helps users to know what kinds of material are being blocked, and such information will promote more discussion about the balance between freedom of opinion and expression and the regulation of unlawful content. The NetzDG also requires providers with more than 2 million registered users in Germany to maintain an effective and transparent procedure for handling complaints about unlawful content; the minimum elements of this procedure are listed at Section 3, subsection 2.

4. The German approach to regulating unlawful content on social networks has been analysed by the Special Rapporteur for freedom of opinion and expression in a Letter to the German government, to which the German government provided a Letter of Response. These two letters provide a useful explanation of the issues NetzDG raises for freedom of opinion and expression and for the prohibition of advocacy of national, racial or religious hatred that constitutes incitement to discrimination, hostility or violence.

5. Another regulatory tool used in Germany during the 2017 elections was the agreement of the major political parties to refrain from using social media bots in the election35. Although this was not a regulatory measure provided by legislation or made by the government agency responsible for elections, it was useful in reducing the spread of information disorder. The use of automated bots by some candidates in the 2016 US election, by comparison, was thought to have facilitated the spread of “fake news”36. In theory, jurisdictions could prohibit the use of bots by political parties or candidates. This would also help to identify illicit use of bots by other operators.

II. Regulation of other jurisdictions on defamation

6. Most countries have defamation regulations that enable a person to file a civil suit if she/he believes her/his reputation has been unfairly undermined. Reputation is usually defined by the court, and in practice the standard of damage to reputation is different for celebrities and politicians than for ordinary people. Facebook has a Defamatory Reporting Form to deal with complaints of defamation, and so do YouTube and Instagram as regards unlawful content under the NetzDG. Many countries also criminalise defamation, which enables the state to initiate criminal prosecutions against authors of defamatory material in the public interest. Defamation regulations have influenced social media companies to self-regulate by prohibiting users from producing defamatory content and by establishing complaints mechanisms for investigating potentially defamatory content. Noting the differences in defamation laws between jurisdictions, regulators should investigate how useful such laws have been as a deterrent to online defamatory content. The ease with which reputations can be harmed on the internet, under the cover of anonymity, may make it easier for people to say whatever they like.

III. Regulation of other jurisdictions on Intellectual property

7. Intellectual property (IP) laws cover a wide range of civil rights and protections such as copyright, trademarks, logos and rights with respect to a person´s name and likeness. Like defamation laws, IP laws have influenced social media companies to prohibit users from infringing the IP rights of others and to establish complaints mechanisms. “Fake news” stories have either passed themselves off as stories from reputable news outlets or used a name similar to a reputable news outlet, for example the New York Reporter instead of the New York Times. IP rights can only be enforced by the owner, and the owner must prove they have a right, either through registration, as is the case with patents, designs and trademarks, or through copyright legislation. For example, Facebook´s IP complaints form makes it clear that only IP owners may complain about infringement; users who see an IP infringement are encouraged by Facebook to contact the owner of the IP. The downside of IP enforcement online is that the extent of an infringement may not justify the cost of bringing infringement proceedings.

IV. Trade practices and consumer protection

8. Trade practices and consumer protection laws aim to ensure that businesses and individuals do not engage in consumer fraud or deception and that they provide safe products and services to consumers. Such laws have influenced social media companies to self-regulate and prohibit the sale of regulated goods such as prescription drugs, marijuana, firearms and ammunition. State trade and consumer agencies are often empowered to investigate questionable trade practices: the US Federal Trade Commission, for example, is empowered to investigate questionable trade practices and take appropriate enforcement action, and accepts consumer complaints on a broad range of topics including identity theft, unfair business practices, computers, the internet and online privacy, and credit scams37. The Acai Case shows how “fake news” sites take on the characteristics of trustworthy news sites, such as their logos, to promote products, in this case the diet supplement acai.

V. Measures of various jurisdictions on education

9. Regulations about online communications or “fake news” can also include measures to educate people in ways that encourage behavioural change, reducing the influence of “fake news”, or to fund projects that aim to reduce its negative effects. We have found the following commendable examples from private and public initiatives:

  • Finland: Faktana, kiitos! is a project funded by the Tiina and Antti Herlin Foundation (an independent foundation that supports and promotes social welfare, culture, the environment and science) to bring journalists into schools in Finland to teach media literacy. The purpose of each school visit is to support students in the independent evaluation of information and to raise their sense of responsibility: on social media, everyone has an influence on what kind of knowledge spreads.

  • Czech Republic: Hate Free Culture is a government-funded anti-xenophobia project that tackles xenophobia by providing education in schools, training police on how to deal with hate crimes and consulting with local officials. With the rise of “fake news” online, Hate Free Culture started a hoax-debunking website to which the public can send stories to be fact checked.

  • US: Librarians: librarians in the US are educating students in digital literacy, including testing students´ ability to tell the difference between advertising, publicity, propaganda, news and opinion pieces38.

  • US: Allsides is a private organisation whose mission is to free people from filter bubbles so they can better understand the world and each other. An interesting technique it has developed is Media Bias Ratings, which give readers a sense of the political leaning of a source, using US-defined ratings of hard left, small left, centre, small right and hard right. Allsides has disclosed how it developed its media bias ratings, using a number of tools such as blind surveys, third-party data and community feedback.

  • Spain: Transparent Journalism Tool: the aim of the tool is to introduce radical transparency into the editorial process, allowing readers to see all the information behind a story, including the reasons for covering a certain topic, the number of people working on the content, validated sources and consulted documents. The Spanish news site Público will soon be launching this tool in its stories because it believes that news traceability is key to restoring citizens´ trust in the media.

  • Austria: Addendum is an investigative journalism project that produces a television feature on a private broadcaster as well as detailed articles online. It is funded by the not-for-profit Quo Vadis Veritas Foundation. As resources for investigative journalism shrank in response to advertising revenue moving to more lucrative platforms such as Google and Facebook, fact-based journalism shrank too. The Addendum project focuses on solid data and facts for others to use, so that there is a trustworthy market for evidence-based journalism.

D. Ensuring compliance

In this Part we investigate which measures could be taken to increase compliance. This part is inspired by Chapter 7 of the Handbook: How To Regulate? and the recommendations in the Council of Europe Report Information Disorder: Toward an interdisciplinary framework for research and policy making, reproduced at the end of the article.

I. Accreditation

1. The regulator should think about how well citizens understand the differences between a news story, an opinion piece, a sponsored article, propaganda and advertising. This is where accreditation can be useful: advertising could be steered towards accredited news outlets, readers could easily identify reliable news, and algorithms could give preference to news stories from, and searches of, accredited sites, as sketched below. There are many independent, non-governmental organisations that could start an accreditation system, such as the International Press Institute, whose mission is to promote the conditions that allow journalism to fulfil its public function. The International Federation of Journalists might also be an appropriate accreditation body, as its mission includes promoting media freedom, professionalism and ethical standards.

II. Labelling

2. Labelling is starting to be used more online to identify news according to whether it is sponsored, advertising, carries a bias rating, etc. Product labelling has been successful in the kosher food industry and for products of the fair trade industry. There could be some value in regulators designing standards around journalist ethics and codes of practice, or perhaps some form of self-regulation organised by the International Press Institute or the International Federation of Journalists. This would also need to include education about what a counterfeit label might look like. Evidently, a specific label could be linked to the accreditation mentioned in the previous section.

III. Public service announcements

3. Many YouTube videos have some seconds devoted to advertising. Regulators could consider requirements for such platforms to dedicate a number of seconds to public service announcements concerning media literacy, the danger of “fake news” and the value of accreditation of news outlets, to name but a few.

IV. Regulating advertising

4. The Council of Europe Report on Information Disorder suggests that states should draft regulations to prevent any advertising from appearing on fabricated “news” sites. The US has regulations requiring disclosure of who has funded campaign advertisements in print, radio and television media, but these do not apply to online advertising39. A bill currently before Congress, the Honest Ads Act, would remedy this problem by requiring social media companies such as Facebook and Google to disclose who funded advertisements on their platforms, which would help ensure that ads are not directly or indirectly funded by foreign countries. Disclosure of who funded an advertisement, and of which users are targeted, may be worth considering outside of election periods as well. As previously mentioned, Vodafone has taken the initiative of blocking advertising on hate speech and “fake news” outlets worldwide, and has a good definition of hate speech and “fake news” outlets binding the third parties that organise Vodafone advertising; see Part A, section III, paragraph 5. That Vodafone has taken this step in the absence of regulation may indicate that legislative change is not required to encourage other companies to do the same.

V. Complaints portals and identification algorithms

5. At least for certain particularly severe infringements, citizens and economic operators should have the possibility to alert authorities via complaints portals. If the authorities in charge of supervision and enforcement themselves use suitable algorithms for the identification of “fake news”, they will have two sources of information which can be intelligently combined to identify the most important cases of “fake news” meriting ex officio action by the authorities.

VI. Legal action of competitors

6. Given that state authorities might have limited administrative capacities, it might be suitable to give competitors the option to sue their peers in case of infringement. For example, internet service providers could obtain the right to sue their competitors if the latter remain inactive in the face of evident “fake news”. It is in the interest of the policing operator that its peers ensure the same level of compliance. Evidently, this measure should be applied with the right dosage, taking into account the capacities of the courts in charge.

VII. Legal aid for victims of “fake news”

7. Financial support and free advice to victims of “fake news” might enable the victims to take legal measures against perpetrators. Empowered victims will increase the level of compliance by sanctioning perpetrators.

VIII. Class actions and legal action by accredited organisations

8. The US technique of the “class action” is slowly infiltrating other jurisdictions. It consists of permitting law firms to raise a legal claim on behalf of a group or “class” of individuals with similar legal claims, without being formally empowered by each of them individually. Class actions are usually allowed in civil cases only. They are a powerful but limited tool: individuals can amplify their ability to litigate, negotiate and settle disputes, but strength in numbers also limits choices and options. Depending on the jurisdiction, disadvantages could include conflicts between the different parties involved, settlement money or money from a successful claim that is lower than if one took one´s own case to court, limited ability to control proceedings, and the extinguishing of the right to later bring an individual claim to court.40

9. Another legislative technique is to empower accredited, mostly private organisations to take legal action in the public interest. This technique has been used by various jurisdictions in the field of environmental protection41 or animal rights. In international law, the Aarhus Convention establishes a number of rights of the public (individuals and their associations) to information, to participation in decision-making and to challenge public decisions concerning the environment. These rights are laid out in the Convention on Access to Information, Public Participation in Decision-Making and Access to Justice in Environmental Matters.

IX. Policing by private, but public utility organisations

10. The technique presented in the last paragraph is basically just a variant of the technique of using private organisations as aids for surveillance and enforcement. There are plenty of variants of this technique. Evidently, most cost some money, though much less than surveillance and enforcement by public agents. But even when there is no funding available, it might be possible to cooperate with NGOs whose members engage as volunteers against “fake news”. This form of citizen engagement can be integrated into a comprehensive regulatory strategy. In this context, we recommend considering non-monetary rewards, such as titles. As accredited “internet surveillance stewards”, citizens might feel more motivated to take over policing tasks than without such a title.

X. Self-regulating bodies and certification

11. As demonstrated in the case of Germany (see Part C, Section I above), the installation of self-regulating bodies is a very useful technique complementing other compliance measures. In addition to the recognition of self-regulation institutions by the administrative authority permitted under the German Network Enforcement Act42, we recommend that self-regulating bodies also provide basic certification of the processes to be installed by internet service providers to identify “fake news” or other illicit internet content. Given the high number of infringements, the internal processes of service providers are of the utmost importance. These processes need to be organised based on best practices. Whilst it might go too far to require external quality system certification, certification by a self-regulating body could be a proportionate way of verifying that suitable internal processes have been established.

XI. Legal empowerments for state counter-action

12. As much as we favour the involvement of private operators in policing, there will be cases where only action by a highly trusted institution, such as the police, can remedy the negative effects caused by “fake news”. This is in particular the case when helpless private persons are targeted. Accordingly, it might be commendable to provide special legal empowerments for police and other authorities to counter “fake news” by public announcements, in particular when it targets individuals. In addition, the authorities need an empowerment to oblige operators to delete “fake news” or other illicit content, or to post countering information alongside the “fake news” or illicit content.

13. All these empowerments should be extended to preliminary measures in case of mere suspicion of “fake news”, giving the authorities the means to act even when there are still some doubts but a strong likelihood that the incriminated information is “fake”. The empowerments should be accompanied by empowerments and obligations for authorities and private persons to exchange data, even data of a personal nature (an exception to privacy laws in view of the prevailing interests). Reference to the classic investigative empowerments of the respective jurisdiction might well complement the toolbox of the authorities in charge of ensuring compliance.

14. Furthermore, other authorities and certain other key actors (e.g. private associations entrusted by the state to work for the public good) should be obliged to cooperate in the detection and countering of “fake news”.

15. If you so wish, you could consult page 91 of the Handbook: How To Regulate?, which outlines the pros and cons of general enforcement powers versus explicit and detailed inspection and enforcement powers. Where general enforcement powers are provided for in the regulation, it should also explicitly mention the most far-reaching measures, such as take-down measures or blocking. In jurisdictions which require extremely precise and delimited empowerments, regulators might appreciate studying Singapore´s Air Navigation Act 2014, which contains comprehensive empowerments at Divisions 2 and 3. Our howtoregulate article “Regulating Research and Technology Risks: Part II – Technology Risks” also contains information, at paragraph 10, about empowerments in relation to identified risks.

XII. Penal and administrative sanctions

16. Whilst penal sanctions are mostly on the radar of regulators, we recommend considering an additional administrative sanction against the legal bodies (the economic operators) which do not ensure compliance. The administrative sanction does not need to be based on the negligence of an individual: negligence of the entire organisation or bad management might suffice. Hence it is easier to prove the conditions for the sanction, which ensures a higher degree of efficiency. The German NetzDG imposes regulatory fines on any person who, intentionally or negligently, fails to produce the reports outlined in the Act, fails to provide a procedure for complaints, fails to monitor complaints or fails to offer training and support, amongst other breaches listed at Section 4, subsection 1 of the Act. Regulatory fines may be up to five million euros (Section 4, subsection 2 NetzDG).

XIII. Information of economic operators and the general public

17. Information is the basis for compliance. Good information can rarely be provided by regulation alone; hence regulators should consider accompanying measures. However, regulation can also contain provisions obliging certain key economic operators to inform other economic operators and the general public. The possibility of disseminating crucial information via key economic operators should be considered in this context. E.g. key internet service providers could be obliged:

  • to inform their customers of certain key provisions and tools against “fake news”;

  • to oblige, via their contracts, those of their clients who are themselves economic operators to do the same.

XIV. Labelling and rating

18. We have stated above that labelling can be established more quickly outside regulation. The same is true for quality rating, e.g. rating of compliance with regulation against “fake news”. However, there is a big advantage to establishing these mechanisms by regulation: the legal basis makes the labelling or rating more reliable, trustworthy and easier to execute, in particular with regard to legal concerns. Hence both should be considered.

XV. Openness to future phenomena – extension to other illicit content

19. In this Part D, we have sometimes used the term “fake news or other illicit content”. The purpose was, on the one hand, to invite regulators to consider whether, at the border of “fake news”, there are situations which they wish to cover as well. But it also has the secondary purpose of indicating that quite a few of the measures presented in this Part D could equally be applied to illicit content other than “fake news”. Again, the German approach is recommended for consideration. There is a high likelihood that ever new forms of illicit content will appear on the internet. Future-proof regulation should be open to integrating new forms of illicit content. Wishing to strike a balance between openness on the one hand and legal precision on the other, regulators might consider empowering the authority responsible for execution to fine-tune or even extend the scope of the regulation on “fake news” in view of certain policy parameters.

Further links

UK, “Green Paper: The Internet Safety Strategy”, https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/650949/Internet_Safety_Strategy_green_paper.pdf.

US Senate Select Committee on Intelligence, “Disinformation: a primer in Russian active measures and influence campaigns”, https://www.intelligence.senate.gov/hearings/open-hearing-disinformation-primer-russian-active-measures-and-influence-campaigns-panel-i#.

Singapore Parliament, “Green paper: Select Committee on Deliberate Online Falsehoods – Causes, Consequences and Countermeasures”, https://www.parliament.gov.sg/sconlinefalsehoods.

Communications Security Establishment of Canada, Cyber Threats to Canada´s Democratic Process, https://cyber.gc.ca/sites/default/files/cyber/2021-07/2021-threat-to-democratic-process-3-web-e.pdf.

Moore, M., Centre for the Study of Media, Communication and Power, Kings College London, Submission to: Inquiry into Fake News, p. 11 https://www.kcl.ac.uk/policy-institute/assets/cmcp/cmcp-consultation-fake-news.pdf.

Council of Europe, “Report Information Disorder: Toward an interdisciplinary framework for research and policy making”, https://rm.coe.int/information-disorder-report-2017/1680766412.

This article was written by Valerie Thomas, on behalf of the Regulatory Institute, Lisbon and Brussels.

Annex: Recommendations contained in the Council of Europe “Report Information Disorder: Toward an interdisciplinary framework for research and policy making”:

What could national governments do?

  1. Commission research to map information disorder. National governments should commission research studies to examine information disorder within their respective countries, using the conceptual map provided in this report. What types of information disorder are most common? Which platforms are the primary vehicles for dissemination? What research has been carried out that examines audience responses to this type of content in specific countries? The methodology should be consistent across these research exercises, so that different countries can be accurately compared.

  2. Regulate ad networks. While the platforms are taking steps to prevent fabricated ‘news’ sites from making money, other networks are stepping in to fill the gap. States should draft regulations to prevent any advertising from appearing on these sites.
  3. Require transparency around Facebook ads. There is currently no oversight in terms of who purchases ads on Facebook, what ads they purchase and which users are targeted. National governments should demand transparency about these ads so that ad purchasers and Facebook can be held accountable.
  4. Support public service media organisations and local news outlets. The financial strains placed on news organisations in recent years have led to ‘news deserts’ in certain areas. If we are serious about reducing the impact of information disorder, supporting quality journalism initiatives at the local, regional and national level needs to be a priority.
  5. Roll out advanced cyber-security training. Many government institutions use bespoke computer systems that are incredibly easy to hack, enabling the theft of data and the generation of mal-information. Training should be available at all levels of government to ensure everyone understands digital security best practices and to prevent attempts at hacking and phishing.
  6. Enforce minimum levels of public service news on the platforms. Encourage platforms to work with independent public media organisations to integrate quality news and analysis into users’ feeds.

What could education ministries do?

  1. Work internationally to create a standardised news literacy curriculum. Such a curriculum should be for all ages, based on best practices, and focus on adaptable research skills, critical assessment of information sources, the influence of emotion on critical thinking and the inner workings and implications of algorithms and artificial intelligence.

  2. Work with libraries. Libraries are one of the few institutions where trust has not declined, and for people no longer in full-time education, they are a critical resource for teaching the skills required for navigating the digital ecosystem. We must ensure communities can access both online and offline news and digital literacy materials via their local libraries.
  3. Update journalism school curricula. Ensure journalism schools teach computational monitoring and forensic verification techniques for finding and authenticating content circulating on the social web, as well as best practices for reporting on information disorder.

1UK Parliament Commons Select Committee for Digital, Culture, Media and Sport, “Fake news evidence session in Washington D.C.”, https://committees.parliament.uk/committee/378/digital-culture-media-and-sport-committee/news/103468/fake-news-evidence-session-in-washington-dc/.

2US Senate Select Committee on Intelligence, “Disinformation: a primer in Russian active measures and influence campaigns”, https://www.intelligence.senate.gov/hearings/open-hearing-disinformation-primer-russian-active-measures-and-influence-campaigns-panel-i#.

3Singapore Parliament, “Green paper: Select Committee on Deliberate Online Falsehoods – Causes, Consequences and Countermeasures”, https://www.parliament.gov.sg/sconlinefalsehoods. The Green Paper states “Around the world, falsehoods are being deliberately spread online, to attack public institutions and individuals. The aim is to sow discord amongst racial and religious communities, exploit fault-lines, undermine public institutions, interfere in elections as well as other democratic processes”.

4Chrisafis, A., The Guardian online, “Emmanuel Macron promises ban on fake news during elections” 3 January 2018, https://www.theguardian.com/world/2018/jan/03/emmanuel-macron-ban-fake-news-french-president.

5Government of Canada News Release, Protecting Canada´s democracy from cyber threats, 16 June 2017, https://www.canada.ca/en/democratic-institutions/news/2017/06/protecting_canadasdemocracyfromcyberthreats.html.

7EU news release online, “Tackling the spreading of fake news and disinformation”, 15 January 2018, https://ec.europa.eu/commission/news/tackling-spreading-fake-news-and-disinformation-2018-jan-15_en.

9US Office of the Director of National Intelligence, Intelligence Community Assessment: Assessing Russian Activities and Intentions in Recent US Elections, “Cyber Espionage Against US Political Organizations”, p. 2, https://www.dni.gov/files/documents/ICA_2017_01.pdf.

10International Press Institute, Freedom of Expression, Media Law and Defamation: A Reference and Training Manual for Europe, pp. 45-46 https://issuu.com/internationalpressinstitute/docs/foe-medialaw-defamation_eng.

12Facebook Community Standards, Using Your Authentic Identity: How Facebook´s authentic identity policy creates a safer environment, https://www.facebook.com/communitystandards#using-your-authentic-identity and Twitter for Good https://about.twitter.com/en/who-we-are/twitter-for-good.

13Facebook Community Standards page, https://www.facebook.com/communitystandards.

14Moore, M., Centre for the Study of Media, Communication and Power, Kings College London, Submission to: Inquiry into Fake News, p. 11 https://www.kcl.ac.uk/policy-institute/assets/cmcp/cmcp-consultation-fake-news.pdf.

15Media Release, Vodafone blocks advertising on hate speech and fake news outlets worldwide, 6 June 2017, https://news.vodafone.co.nz/article/vodafone-blocks-advertising-hate-speech-and-fake-news-outlets-worldwide#.

17US Special Counsel Robert Mueller Indictment https://www.justice.gov/file/1035477/download.

18Facebook Community Standards, Using Your Authentic Identity: How Facebook´s authentic identity policy creates a safer environment, https://www.facebook.com/communitystandards#using-your-authentic-identity.

19Testimony of Colin Stretch, General Counsel, Facebook, United States Senate Select Committee on Intelligence, hearing on “Social Media Influence in the 2016 US Elections”, p. 6 (Nov 1, 2017) https://www.intelligence.senate.gov/sites/default/files/documents/os-cstretch-110117.pdf.

20Meister, S., NATO Review, “The `Lisa case´: Germany as a target of Russian disinformation”, https://www.nato.int/docu/Review/2016/Also-in-2016/lisa-case-germany-target-russian-disinformation/EN/index.htm.

21Walker, P., The Guardian, “New national security unit set up to tackle fake news in UK”, 23 Jan 2018, https://www.theguardian.com/politics/2018/jan/23/new-national-security-unit-will-tackle-spread-of-fake-news-in-uk.

24Ibid. Article 17(1).

25Human Rights Committee, General comment No. 34 (CCPR/C/GC/34), 12 September 2011 paragraph 25 http://www2.ohchr.org/english/bodies/hrc/docs/gc34.pdf.

26UN General Assembly, Human Rights Council 29th Session, “Report of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression” (A/HRC/29/32) 22 May 2015, paragraph 35 https://documents-dds-ny.un.org/doc/UNDOC/GEN/G15/095/85/PDF/G1509585.pdf?OpenElement.

27See n 13, paragraph 34.

28Le Monde, “La ministre de la culture précise les contours de la loi contre les «fake news»”, 13 Feb 2018, http://www.lemonde.fr/actualite-medias/article/2018/02/13/la-ministre-de-la-culture-precise-les-contours-de-la-loi-contre-les-fake-news_5256147_3236.html.

29Walker, P., The Guardian, “Google and Facebook to be asked to pay to help UK tackle cyberbullying”, 11 Oct 2017, https://www.theguardian.com/technology/2017/oct/11/government-considers-classifying-google-facebook-publishers.

30Walker, P., The Guardian, “UK government considers classifying Google and Facebook as publishers”, 11 Oct 2017, https://www.theguardian.com/technology/2017/oct/11/google-and-facebook-to-be-asked-to-pay-to-help-tackle-cyberbullying.

31Meister, S., NATO Review, “The `Lisa case´: Germany as a target of Russian disinformation”, https://www.nato.int/docu/Review/2016/Also-in-2016/lisa-case-germany-target-russian-disinformation/EN/index.htm.

32Section 1, subsection 1 of the German Network Enforcement Act https://germanlawarchive.iuscomp.org/?p=1245.

33Ibid. Section 3, subsection 2, paragraph 6.

34Ibid. Section 2, subsection 2.

35Shalal, A. & Auchard, E., German election campaign largely unaffected by fake news or bots, Reuters, 22 September 2017, https://www.reuters.com/article/us-germany-election-fake/german-election-campaignlargely-unaffected-by-fake-news-or-bots-idUSKCN1BX258.

36Neudert, L., Computational Propaganda Research Project: Working Paper No. 2017.7, “Computational Propaganda in Germany: A Cautionary Tale”, University of Oxford, p. 8, http://comprop.oii.ox.ac.uk/wp-content/uploads/sites/89/2017/06/Comprop-Germany.pdf.

37Submit a Consumer Complaint to the US Federal Trade Commission, https://www.ftc.gov/faq/consumer-protection/submit-consumer-complaint-ftc.

38Large, J., The Seattle Times, “Librarians take up arms against fake news”, 6 February 2017, https://www.seattletimes.com/seattle-news/librarians-take-up-arms-against-fake-news/.

40Australian Securities and Investments Commission, “Class actions”, https://www.moneysmart.gov.au/investing/invest-smarter/problems-with-your-investments/class-actions.

41Section 487 of the Australian Environment Protection and Biodiversity Conservation Act is an example of a jurisdiction creating a legislative right to sue by providing a less stringent threshold for legal standing for environmental groups which might not otherwise have standing because they are not affected (in a direct sense) by a decision made under the Act.

42Section 3, subsection 7 of the German Network Enforcement Act.
