
On Facebook, Twitter, YouTube, Can We Say Whatever We Like? No

Human rights orgs have documented how governments are curbing free speech on the pretext of controlling fake news.


Falsehoods, misleading information, and hate appear everywhere, especially online. Though so-called ‘fake news’, narrowly defined, may be less pervasive than is sometimes assumed – one study found that it makes up about 0.15 percent of the average American media diet – misinformation and hate are real and widespread problems, and can cause serious harm, especially to already marginalised or vulnerable communities.

What can we do to limit the harm that different kinds of online misinformation and hate can cause?

This is a defining question of our time, one that we have to confront as citizens, and one that governments are increasingly asking – including the Indian government, which has recently complained to the Supreme Court that there is “absolutely no check on the web-based digital media”, pointing both to big digital platforms such as Facebook, Twitter, and YouTube, and to portals and individual online publishers.


How Companies Police Potentially Illegal Forms Of Expression – And Engage In ‘Content Moderation’

The basic “what can we do” question quickly unfolds into a host of other questions, including who decides what can be said and widely disseminated online, on what basis, with what enforcement, what kinds of transparency, and what kinds of due process.

The bottom line is: who should decide what we can say online?

The status quo in most of the world is essentially this: politicians write laws drawing lines between legal and illegal speech; anyone can, in principle, appeal to the big digital platforms on the basis of these laws (whether around copyright, libel, hate speech, or any other potentially illegal form of expression); and the platforms may decide to act, as they also do when courts rule on a case-by-case basis.

The companies also, to different degrees and in different ways, proactively try to police potentially illegal forms of expression, and often engage in ‘content moderation’ that goes well beyond the letter of the law, on the basis of ‘community standards’ and terms of service drawn up by the platforms and enforced more or less as they see fit.

In most parts of the world, if a private company decides that Holocaust denial is alright but depictions of female nipples are unacceptable, they are free to allow the former but remove the latter.

We Have No Legal Right To Say Whatever We Like On Facebook, Twitter, Or YouTube

International human rights law and many national constitutions protect our freedom of speech, which is not limited to statements that are deemed ‘correct’ or that governments find acceptable, and also protects forms of expression that are shocking, offensive, and disturbing. It is important to recognise that this is overwhelmingly a ‘negative right’, meant to protect us from government interference, not a ‘positive right’ that requires anyone else to let us express ourselves as we see fit – we have no legal right to say whatever we like on Facebook, Twitter, or YouTube.

As long as they do not break other laws by, for example, systematically discriminating against us on the basis of what are typically called ‘protected characteristics’ like sex, race, or religion, they can pretty much remove or reduce anything they want when they want to – something governments, at least in principle, are not supposed to do.

The status quo provides some protection from government interference and some means to deal, however slowly, with individual cases of illegal speech.

But it clearly struggles to deal with misinformation and hate at great scale and rapid pace (defining features of digital media). It struggles with the risk that companies restrict speech for commercial reasons or in response to political pressure. It struggles with the grey zone of material that may well be misinformation or hate in a broad sense but is not necessarily illegal. And it struggles with the fact that misleading propaganda and vile attacks on minorities and other groups are often a defining feature of political discourse, which now in large part plays out online and through the news.


How Govts Use ‘War On Fake News’ As Pretext To Curb Free Speech & Access To Info

It is clear that with the surge in online falsehoods, misleading information, and hate seen in many countries, the predominant push now is to ‘do more’. But it is important to recognise that, while the most vocal current criticism of the status quo – including often from governments eager to add new tools to their arsenal – is that it is too permissive, human rights organisations and free speech advocates have long argued that we also have the opposite problem.

They have documented how governments, including in nominally democratic countries, are using very real problems of misinformation and online hate as a pretext to crack down on free expression and limit access to information.

They have also documented how the notice-and-takedown systems that feed into content moderation by big digital platforms risk creating further incentives for platforms to respond to organised, powerful interests by exercising ‘private censorship’ and over-blocking – removing content that, on closer examination, is in fact legal and perhaps in the public interest, even if also sometimes problematic.

This is important to remember because it underlines how any answer to the question – “what can we do to limit the harm that different kinds of online misinformation and hate can cause?” – must also consider what, if any, harms the proposed remedies might cause, including harms to our fundamental rights.

The truth of the matter is that no one has figured out what to do.


Limiting Harm Caused By Online Misinformation: What Have Elected Representatives Done?

Despite years of intense public debate, elected officials in most countries have done nothing to change the rules of the game.

The 2016 US elections saw very significant problems, yet the 2020 US election is playing out under largely the same rules.

Politicians, election officials, and regulators have essentially done nothing, leaving it up to individual for-profit private companies to try to figure out their own response to online misinformation and hate, some of it coming from the governing party.

The situation in Europe is more complicated and has seen some small incremental steps, with the European Commission largely following the suggestions of the EU High Level Group on Online Disinformation, which recommended steering clear of direct content regulation in favour of indirect approaches: supporting independent journalism, fact-checking, and media literacy; forcing the big platforms to be more transparent; and trying to bring different stakeholders together for a collaborative response.

Individual European countries have taken additional steps. France, for example, passed legislation in 2018 enhancing the role of the judiciary, so that a judge may order platforms to stop the “deliberate, artificial or automatic and massive” dissemination of fake or misleading information online in the run-up to an election, on the basis of complaints from, for example, a political party or candidate.

How China ‘Controls’ Misinformation

Some governments have gone further. Where they have, civil society organisations and human rights groups have often been overwhelmingly critical, with the UN Special Rapporteur on Freedom of Expression frequently pointing out that the laws passed rely on very vague definitions of the problem they purport to address, suggest disproportionate responses, and offer little in terms of due process. China is an interesting example here.

With little regard for international human rights and an unapologetic preference for government control, China has launched numerous ‘campaigns’ and ‘crackdowns’ against misinformation, and implemented various regulations supposed to prevent people from ‘distorting or falsifying news information’ and quash rumours. Nonetheless, misinformation and hate continue to circulate.

Some problems are simply hard to tackle head-on, even for governments who are happy to ban things and punish those who do not toe the line.


Should Govts Decide What We Say Online?

We clearly want more action against misinformation and hate online. Many possible responses will probably have to be indirect, and involve civil society, independent news media, fact checkers, as well as greater transparency – if necessary secured through stronger oversight and regulation – from platform companies especially around their content moderation practices, including what they remove and reduce and why.

Others may involve the government or the judiciary.

But the answer to the question of how much more actively one wants the government and the judiciary to be directly involved in day-to-day decisions about what we can say online rests in part on our confidence that any individual government actually respects free expression (and rejects hate) and on confidence that the judiciary in question is in fact independent of the government.

The unsatisfying status quo answer to the question “who should decide what we can say online?” is a messy reality where in democratic countries, politicians make the basic rules, courts enforce them, and individual private companies improvise, however clumsily and half-heartedly, their own further measures in line with their own business interests.

Governments in many countries are increasingly unhappy with this. Some of them prefer a much simpler answer to the question “who gets to decide what we can say online?” They say “we should!” That is the Chinese route, and some other countries seem drawn to it. It is not clear that this in fact reduces online misinformation and hate. But it might achieve other ends.

(Rasmus Kleis Nielsen is Director of the Reuters Institute for the Study of Journalism and Professor of Political Communication at the University of Oxford. He served on the EU High Level Group on Online Disinformation. He tweets @rasmus_kleis. This is an opinion piece. The views expressed above are the author’s own. The Quint neither endorses nor is responsible for them.)
