
Are Tech Platforms Doing Enough to Combat ‘Super Disinformers’?

In the current political economy, inaction (on mis/disinformation) is not an option for platforms.


(Editor's note: This story was first published on 16 December 2020. It is being republished from The Quint's archives in light of Twitter marking a tweet about an alleged Congress ‘toolkit’ by BJP spokesperson Sambit Patra as ‘manipulated’.)

On 2 December, Twitter labeled multiple tweets under its Synthetic and Manipulated Media policy, including one by the head of the Bharatiya Janata Party’s IT Cell, Amit Malviya, that included an edited video clip from the ongoing farm law protests.

At the time, many wondered whether this marked the start of a more interventionist role for the platform in the Indian context, or whether the application was a one-off.

Since then, there have been at least two more instances of the application of this policy.


First, a now-deleted tweet dated 30 August by Vivek Agnihotri was labeled (archive link) for sharing an edited clip of Joe Biden. It can certainly be debated whether this action was taken in the Indian context, because of the user, or in the context of the US, because of the topic.

Second, since 10 December, a number of tweets (examples of which can be seen here and here) misrepresenting sloganeering at a 2019 gathering in America as being linked to the current protests against the farm laws have been labeled as well. This group included a tweet by Tarek Fateh.

The reactions to these actions by Twitter have themselves been polarised, ranging from the celebratory (‘it is about time’) and the dismissive (‘too little, too late’) to accusations of interference in Indian politics by a ‘foreign company’.

The Repeat Super-Disinformer

Some of the accounts affected have large follower bases and high interaction rates, giving them the ability to amplify content and narratives, thus becoming ‘superspreaders.’

A Reuters Institute study on COVID-19 misinformation found that while ‘politicians, celebrities and public figures’ made up approximately 20 percent of the false claims covered by it, these claims accounted for 80 percent of the interactions.

They are also not first-time offenders, making them ‘repeat disinformers.’ It should be noted that these are not the only accounts that routinely spread disinformation.

Such behaviour can be attributed, in varying degrees, to most parts of the political spectrum, and it is therefore also helpful to situate such content using the framework of ‘Dangerous Speech.’

This combination creates a category of repeat super-disinformers that play an out-sized role in vitiating the information ecosystem at many levels.

First, directly by sharing content or messages targeting individuals or communities.

Second, the tactics used involve either making outright false claims, which can be relatively easily debunked, or employing a potent mix of true, false and subjective claims designed to be difficult and/or time-consuming to verify, leaving the act of fact-checking exposed to accusations of political bias.

Reactions to such claims being debunked or actioned by platforms vary from ignoring them outright and blaming ‘sources’ for providing inaccurate information, to doubling down and invoking ‘freedom of expression’.

Third, even the act of responding or reacting to such claims is fraught, as repeat super-disinformers are typically backed up by a ‘disinformation industrial complex.’

This results in further conflict, as accusations and counter-accusations are levelled at opponents and fact-checkers. Escalation also occurs as the number of participants increases, with users aligning with allies and clashing with opponents.

The content itself can range from drip-fed hate and dog-whistles to outright calls for violence, carrying the risk of real-world harm in the form of fuelling hatred, discrimination, and even violence.

Though the ‘online’ components of these conflicts often seem ephemeral, as one merely makes way for the next, the damage to the information ecosystem builds up, just as pollution does, leaving it ripe for continued exploitation by bad-faith actors with repeat super-disinformers at the tip of the spear.


Situating ‘Platform Inaction’

While platforms like Facebook exclude political speech from ‘fact-checking’, many other platforms do not.

Besides, not all repeat super-disinformers are ‘politicians’. Labelling represents a start, and it allows conversations around content moderation to move beyond the leave-up-or-take-down binary.

However, academics also caution that there isn’t yet a clear understanding of how labels actually impact the way people interpret or interact with content.

So, given the impact that these repeat super-disinformers can have, the question that springs to mind is: why have platforms not yet been able to act against them definitively?

To try and answer this question, it is important to consider two more, even if subjectively: Why are they acting at all now? Are they capable of acting effectively?

User-generated content platforms have had policies against misinformation and disinformation in place for years, though they have historically been wary of assuming the role of ‘arbiters of truth’, especially on ‘domestic political disinformation.’

The combination of the COVID-19 pandemic and the US elections, along with platforms’ incremental responses to health misinformation and ‘domestic political disinformation’, led to increased pressure on them to intervene in more countries.

This means that there is a higher risk of arbitrary actions by platforms in response to outrage-cycles instead of a consistent, principled approach. Their incentives to act may also be affected by business and security considerations, as The Wall Street Journal’s reporting has indicated.

Can Platforms Act Effectively?

On the question of capability, the pandemic and the US elections have shown that platforms can act if they choose to do so. For the elections in Myanmar, Facebook even announced the use of an ‘Image Context Reshare’ feature that would warn users if they were about to upload image content that is over a year old and may violate its policies on violent content.

Yet, there is plenty of evidence to suggest that their interventions are far from perfect. Even as research suggests that a large portion of mis/disinforming content is reconfigured or taken out of context, platform actions fail to address this.

Little has been said or written about the ‘Image Context Reshare’ feature apart from two lines buried in a blog post, so we don’t know how well, if at all, it worked.

The same Reuters Institute study referenced earlier also found that a number of false claims stayed up on platforms without any labels (59 percent on Twitter, 27 percent on YouTube and 24 percent on Facebook).

An analysis by Avaaz demonstrated that minor modifications allowed content that had already been labeled on Facebook to bypass the platform's actions, highlighting the brittleness of the underlying mechanisms.

In the US, even after the expansion of policies and stricter enforcement, disinformation continues to run rampant.

De-platforming these repeat super-disinformers is also an available option, but enforcement in this regard has been inconsistent. It is also easily bypassed, as users create replacement accounts or operate back-up accounts in anticipation of removal.

A heavy-handed ID verification approach could be tempting, but it would be myopic and have far-reaching effects.


The scale of these challenges only becomes more daunting once the need for action at a global scale, informed by local context and robust across multiple languages, is taken into account.

For example, the time taken by platforms to label content by Donald Trump has gradually reduced; however, this is not going to be reproducible across more than 100 countries and potentially several thousand repeat super-disinformers, even with a substantial increase in resources.

How far we are from that today is captured by Facebook’s missteps with the EndSARS and Sikh hashtags in March and November.

There is an additional question to consider as well: should platforms be given this elevated role? And what impact will that have on the relative power relations between governments, platforms and society?

In the current political economy, inaction is not an option for platforms. This makes it all the more important to ensure we understand the basis for their actions and push them to course-correct when those actions threaten to shift the balance of power away from society.

(Prateek Waghre is a research analyst at The Takshashila Institution. He writes MisDisMal-Information, a newsletter on the information ecosystem in India. This is an opinion piece, and the views expressed are the author’s own. The Quint neither endorses nor is responsible for them.)

