Almost two weeks after Twitter’s board accepted a $44-billion bid from Elon Musk, currently the richest man in the world, expert opinions abound on what this could mean for online activism and for widening wealth inequality. All highlight the perils of living through late-stage capitalism, where one man may get a near-unfettered hand in deciding what more than 330 million people get to say via Twitter. To be fair, this is not a drastic change from the existing way of governing social media platforms, but more on that later.
There is very little to say about this deal that has not been said already. My intention, therefore, is not to introduce any new opinions into this discourse, but rather to highlight existing threads of social media governance and censorship that seem to converge around the prospect of the world’s richest man buying one of the most prominent social media platforms in the world.
Indeed, if his position as a ‘free speech maximalist’ is to be believed, the platform’s pre-existing problems would only be exacerbated.
Today’s incumbent social media platforms, such as Twitter (and Google and Facebook), looked vastly different at the beginning of the millennium, and had vastly different priorities and outlooks towards speech. Each of these platforms was founded in Silicon Valley, was rooted in a strong libertarian approach towards users’ freedom of speech, and emulated an ‘American’, First Amendment model in deciding what speech gets restricted. The First Amendment accords expansive protection to the constitutional right to freedom of speech and expression, with very limited restrictions. Contrast this with India’s constitutional framework, which has in-built restrictions on a citizen’s right to free speech.
This America-centric approach to free speech, however, would certainly not be congruent across the different regional markets that these social media platforms sought to expand to. As a result, they often ran into trouble with governments from different jurisdictions, which took exception to the content posted on these platforms, including in Thailand and Turkey.
Despite these challenges, the overall approach to platform regulation was laissez-faire. Characterised as an ‘era of Rights’, the period from the early 1990s to the 2010s was one in which regulators largely sought to immunise social media platforms from legal liability arising out of the content posted by their users. Section 230 of the Communications Decency Act (CDA) in the USA, Article 14 of the E-commerce Directive in the European Union, and section 79 of the Information Technology (IT) Act in India all reflect this approach, albeit with regional variations.
This narrative was visible at Twitter as late as 2012, when its then-management characterised the platform as a “technology company that is in the media business”.
The overall result of this has been that today’s platforms are built on business models that make it very difficult to control how their products are being used. These platforms, and the ease of communication introduced by them, have been increasingly weaponised by the loudest voices in society to sway public opinion towards their favour and hold on to political power.
Pressured by both governments and civil society, these platforms have attempted to respond to these challenges by moderating and censoring content that violates both their own norms of governance — colloquially known as ‘community standards’ — and the local laws of the countries in which they operate. And much like the stories of Turkey and Thailand, the question of what gets censored and what gets to stay up continues to be a thoroughly political affair, contested over historical, cultural and societal identities and boundaries.
And platforms have continued to get it wrong.
In 2020, during the raging first wave of the COVID-19 pandemic, Twitter did something unprecedented: it added a fact-checking label to a tweet of the then-President of the US, Donald Trump. This was only the beginning of a months-long back and forth between Trump and Twitter. Trump threatened (via an executive order) to rescind the immunity usually enjoyed by platforms like Twitter, arguing that such arbitrary exercise of Twitter’s power was undemocratic and a threat to the right to freedom of speech and expression.
Donald Trump had spent much of his time as the POTUS promoting baseless conspiracy theories and disinformation. He only doubled down on these narratives during the pandemic and in the days building up to the presidential elections. For Twitter, which had begun to roll out policies on how it would deal with information related to the pandemic and civic integrity processes, it seemed only natural that once Trump’s tweets violated these policies, they would be dealt with in the manner those policies detailed. And yet, when Twitter did take action, it felt like a watershed moment. From a largely laissez-faire approach to user content, the platform had gone on to de-platform the incumbent POTUS, arguably one of the most powerful men in the world.
Let us come back to India. On 26 October 2019, Twitter suspended the account of senior advocate of the Supreme Court, Sanjay Hegde, because he shared the famous photo of August Landmesser refusing to perform the Nazi salute amid a crowd at the Blohm+Voss shipyard. This apparently went against Twitter’s ‘hateful imagery’ guidelines.
Incensed by what he believed to be Twitter’s arbitrary exercise of power (and rightly so), Hegde took Twitter to the Delhi High Court, arguing that such suspension violated his right to freedom of speech and expression.
I have previously written about the relationship between the Trump and Hegde cases, and at the risk of repeating myself, it is worth considering the fascinating convergence in the issues the two cases highlight, despite their vastly contrasting circumstances.
Twitter was probably right in suspending Trump’s account and probably wrong in suspending Hegde’s. However, the unease in these two cases stems not from their core merits but from the obscure and opaque ways in which Twitter made its decisions in each. And in this, neither Trump’s nor Hegde’s contentions about the apparent arbitrariness of Twitter’s actions are unfounded. Why did it take Twitter four years to respond to Trump’s tweets?
The photo of August Landmesser is a historically important photograph, used as a symbol of defiance in the face of authoritarianism; what exactly was it about the post that violated Twitter’s ‘hateful imagery’ guideline?
The absence of these answers is what makes governance of speech across these incumbent platforms such a tricky issue. As Evelyn Douek notes, “If Twitter enforces its policies in an ad hoc manner [...] it opens itself up to charges of arbitrariness, questions about its motives, and tweet-by-tweet reevaluation of its role in the public discourse.”
At that moment, it was only a matter of luck that the leadership at the top, the people who get to decide what 330 million users get to say on Twitter, seemed to side with common sense: that disinformation during an ongoing health emergency and in the run-up to a critical electoral process was dangerous to the overall democratic fibre of a country.
But now that Elon Musk, a man whose political ideologies remain hard to pin down, has acquired Twitter, it might seem that this luck has run out.
Elon Musk styles himself as a free-speech maximalist. He wants Twitter to be “politically neutral” and to promote free speech as the bedrock of a functioning democracy. While it is still not clear how each of these mandates will be operationalised, many have pointed out that this signals that the platform’s mis/disinformation, trolling and toxicity problems are going to get worse. Implicit in the maximisation of ‘free speech’ as a virtue of this ‘digital town square’ is the belief that content moderation — even the imperfect, opaque form that the current Twitter status quo has carried out — is essentially contrary to the values of free speech. And this is simply not true.
Moderation, as James Grimmelmann notes, is a common characteristic across all healthy online communities (like Wikipedia, for instance). Just as town meetings have moderators to keep proceedings civil, online communities need moderation to “structure participation in a community to facilitate cooperation and prevent abuse”. Without systematic moderation, a community would be overrun by its loudest voices, driving nearly all legitimate discourse away from the platform.
There is a very easy way to demonstrate why we need moderation on online platforms. In India, reasonable restrictions on the constitutional right to freedom of speech and expression can only be imposed by the government; constitutional guarantees bind the State, not private actors. If platforms were to step back from moderation, no comparable check would operate on speech between private individuals online.
Without any moderation, we would continue to see coordinated harassment and silencing tactics against anyone who does not bear the mark of the majoritarian identity.
This is the problem with Musk’s free speech maximalism: whose free speech would be maximised? Who would stand to benefit from this political neutrality?
For vulnerable communities, such as caste-oppressed minorities, whose very existence online is often enough for them to be subjected to abuse and hate, this tolerance of and neutrality towards ‘criticism’ is meaningless. The ‘absolutist’ approach to free speech, reminiscent of the social media platforms’ earlier days, would not allay these platforms’ existing troubles. The weight of neutrality and tolerance, in the face of abuse, would not be borne by the whole community. Rather, as Mari Matsuda notes, it would be a psychic tax, borne by those who are least able to pay it.
The bottom line is that Twitter’s processes, opaque and seemingly arbitrary to begin with, stand very little chance of being improved under Musk’s reign. Rather, the concern is that whatever the existing moderation processes had achieved might be sacrificed at the altar of supposed free speech.
And, pragmatically speaking, Musk’s previous conduct does not inspire confidence that he will be accommodating of these concerns shared by historically marginalised communities across the world.
Today, both social media platforms and billionaires occupy such a significant corner of our modern, shared consciousness, that it is difficult to imagine any public conversation that does not ultimately invoke their presence. These entities and individuals have an overwhelming amount of power over something as simple and fundamental as our right to speak our mind online, and not get threatened, silenced or abused for it. The discussion above should thus be taken to be less about specific billionaires or specific social media platforms, and more about the sheer concentration of power amassed by any entity or individual discharging an important public function.
Even if we assume that, under Elon Musk, Twitter would not become a place overrun by trolls, the uncertainty about just how the platform would end up looking is itself part of the problem. With Zuckerberg controlling Facebook, Instagram and WhatsApp, and Musk controlling Twitter, that is two billionaires controlling four of the most important social media platforms of modern times. The fact that we know so little about the processes behind these platforms will continue to be a dangerous facet of our modern life.
(Torsha Sarkar is a researcher at Centre for Internet and Society. This is an opinion article and the views expressed are the author's own. The Quint neither endorses nor is responsible for them.)
(At The Quint, we question everything. Play an active role in shaping our journalism by becoming a member today.)