
Musk and X’s Unpredictability Is Undermining Their Role in Democratic Discourse

His emphasis on free speech and reduced censorship, while appealing to some, has led to instability and confusion.

Subimal Bhattacharjee
Opinion

Under Elon Musk's leadership, X has experienced a marked transformation compared to the platform's previous management.

(Photo: Aroop Mishra/The Quint)


On 31 August, Justice Alexandre de Moraes of Brazil's Supreme Court, sitting as a single-judge bench, ordered access to the social media platform X (formerly Twitter) to be blocked for non-compliance with a court order on hate speech.

The suspension will remain in effect until all court orders are complied with, outstanding fines are paid, and a new legal representative for the company is appointed in Brazil.

Under Elon Musk's leadership, X has experienced a marked transformation compared to the platform's previous management. Prior to Musk's acquisition, Twitter’s leadership under CEOs Jack Dorsey and Parag Agrawal took a more traditional approach to content moderation and platform governance.

This included efforts to combat misinformation, enforce community guidelines, and manage political discourse with a degree of consistency. For example, during the 2020 US presidential election, Twitter implemented measures to label and fact-check misleading information, aiming to curb the spread of false narratives and maintain the platform’s credibility.

Elon Musk’s approach, by contrast, has been marked by frequent changes that reflect his vision of a more open (that is, less regulated) platform.

Shift in Management Has Led to a Polarised User Base

Since Musk’s acquisition in October 2022, X has seen a significant relaxation of content moderation policies, which has led to a resurgence of hate speech and misinformation.

Musk’s emphasis on free speech and reduced censorship, while appealing to some users, has created an environment of instability and confusion. For instance, Musk's handling of the platform’s political content, such as hosting Republican presidential candidate Donald Trump, has raised concerns about bias and favouritism.

The suspension of X’s operations in Brazil over compliance issues highlights the challenges of Musk's leadership style, contrasting sharply with the more methodical and stable approach of Twitter’s previous leadership. This shift in management has led to a polarised user base and heightened scrutiny of the platform’s role in democratic discourse, reflecting the broader debate about the balance between free expression and responsible content management.

Additionally, Musk's management style has contributed to scepticism about the platform's fairness, with critics of its role in global communication perceiving a bias in how different political viewpoints are treated.

The pronounced biases on X have further eroded user trust, with many feeling uncertain about the reliability of information shared on the platform. Allegations of favouritism towards certain political narratives have been fuelled by Musk's own statements and actions, which some interpret as aligning with specific ideologies.

The lack of clear guidelines on algorithm transparency has only intensified these concerns, leading to fears that the platform may amplify extreme views while marginalising moderate voices. The ongoing legal battle in Brazil against X raises a pertinent question: is X’s unpredictability under Elon Musk’s leadership undermining its role in fostering fair and reliable political discourse?

Fairness is a critical aspect of democratic discourse, as it enables citizens to engage in free and open discussions, without fear of censorship or marginalisation. Social media platforms must prioritise fairness in their content moderation practices, algorithm design, and user engagement strategies.


What Social Media Platforms Could Do

One of the key challenges in achieving fairness on social media platforms is algorithmic bias. Algorithms are designed to prioritise certain types of content over others, which can amplify extreme views and marginalise moderate voices.

For example, a study by the Wall Street Journal found that Facebook's algorithm was biased towards conservative content, leading to the amplification of right-wing views and the suppression of liberal voices.

Social media platforms must prioritise transparency and accountability in their algorithm design. This can be achieved through the implementation of open-source algorithms, which allow users to understand how content is being moderated.

Additionally, platforms must provide clear guidelines on algorithm transparency, ensuring that users are aware of how their content is being moderated and why certain posts are being promoted over others.

Content moderation practices must be fair, consistent, and transparent, ensuring that all users are held to the same standards.

For instance, prior to 2022, Twitter's content moderation practices were criticised for being inconsistent and biased towards certain political ideologies. The platform subsequently implemented a new content moderation policy that prioritises transparency and consistency, and established an independent oversight body to review moderation decisions and ensure they are fair and unbiased.

Beyond algorithmic bias and content moderation, social media platforms must also ensure fairness in their user engagement strategies. This means creating an environment where all users feel welcome and included, regardless of their political beliefs or ideologies.

Platforms must also ensure that users are not subjected to harassment or intimidation, which can silence marginalised voices and undermine democratic discourse.

For example, Facebook has established a civil rights task force, which works to address issues of bias and discrimination on the platform.

Facebook has also rolled out a number of other measures, such as community standards and reporting tools, which enable users to flag harmful content so that it can be removed from the platform.

(Subimal Bhattacharjee is a Visiting Fellow at Ostrom Workshop, Indiana University Bloomington, USA, and a cybersecurity specialist. This is an opinion piece. The views expressed above are the author’s own. The Quint neither endorses nor is responsible for them.)

(At The Quint, we question everything. Play an active role in shaping our journalism by becoming a member today.)


