
Facebook Failed to Remove Posts Targeting Minorities in India

The latest allegations against Facebook suggest it continues to fail at removing hateful content in India.


Controversy has become synonymous with Facebook these days, and this week alone has seen multiple allegations levelled against the social networking giant.

Now, another report from Buzzfeed says Facebook has failed to remove hundreds of posts, memes and photos from India that targeted minorities, including the LGBTQ community and religious groups, despite human rights researchers reporting them for more than a year.

Facebook has struggled to moderate political content in the US and Europe, and it has also been held responsible for instigating mass violence along regional and communal lines in countries like Myanmar and Sri Lanka.

While the company has repeatedly said it is adding content moderators to improve the situation, this report suggests things are probably getting worse.


The Buzzfeed piece cites a report launched at the RightsCon conference this week, which warns: “Without urgent intervention, we fear we will see hate speech weaponized into a trigger for large-scale communal violence. After a year of advocacy with Facebook, we are deeply concerned that there has been little to no response from the company.”

Responding to the claims made in the report, a Facebook spokesperson highlighted the company's efforts to monitor and remove content on its platform and to proactively detect hate speech.

But activists clearly believe Facebook needs to put in far more effort, including adding more local moderators to its team, to stop the chaos spreading through a platform on which reporting content remains clunky and difficult even today.

These concerns are palpable, as Facebook caters to over 300 million users in India, and that is without counting its userbases on WhatsApp and Instagram.

Given the scale of its userbase in the country, Facebook needs a clear and effective model to weed out content that is likely to incite violence.

The Buzzfeed report illustrates these concerns with findings from Equality Labs, a South Asian American advocacy group focused on technology and human rights. To test Facebook's credentials at removing such material, the group reported posts on the platform that contained hateful content.

It found that 93 percent of the reported posts containing speech that violated Facebook's own rules were not removed from the platform, which suggests that whatever Facebook has been claiming till now has come to nought.

The group also hoped Facebook would act against the person or group behind a post much faster than its current 48-hour window, which is long enough for large-scale damage to be done.

And it's not just Facebook that will concern groups like Equality Labs. WhatsApp has been at the centre of violence fuelled by fake news, which is easily shared across groups and by individuals who are hard to track because of the app's encryption.

(At The Quint, we question everything. Play an active role in shaping our journalism by becoming a member today.)
