A series of documents leaked by former Facebook employee-turned-whistleblower Frances Haugen has revealed that the platform let hate speech and misinformation spread unchecked, despite being aware of the problem.
The documents, which are being called 'The Facebook Papers', detail multiple notes and studies conducted in India since February 2019, including one that used a test account to examine Facebook's recommendation algorithm and how it exposed users to hate speech and misinformation.
In the course of this article, we will try to answer the following questions:
What is hate speech? What's the platform doing to filter hate speech and misinformation? What has been the impact and are the adopted measures enough?
Facebook defines hate speech as any content that directly attacks people based on "protected characteristics, including race, ethnicity, national origin, religious affiliation, sexual orientation, sex, gender, gender identity or serious disability or disease."
The “direct attacks” include any kind of violent or dehumanising speech, harmful stereotypes, statements of inferiority, disgust or dismissal, expressions of contempt, cursing and calls for exclusion or segregation, as explained in detail on Facebook’s transparency center.
Since 2016, Facebook has followed a policy of “remove, reduce and inform” on its platform to counter misinformation.
What this means is that the platform removes content that goes against its policies, reduces its reach and distribution, and informs other users with additional information or context so they can decide whether to view or share it.
For instance, Facebook has partnered with third-party fact-checkers (3PFC) around the globe to combat misinformation. In India, too, there are ten different fact-checking organisations that work in over 12 languages.
The Quint’s WebQoof is also one of Facebook’s third-party partners in India.
However, despite these partnerships and its human moderators, Facebook’s fight against misinformation and hate speech has run into several roadblocks.
Reports have revealed that Facebook's policy decisions in India have been influenced by political actors.
Srinivas Kodali, a researcher with the Free Software Movement of India, says that the current regulations in India do not suffice.
"I don’t think that the current government, the majoritarian BJP government has any interest in fixing this because we know it benefits them," he added.
Kodali also said that, at most, the Parliamentary committee on Information Technology would bring in Facebook for questioning, but that the process would keep getting disrupted.
"Indian institutions are not in a state to look into this because they are actively being asked not to look into it," Kodali said.
An internal study, quoted in the Facebook Papers, looked at the amount of misinformation that Facebook took down with the help of its fact-checking partners.
There are several other challenges that the platform has been facing in India. As per The New York Times, Facebook's AI is trained to work in only five of India's 22 officially recognised languages. Facebook says it employs human moderators to cover the rest.
But this is still troublesome. Despite Hindi and Bengali being the fourth- and seventh-most used languages on the platform, Facebook does not have effective systems to detect and combat inflammatory content or misinformation in them.
Systems to detect content inciting or promoting violence in these languages were put into place as recently as 2021.
Speaking to AP, a Facebook spokesperson said, “Hate speech against marginalised groups, including Muslims, is on the rise globally. So, we are improving enforcement and are committed to updating our policies as hate speech evolves online.”
They also added that hate speech identifiers in Bengali and Hindi "reduced the amount of hate speech that people see by half" in 2021.
The newly disclosed documents also reveal that 87 percent of the budget Facebook earmarks to tackle misinformation is spent in the United States, which accounts for only 10 percent of its market.
While platforms like Facebook say they are actively working to take down content likely to incite violence or promote enmity and hate, there are numerous examples of online hate resulting in offline violence.
In February 2020, BJP leader Kapil Mishra openly called for violence against mostly Muslim protestors to clear them off roads in a video published on Facebook. Riots ensued within hours, resulting in over 50 deaths.
The video was taken off the platform only after it amassed thousands of views and shares.
The pandemic, too, gave rise to increasingly communal content on the platform. At the onset of the pandemic, The Quint's WebQoof team debunked several such communal claims in which social media users shared visuals or text blaming the Muslim community for the spread of the coronavirus.
Similar trends have been seen in other countries: the Capitol Hill violence in the US, which allegedly originated in 'Stop the Steal' Facebook groups and culminated in a pro-Trump mob storming the Capitol, and Facebook posts targeting the Rohingya minority in Myanmar, which fuelled tensions and violence.
The 'Facebook Papers' revealed that the company had known about the multitude of issues its platform caused, through multiple internal studies over the years, and yet took little to no impactful action to address them.
Speaking to The Quint, Internet Freedom Foundation's Apar Gupta emphasised the need for Facebook to reallocate its funding when it comes to content moderation in its markets.
Gupta pointed to the revelation that just 13 percent of the budget Facebook sets aside to tackle misinformation is devoted to all countries other than the US.
He also spoke about the algorithmic processes the company uses to moderate hate speech, which Facebook terms "AI" and claims take down 90 percent of hateful content. Gupta noted that Facebook's internal reports revealed vastly different numbers.
"It only results in removal of 3 to 5 percent of hate speech, at best," he said.
A similar point is raised by senior journalist and researcher Maya Mirchandani in an article titled "Fighting Hate Speech, Balancing Freedoms: A Regulatory Challenge."
She notes that while AI and machine learning tools are being re-engineered and trained to monitor multimedia content on platforms, the volume of the content may get too much for the technology to handle, and that local contexts are "next to impossible for them to fathom."
However, as Facebook talks about upgrading its machine learning models and relying on AI for detection, the question is – why has the platform not been able to tackle the issue?
Prateek Waghre, a researcher at The Takshashila Institution, stressed the importance of holding platforms accountable, while adding a note of caution that, realistically, platforms should be held "accountable in a way that does not reduce the relative power of civil society."
"There are certain things that Facebook can do, in terms how much they invest in local resources in the country that they operate in," Waghre said.
He added that the challenges seen on Facebook were common on other social media platforms too. "The challenge is not just for one company, but it's about fixing these issues at a broader, societal level."
(Not convinced of a post or information you came across online and want it verified? Send us the details on WhatsApp at 9643651818, or e-mail it to us at webqoof@thequint.com and we'll fact-check it for you. You can also read all our fact-checked stories here.)
(At The Quint, we question everything. Play an active role in shaping our journalism by becoming a member today.)
Published: 27 Oct 2021, 09:00 AM IST