3 Facebook Memos Flagged Problem Content in India, Vice Prez Denied: Report

Two reports identifying hate speech and “problem content” were presented within Facebook between January and February 2019.


Facebook began testing the changes in three cities through a tool called 'Quick Promote'.

(File Photo: IANS)


Reports and memos exchanged internally within social media giant Facebook between 2018 and 2020 raise concerns about polarising content and misinformation circulating on the platform in India, The Indian Express reported.

Two reports identifying hate speech and “problem content” were presented within the company between January and February 2019.

A third report, presented in August 2020, admitted that the social media platform's AI tools were unable to filter objectionable content because they could not “identify vernacular languages,” as per the IE report.

One of the reports indicated that a test user's news feed had “become a near constant barrage of polarizing nationalistic content, misinformation, and violence and gore,” reported The Indian Express.

Chris Cox, formerly the vice president of Facebook, had, however, stated in a review meeting held in 2019 that he had found a “comparatively low prevalence of problem content (hate speech, etc)” on the website.

Moreover, a Meta spokesperson denied the allegation that Facebook's AI was unable to identify vernacular languages, as per the IE report.

The discrepancies between the reports of problematic content and the measures taken to deal with it have been revealed in disclosures made by former Facebook employee and whistleblower Frances Haugen.

(With inputs from The Indian Express)

