FB Live Suicide in Gurugram: Why Zuckerberg Can’t Stop the Trend

Facebook says the onus is on us, the users, to identify violent content.  

Sushovan Sircar

A 28-year-old married man committed suicide by hanging himself from the ceiling fan in his house in Gurugram, live-streaming his final moments on Facebook, police said on Tuesday, 31 July.

Facebook has an impressive 250-million user base in India, the highest in the world. According to police reports, 2,300 of those users watched Amit Chauhan on Facebook Live on Monday evening. Earlier in July, 2,750 watched a 24-year-old man in Agra live-stream his own suicide. For a company that prides itself on sophisticated algorithms that can extract meaning from millions of data points in fractions of a second, it has struggled to find a workable solution to the problem of live-streamed suicides.

In an age when technology companies are betting on Artificial Intelligence (AI) for solutions, Facebook’s AI is not equipped to monitor live video or images, and hence cannot tell if a user is speaking about suicide in a live stream. Since November 2017, it has been able to scan for keywords using pattern recognition to identify suicidal content in posts and comments.

The social network has unveiled a host of tools to combat the problem. Live chat support from crisis support organisations through Messenger is one of them. In March 2017, Zuckerberg deployed 7,500 human moderators and integrated suicide prevention tools with Facebook Live and Messenger to help people in real time.

Tools available to a user once she has flagged a video with self-harm content. (Photo Courtesy: Facebook)

Who is to Blame – Facebook or Users?

The limited capability of its AI is a big reason for Facebook’s failure to keep such inappropriate content in check. Moreover, despite all the digital tools at its disposal, Facebook relies heavily on its users to manually flag content. It is only after a user notifies Facebook that its resources are deployed.

In doing so, Facebook has essentially outsourced the detection of suicide videos to the people using its platform, putting the onus of determining what content is inappropriate on users. What this means is that before others can be spared the horrific visuals of a live suicide, someone has to actually witness it.

In this case, someone among the thousands of people who witnessed Chauhan’s suicide had to notify Facebook. Once a viewer flags a video, not only does Facebook get notified, but the viewer also gets access to resources to reach out to the creator of the video.

It is important to note that there is no evidence to suggest that anyone among the 2,300-odd viewers flagged the video – an act that would have notified Facebook and also let the viewer reach out to Chauhan. Psychologist Smriti Joshi describes this behaviour as “bystander apathy” – a socio-psychological phenomenon in which individuals are less likely to offer help when others are present.

“Social media is building walls instead of bridges. It provides a false sense of belonging but the normalisation of violence in mainstream media has further fed this apathy,” said Joshi, who is the lead psychologist of a therapeutic AI bot called Wysa.

Another pertinent question is whether the viewers were aware of the tools at their disposal.

A Tragic Trend

As Facebook continues to expand its digital footprint across India, the platform’s weakness in curtailing inappropriate content is becoming evident. The act of publicly ending one’s life through live streaming is emerging as a horrific trend across India with at least four other such instances reported in 2018.

Arjun Bharadwaj, 24, had checked himself into an upscale hotel in Mumbai in April 2018. He started a live stream to talk about his suicide and jumped out of the window of his 19th-floor suite. On 30 June, Arindam Dutta, 43, a resident of Siliguri in North Bengal, hanged himself in a similarly public manner. Less than a month before Dutta took his life, another resident of Bengal, an 18-year-old woman, hanged herself and live-streamed the act.

Facebook Live, launched in April 2016, was a characteristically ambitious move by Zuckerberg. According to the social media giant’s own reports, the Live platform has seen over 3.5 billion broadcasts in two years, and the daily average number of broadcasts continues to double year on year.

With big numbers have come big controversies. Zuckerberg has been accused of facilitating fake news, data abuse, surveillance and hate speech. And now the California-based company is struggling with not just videos of suicides, but also instances of murder, bullying, child exploitation and sexual assault. In many cases, the videos were allowed to remain on the website for several hours before being taken down.

Facebook’s AI Paradox

So why hasn’t Facebook unleashed the might of its sophisticated AI in countering this problem?

The social network has decided not to use its algorithms to censor violent content before it is posted, so as not to be accused of violating users’ freedom of speech. On 27 November 2017, Zuckerberg announced upgrades to Facebook’s AI tools in a Facebook post. He said the AI uses pattern recognition to scan for phrases like “are you okay” or “do you need help”. This proactive detection tool flags such phrases to trained moderators, who can then reach out to the person.
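To make the mechanism concrete, here is a minimal illustrative sketch of keyword-based flagging in Python. It is not Facebook’s actual system, whose internals are not public; the phrase list, threshold and function name are all hypothetical.

```python
import re

# Hypothetical phrases of concern -- Facebook's real classifier and
# training data are not public.
CONCERN_PATTERNS = [
    re.compile(r"\bare you ok(ay)?\b", re.IGNORECASE),
    re.compile(r"\bdo you need help\b", re.IGNORECASE),
]

def flag_for_review(comments, threshold=2):
    """Route a post to human moderators if enough comments match
    a concern pattern -- a crude stand-in for pattern recognition."""
    hits = sum(
        1 for comment in comments
        if any(p.search(comment) for p in CONCERN_PATTERNS)
    )
    return hits >= threshold

# Two worried comments push this post over the threshold, so it
# would be queued for a trained moderator.
comments = ["are you okay??", "do you need help? call someone", "nice video"]
print(flag_for_review(comments))  # True
```

A production system would rely on trained classifiers rather than a fixed word list, which is partly why live video is so hard to police: there is no text to scan until viewers start commenting.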

However, the idea of Facebook’s AI combing through posts and messages in search of objectionable content raises serious surveillance concerns. Facebook has also allowed such videos to continue playing because Zuckerberg felt that cutting off the stream would prevent the person from receiving help.

(Photo Courtesy: Facebook/Mark Zuckerberg)

Is Facebook Legally Responsible?

The short answer is ‘no’. Many have asked if Facebook shares responsibility for the deaths and other violent content on its platform. While its policies have been problematic, the social network is not legally culpable in such instances. Its main immunity comes from Section 79 of the Information Technology Act, 2000, which shields intermediaries from liability for third-party content hosted on their platforms.

“Under Section 79, Facebook is an intermediary and not liable for the content hosted on its platform. Therefore, Facebook cannot be said to be causing the deaths,” said Raman Jit Singh Chima, Policy Director at Access Now. “If two people are speaking on the telephone and one of them issues a death threat, the phone company cannot be held responsible,” he added.

However, in a situation where Facebook has prior knowledge of an illegal act and fails to act, it can be held responsible. A lot of the content, although inappropriate, is not actually illegal.

Chima points to the larger issue of lack of transparency and accountability with regard to Facebook’s monitoring of content. “It is problematic to rely on technology-based solutions and allow privatised enforcement, which is controlled by a company and not by law,” he said. “Facebook functions with little transparency and we know very little about its algorithmic processes because of the absence of democratic accountability and judicial review process,” he added.

“We built this big technology platform so we can go and support whatever the most personal and emotional and raw and visceral ways people want to communicate are as time goes on,” Zuckerberg had said at the launch of Facebook Live.

He had, perhaps, not fathomed the real implications of this statement.


Published: 16 Jul 2018, 06:34 PM IST
