A few days ago, The Wall Street Journal published a report alleging that Facebook India’s public policy director had exhibited ‘favouritism’ towards the BJP on a number of occasions, including by personally opposing the application of Facebook’s hate speech rules to its leaders. Similar allegations of bias have been levelled against other platforms as well. In February 2019, for instance, Twitter CEO Jack Dorsey was summoned by the Parliamentary Committee on Information Technology following concerns that the platform was discriminating against right-wing accounts.
While it is most important to protect our fundamental freedom of speech and expression against any onslaught from the State, and even the judiciary, these recent incidents point to the urgent need to initiate a separate conversation on the merits and demerits of private regulation of online content by large platforms like Facebook and Twitter.
At present, online intermediaries, such as WhatsApp and Twitter, are exempt from liability arising out of user-generated content provided they exercise due diligence and take down content upon receiving ‘actual knowledge’ of its illegality. This framework, also known as the ‘safe harbour’ regime, has been an important factor behind the growth of the digital economy by ensuring that platforms do not have to invest in ‘pre-screening’ of user content. Yet, concerns around a variety of online harms such as fake news, hate speech, online harassment and obscenity, and the difficulties faced by law enforcement agencies in investigating and punishing cross-jurisdictional cyber crimes, have given rise to complex policy questions regarding what the responsibilities of online platforms should be in an age where the public discourse and news cycle are primarily driven by social media.
As a result, the Ministry of Electronics and Information Technology released the draft Information Technology [Intermediaries Guidelines (Amendment)] Rules, 2018 (Draft Rules) for public comments in December 2018. These Draft Rules seek to impose a range of obligations on intermediaries, including requirements for larger intermediaries to incorporate in India and establish a physical office here; to put in place mechanisms to trace originators of information; to proactively identify and remove illegal content; and to provide information and technical assistance to government agencies.
While many acknowledge the need to address online harms, the Draft Rules have been widely criticised by civil society and industry, particularly on the grounds that they could lead to overzealous implementation of the proactive identification and removal obligations, giving rise to unprecedented online censorship; compromise the privacy of users; and place disproportionate costs on businesses by imposing onerous obligations on all intermediaries irrespective of their size and scale. Amid the rising heat around this issue, Facebook recently announced the membership of its controversial Oversight Board, which will be responsible for deciding some of its most challenging content moderation cases.
The success or failure of Facebook’s new experiment remains to be seen.
Therefore, any proposed regulations in this space have to carefully balance the legitimate concern of not restricting freedom of speech and expression online with the need to ensure accountability from platforms for their private content moderation practices.
The current intermediary liability framework adopts a fairly broad definition of the term ‘intermediary’ and is applicable to all kinds of intermediaries irrespective of the nature of the intermediary or the service being provided by it. That is, it imposes the same duties on large social media and messaging platforms like Twitter and WhatsApp as on a local cyber cafe. However, regulatory attention has focused on certain types of platforms in the context of specific online harms. For example, our research indicates that most online harm cases before courts revolve around large user-facing social media, e-commerce and messaging platforms.
In addition, any attempt to impose new responsibilities on intermediaries should be based on the types of activities they perform and the risks and challenges arising from those activities.
Under the Information Technology Act, 2000, intermediaries are simply required to publish terms and conditions that inform users not to engage in illegal and harmful activities, and to notify users that violation of such terms may result in withdrawal of services. However, most major intermediaries publish detailed content moderation policies outlining the types of content that users are prohibited from posting on their platforms and the consequences of violations, and providing some mechanism for users to report content that violates these policies.
In a recent research paper, we highlight that these content moderation policies are often ambiguously framed.
In addition, the standards applied for blocking users or content are not always clearly mentioned even though most platforms allow an affected party to approach the platform for redressal. This lack of transparency can lead to censorship of legitimate speech or inconsistent application of content moderation policies by intermediaries.
Regulation should therefore focus on making these processes more transparent and accountable. For example, content moderation policies must provide a clear path to raise complaints, ensure appropriate timelines within which user grievances are responded to, provide reasoned decisions to users, and outline an accessible mechanism for appealing wrongful takedowns. Further, intermediaries must be required to publish detailed reports at periodic intervals disclosing the number and nature of accounts/posts taken down, to ensure greater transparency around their content moderation practices. These measures will ensure that important democratic rights such as free speech are protected on platforms that are playing an increasingly large role in our personal and political lives.
(Faiza Rahman is a Research Fellow in the technology policy team at the National Institute of Public Finance and Policy, New Delhi. Faiza has also recently co-authored, along with Varun Sen Bahl and Rishab Bailey, a paper titled “Internet intermediaries and online harms: Regulatory Responses in India”. She tweets @rahmanfaiza6. This is an opinion piece and the views expressed are the author’s own. The Quint neither endorses nor is responsible for them.)