Google said it will begin offering media groups an artificial intelligence tool designed to stamp out incendiary comments on their websites.
The programming tool, called Perspective, aims to assist editors trying to moderate discussions by filtering out abusive "troll" comments, which Google says can stymie smart online discussions.
"Almost a third of people self-censor what they post online for fear of retribution," the company said in a blog post yesterday titled "When computers learn to swear".
Perspective is an application programming interface (API), a set of methods that lets separate software systems communicate, and it uses machine learning to rate how comments might be regarded by other users.
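In practice, a publisher's moderation software sends each comment to the API and receives back a score it can act on. The sketch below is illustrative only: the endpoint URL and the TOXICITY attribute come from Google's public Perspective documentation, while the helper names, the placeholder API key, and the 0.8 threshold are assumptions for the example.

```python
# Hedged sketch of the Perspective API's comment-scoring call.
# "YOUR_API_KEY" is a placeholder; the real endpoint requires a key
# issued through Google's developer console.
API_URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
           "comments:analyze?key=YOUR_API_KEY")


def build_request(comment_text):
    """Build the JSON body asking Perspective to score a comment's toxicity."""
    return {
        "comment": {"text": comment_text},
        "requestedAttributes": {"TOXICITY": {}},
    }


def is_probably_toxic(response, threshold=0.8):
    """The API returns a summary score between 0 and 1; a moderation tool
    might hold comments above some threshold for human review.
    The 0.8 cutoff here is an arbitrary example, not a recommended value."""
    score = response["attributeScores"]["TOXICITY"]["summaryScore"]["value"]
    return score >= threshold
```

A moderation pipeline would POST the body from `build_request` to `API_URL` and pass the JSON reply to `is_probably_toxic` to decide whether a comment needs human attention.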
Many news organisations have closed down their comments sections for lack of sufficient human resources to monitor the postings for abusive content.
Google has been testing the tool since September with the New York Times, which wanted to find a way to maintain a "civil and thoughtful" atmosphere in reader comment sections.
Twitter said earlier this month that it too would start rooting out hateful messages, which are often anonymous, by identifying the authors and prohibiting them from opening new accounts, or hiding them from internet searches.
Last year, Google, Twitter, Facebook and Microsoft signed a "code of good conduct" with the European Commission, pledging to review the majority of abusive content flagged by users within 24 hours.