Twitter Inc moved swiftly to remove posts from Islamic extremists glorifying a truck attack in Nice, France, watchdog groups said Friday, in a rare round of praise for a platform that has often struggled to contain violent propaganda.
A spate of violence over the past several months has posed numerous challenges to social media companies.
US and French authorities on Friday were still trying to determine whether the Tunisian man who drove a truck into Bastille Day crowds on Thursday, killing 84 people, had ties to Islamic militants.
At least 50 Twitter accounts praising the attacks used the hashtag Nice in Arabic, according to the Counter Extremism Project, a private group that monitors and reports extremist content online. Many accounts appeared almost immediately after the attack and shared images praising the carnage, the group said.
The pattern was similar to what was seen on Twitter after attacks last year and earlier this year in Paris and Brussels. Twitter, which once took a purist approach to free speech but has since revised its rules, took action much more quickly this week.
Rabbi Abraham Cooper, head of the Simon Wiesenthal Center’s Digital Terrorism and Hate project, also said Twitter had responded with unusual alacrity.
Twitter did not provide any information about account suspensions, but said in a statement that it condemns terrorism and bans terrorist content on its site.
Twitter, Facebook Inc and other internet firms have ramped up their efforts over the past two years to quickly remove violent propaganda that violates their terms of service.
Both companies continue to face major challenges in distinguishing between graphic images that are shared to glorify or celebrate attacks and those shared by witnesses who are documenting events.
Facebook’s “community standards” dictate what types of content are and are not allowed on the platform. Those standards explicitly ban “terrorism” and related content, such as posts or images that celebrate attacks or promote violence.
Yet the company’s policies around graphic images are more nuanced. Facebook, like most large internet companies, relies on users and eagle-eyed advocacy groups to report objectionable content to teams of human editors, who then review each submission and decide whether a post should be deleted.
At Facebook, those reviewers receive more specific guidance beyond the public community standards when it comes to deciding what to do with reported graphic images, a spokeswoman said. But she declined to elaborate on the company’s criteria.
Facebook reiterated its stance in a blog post last week.
Internet companies have continually updated their terms of service over the past two years to establish clearer and in many cases stricter ground rules on what content is permissible on their platforms.
In response to pressure from US lawmakers and counter-extremism groups, Facebook and YouTube have recently moved toward automated processes to block or rapidly remove Islamic State videos and similar material.
That has not stopped Islamist militants from celebrating attacks online and even updating their tactics. Some Islamic State supporters used Twitter hashtags that were trending globally to celebrate the Nice attacks, such as #PrayForNice, #NiceAttack and #Nice, so that their tweets were shown to a wider audience, according to screenshots from the Wiesenthal Center.
(Article shortened for clarity.)