
Will Twitter’s ‘Review Your Tweet’ Prompt Affect Users’ Behaviour?

Twitter is launching a new test that will enable its users to ‘rethink’ and ‘review’ before posting any replies.

Microblogging platform Twitter is launching a new test that prompts users to ‘rethink’ and ‘review’ their replies before posting them.

According to the company’s announcement, Twitter will detect potentially harmful or offensive replies to other people’s tweets and show a prompt that lets the author rethink and revise the response before it is posted.

If a user tries to post such a reply, a pop-up asking “Want to review this before tweeting?” will appear, offering three options: review and edit the tweet, delete it completely, or go ahead and tweet it regardless of the alert.
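Twitter has not published how the prompt is implemented, but the flow it describes reduces to a simple gate: score the draft reply with an offensive-language classifier and, above some threshold, surface the three choices. The Python below is a minimal sketch of that flow; the offensiveness_score stand-in, the PROMPT_THRESHOLD value, and the keyword check are all invented for illustration, not Twitter’s actual model.

```python
from enum import Enum

PROMPT_THRESHOLD = 0.8  # hypothetical cut-off, not a published Twitter value


class Choice(Enum):
    EDIT = "edit"      # review the reply and revise it
    DELETE = "delete"  # discard the reply entirely
    SEND = "send"      # post it unchanged despite the alert


def offensiveness_score(text: str) -> float:
    """Hypothetical stand-in for Twitter's language model: returns a
    score in [0, 1], higher meaning more likely offensive. A toy
    keyword check keeps the sketch self-contained."""
    insults = {"idiot", "moron", "trash"}
    words = {w.strip(".,!?").lower() for w in text.split()}
    return 1.0 if words & insults else 0.0


def submit_reply(text: str, ask_user) -> str | None:
    """Gate a draft reply behind a review prompt. `ask_user` presents
    the three options and returns a Choice; returns the final text,
    or None if the reply was deleted."""
    if offensiveness_score(text) < PROMPT_THRESHOLD:
        return text  # inoffensive enough: post without prompting
    choice = ask_user("Want to review this before tweeting?")
    if choice is Choice.DELETE:
        return None                      # delete the tweet completely
    if choice is Choice.EDIT:
        return input("Revised reply: ")  # review the tweet and edit it
    return text                          # tweet regardless of the alert


if __name__ == "__main__":
    # Simulate a user who deletes the reply when prompted.
    print(submit_reply("you absolute idiot", lambda _msg: Choice.DELETE))  # None
```

A real deployment would swap the keyword check for a trained toxicity model; the point of the sketch is the three-way control flow around the score.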

Twitter ran earlier versions of this test on Android devices in May and August 2020. However, the feature has not been launched broadly yet, as Twitter says it is still experimenting with it.


Impact on Users’ Behaviour

Twitter believes that prompting users to reassess their language can prevent many disputes. Sharing his thoughts on the move, internet researcher Rajshekhar Rajaharia told The Quint, “This update seeks to focus on improving the overall health of the platform, and free it from any sort of cyberbullying or harassment. Youngsters will benefit the most from the newly rolled-out beta test.”

Rajaharia believes the update will give users a moment to rethink their actions. “Millions of people use Twitter every day, and we see people, especially youngsters, who abuse others on the platform, which makes it an unsafe place. Moreover, it is impossible for Twitter to manually detect these hate mongers, so this update will only make the microblogging platform a better place,” he said.

Flagging Offensive Content

Twitter’s AI will automatically flag offensive language such as insults, strong or derogatory comments, and hateful remarks.

A Twitter spokesperson explained, “We made some changes to this reply prompt to improve how we evaluate potentially offensive language – like insults, strong language or hateful remarks – and we are offering more context for why someone may have been prompted.”

Twitter began testing the feature last year, when it also started factoring in the relationship between a tweet’s author and the people replying, to avoid prompting on replies that were jokes or banter between friends.

“We paused the experiment once we realised that the prompt was inconsistent and that we needed to be more mindful of how we prompted potentially harmful tweets. This led to more work being done around our health model while making it easier for people to share feedback if we get it wrong,” the spokesperson added.
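How Twitter weighs that author–replier relationship is not public. One plausible shape for the gating, building on the earlier sketch’s offensiveness score, is to raise the prompt threshold when the two accounts have a history of friendly interaction. The signals below (mutual_follow, prior_replies) and the weight are assumptions for illustration, not Twitter’s published logic.

```python
def should_prompt(score: float, mutual_follow: bool, prior_replies: int,
                  threshold: float = 0.8) -> bool:
    """Relationship-aware gating sketch: friends trading banter
    (mutual follows, a history of replies to each other) get a
    higher bar before the review prompt fires."""
    if mutual_follow and prior_replies >= 3:
        threshold += 0.15  # tolerate stronger language between friends
    return score >= min(threshold, 1.0)
```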

Research Shows Comments Can Be Misunderstood

A recently conducted study by Facebook found that “misrepresentation was the sole reason causing more angst and argument amongst its users”.

The study surveyed 16,000 Facebook users about their intentions in writing comments on other people’s posts, as well as their perceptions of comments that others had written.

Facebook explained that when a comment whose author intended to share a fact is misperceived as sharing an opinion, the subsequent conversation is more likely to derail into uncivil behaviour than when the comment is perceived as intended.

Facebook could use this analysis to build a system similar to Twitter’s, alerting people, based on language signals, to review comments they are about to make that might be misinterpreted.
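Neither the study nor the article describes an actual system, so the following is a purely hypothetical sketch of the idea: infer from surface language cues how a comment is likely to be read, then warn the author when that diverges from their declared intent. The marker lists and both functions are invented for illustration.

```python
# Illustrative only: compare declared intent (fact vs opinion) with the
# likely reading of the comment's language, and prompt on a mismatch.

OPINION_MARKERS = ("i think", "in my opinion", "honestly", "i feel")
FACT_MARKERS = ("according to", "studies show", "the data", "source:")


def likely_read_as(text: str) -> str:
    """Guess from surface cues whether readers will take a comment as
    stating a fact or voicing an opinion."""
    lowered = text.lower()
    if any(marker in lowered for marker in FACT_MARKERS):
        return "fact"
    if any(marker in lowered for marker in OPINION_MARKERS):
        return "opinion"
    return "opinion"  # bare assertions tend to read as opinion


def review_warning(text: str, intended: str) -> str | None:
    """Return a review prompt when intent and likely perception diverge."""
    perceived = likely_read_as(text)
    if perceived != intended:
        return (f"This may be read as {perceived} rather than {intended}. "
                "Want to review it before posting?")
    return None


# A comment meant as fact but phrased as a bare assertion gets flagged.
print(review_warning("Vaccines reduce transmission.", "fact"))
```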
