How many times have you clicked the ‘share’ icon on a post without thinking about what you are amplifying on social media? Not everything online is true. Yet, we often share content without judging its accuracy, unwittingly contributing to misinformation.
But what if there was a way to make us think before we hit ‘share’? Could it help tackle the menace of false information?
In this article, we will look at whether artificial intelligence (AI) can help in curbing the spread of misinformation on social media.
A study conducted by the Massachusetts Institute of Technology (MIT) and Google’s social technology incubator Jigsaw found that simple changes to the user interface of social media platforms can combat misinformation. The experiment, however, was limited to COVID-19-related content.
The user interface interventions involve introducing prompts via a pop-up window to make people pause, think and analyse the content they are about to share. With these prompts in place, people were 10 percent less likely to share misinformation.
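To make the mechanism concrete, here is a minimal sketch in Python of how such a ‘pause before you share’ prompt could be wired into a share action. It is an illustrative assumption, not the study’s or any platform’s actual code: a command-line stand-in for the pop-up shows the user an unrelated headline and asks them to judge its accuracy before the share goes through.

```python
# A minimal sketch of an accuracy prompt gating a share action.
# Hypothetical illustration only; not the study's or any platform's real code.

def accuracy_prompt(neutral_headline: str) -> None:
    """Show the nudge: ask the user to judge an unrelated headline's accuracy."""
    print("To the best of your knowledge, is this headline accurate?")
    print(f"  '{neutral_headline}'")
    input("Type y or n and press Enter (the answer itself is not checked): ")

def share_post(post_text: str, neutral_headline: str) -> bool:
    """Gate the share action behind the accuracy prompt."""
    accuracy_prompt(neutral_headline)  # the pause-and-think moment
    answer = input(f"Still share '{post_text}'? (y/n): ")
    return answer.strip().lower() == "y"

# Hypothetical usage:
# share_post("Miracle cure found!", "City council approves new bus routes")
```

The point of the sketch is only that the prompt sits between the user’s click and the actual share, which is where the study locates its effect.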
Does Accuracy Impact Our Sharing Behaviour on Social Media?
The study, which involved 9,070 American social media users, found that gender, race, partisanship, and concern about coronavirus did not moderate the effectiveness of the prompts, suggesting that they would be helpful across different demographic subgroups.
The study also showed that prompts were more effective for participants who were “more attentive, reflective, engaged with COVID-related news, concerned about accuracy, college-educated, and middle-aged”.
Participants were shown different combinations of headlines, images, and sources related to COVID-19 and were asked to judge their accuracy. True headlines were far more likely to be rated as accurate than false ones.
However, the results were strikingly different when a separate group was asked to share the same set of headlines.
The graph above shows a clear difference in how accurately people judged true and false headlines. But when it comes to sharing, veracity does not seem to have a significant impact: the same people who could tell true news from false chose to be less discerning when sharing content on social media.
It is this almost unconscious sharing behaviour that often plays into the hands of those spreading disinformation and fake news on social media. And this is where AI and prompts could help.
‘People Forget to Consider Accuracy’
Speaking to The Quint’s WebQoof team, David Rand, MIT professor and co-author of the study, said:
“The data suggests that when people do stop to think about whether the content is accurate or not, they typically choose not to share content that seems false; but often they forget to even consider whether it’s accurate or not, and instead think about other things (for example, whether it aligns with their politics).”
Impact of Prompts on Sharing Intentions
The study also assessed the impact of different kinds of accuracy prompt interventions on the sharing intentions for true and false headlines (an illustrative sketch of how a platform might serve these prompts follows the list):
Evaluation: Participants were asked to assess the accuracy of a single non-COVID-related headline, so that accuracy was on their minds when they went on to share content.
Importance: Participants were asked how important it is to them to share only news articles that are accurate.
Tips: People were provided with four simple digital literacy tips and were asked to: “Be skeptical of headlines. Investigate the source. Watch for unusual formatting. Check the evidence.”
Partisan Norms: Participants were told that both Republicans and Democrats were of the view that it was “very important” or “extremely important” to share only accurate news online.
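Here is a minimal, hypothetical sketch in Python of how a platform could randomly assign one of these four prompt types before showing a share dialog. The prompt texts are condensed from the descriptions above; this is not the study’s code, and the example headline is invented.

```python
import random

# Condensed, hypothetical versions of the four accuracy prompt treatments.
PROMPTS = {
    "evaluation": "To the best of your knowledge, is this headline accurate? "
                  "'City council approves new bus routes'",
    "importance": "How important is it to you to share only news articles that are accurate?",
    "tips": "Be skeptical of headlines. Investigate the source. "
            "Watch for unusual formatting. Check the evidence.",
    "partisan_norms": "Both Republicans and Democrats say it is very or extremely "
                      "important to share only accurate news online.",
}

def assign_prompt():
    """Randomly assign a user to one treatment, as in a simple A/B test."""
    condition = random.choice(list(PROMPTS))
    return condition, PROMPTS[condition]

condition, prompt_text = assign_prompt()
print(f"[{condition}] {prompt_text}")
```

Random assignment is what lets researchers compare sharing intentions across the four treatments against a no-prompt control group.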
'Read Before Sharing' Prompts on Twitter
Meanwhile, social media platform Twitter prompts users to read an article before they retweet it, which Rand thinks does, in some way, help people think about what they are going to share.
Another research paper co-authored by David Rand and Gordon Pennycook noted that the correlation between cognitive reflection and disbelief in false information is stronger when the content is more obviously implausible.
“This suggests that, in cases where people actually do stop and think, relevant prior knowledge is likely to be a critical factor. Indeed, political knowledge is positively associated with truth discernment for political news content, as is media literacy and general information literacy,” the paper stated.
This means that reasoning may not enhance accuracy in cases where the prior knowledge is heavily distorted.
How Helpful Are Prompts, Practically?
So, practically speaking, how helpful will user interface interventions such as accuracy prompts be in making people think about accuracy?
“This remains to be seen. Before we can know the answer to a question like this, platforms would need to do extensive tests to figure out what form of accuracy prompt interventions were most effective for their users, and how big those effects are. But based on the experiments that we conducted, there’s reason to expect accuracy prompts to be helpful,” said David Rand, MIT professor and co-author of the study.
Then, what about fact-checks published by independent fact-checking organisations? Are accuracy prompts more effective than those?
“It depends on what you mean by ‘more effective’. Fact-checks are quite effective at reducing sharing of headlines that are tagged with warnings. However, the problem with fact-checks from fact-checking organisations is that they are not scalable — it’s impossible for fact-checkers to keep up,” Rand pointed out.
Highlighting the benefit of the accuracy prompt approach, he said it helps reduce the sharing of false information by getting people to “think for a minute about whether the news is plausible, without needing specific articles to be identified by fact-checkers. So, I see the two approaches working together”.
Artificial Intelligence & Misinformation
In the case of rapid disinformation attacks, where the aim is to create an immediate, uncontrollable effect, attackers use artificial intelligence to create multiple social media accounts. And to avoid detection by any software, they often vary the content or wording of a post.
In some instances, authentic accounts engage with the fake ones by commenting on posts spreading disinformation, thereby unintentionally playing a role in promoting the propaganda.
A study published on The Conversation found that AI-generated false information is convincing enough to trick even cybersecurity experts, who are otherwise fully aware of the different kinds of such attacks and vulnerabilities.
An article published in November 2020 by The Brookings Institution pointed out that the difficulties caused by online disinformation created for short-term effect are not new.
“What has changed is the sophistication of the tools that can be used to launch disinformation campaigns and the reach of the platforms used for dissemination,” it added.
AI as a Measure to Detect False Information
While AI is being used to create disinformation campaigns, many are using the same technology to combat the menace of fake news.
In 2020, Google’s Jigsaw research team developed an experimental platform ‘Assembler’ that could help journalists and fact-checkers identify manipulated media.
Microsoft, too, developed a tool called ‘Microsoft Video Authenticator’ in an effort to detect deep fakes, which are images, videos or audio manipulated using AI.
Social media giant Facebook has been using AI to scale the work of human experts and deal with the misinformation problem.
The company has been adding warning labels to content rated by third-party fact-checkers, of which The Quint’s WebQoof team is a part. And to scale these labels, Facebook has developed AI technologies to “match near-duplications of known misinformation at scale”.
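The idea behind near-duplicate matching can be illustrated with a minimal Python sketch. This is a simplified, assumed stand-in, not Facebook’s actual system: new posts are compared against already fact-checked claims, and anything sufficiently similar inherits the existing warning label. Real systems rely on learned image and text representations rather than the simple word-overlap score used here, and the fact-checked claim in the example is hypothetical.

```python
# A simplified sketch of near-duplicate matching; not Facebook's actual system.
from typing import Dict, Optional

def word_overlap(a: str, b: str) -> float:
    """Jaccard similarity over words, in [0, 1]; real systems use learned embeddings."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def label_near_duplicate(post: str, fact_checked: Dict[str, str],
                         threshold: float = 0.6) -> Optional[str]:
    """Return the warning label of the closest known false claim, if close enough."""
    best_label, best_score = None, 0.0
    for claim, label in fact_checked.items():
        score = word_overlap(post, claim)
        if score > best_score:
            best_label, best_score = label, score
    return best_label if best_score >= threshold else None

# Hypothetical example: one already fact-checked claim and a reworded copy of it.
known = {"drinking hot water cures covid-19": "False: hot water does not cure COVID-19"}
print(label_near_duplicate("Doctors say drinking hot water cures COVID-19", known))
```

The benefit of this approach is scale: once one version of a claim has been fact-checked by humans, close variants can be labelled automatically without a fresh review.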
In a recent blog post, Facebook scientists said that they have developed artificial intelligence software that can help figure out how a piece of deep fake content was made and where it came from.
Challenges Posed by AI in Tackling Fake News
An article published by the MIT Computer Science & Artificial Intelligence Lab (CSAIL) in 2019 pointed to the biases inherent in AI.
“Our stereotypes, prejudices, and partialities are known to affect the information that our algorithms hinge on. A sample bias could ruin a self-driving car if there’s not enough night-time data, and a prejudice bias could unconsciously reflect personal stereotypes. If these predictive models learn based on the data they’re given, they’ll undoubtedly fail to understand what’s true or false,” the article added.
Further, the abundance of different kinds of misinformation and the various contexts in which it is shared online “makes it tricky for AI to operate on its own, absent human knowledge,” an article on MIT Technology Review noted.
(Not convinced of a post or information you came across online and want it verified? Send us the details on WhatsApp at 9643651818, or e-mail it to us at webqoof@thequint.com and we’ll fact-check it for you. You can also read all our fact-checked stories here.)
(At The Quint, we question everything. Play an active role in shaping our journalism by becoming a member today.)