(Note 1: On 18 October, The Wire restricted three of its exclusive reports on Meta's XCheck programme, as well as a preceding report on the takedown of a satirical Instagram post, from public view. This was done pending the outcome of an internal review.)
(Note 2: On 23 October, The Wire retracted these reports on the grounds of discrepancies in the material used. The review is still ongoing, it added.)
Whether or not an Instagram post that you’ve reported gets taken down may depend on whether you count as a high-profile user under Meta’s ‘XCheck’ or 'cross check' programme.
According to a purported internal company report accessed by The Wire, Instagram took down several posts reported by the ruling BJP’s IT cell head Amit Malviya – without verification.
Generally, Instagram relies on AI-driven technology and human content moderators to find, review, and take action against a post that has been flagged for violating its Community Guidelines.
With one of Meta's closely guarded secrets back in the news, here's what we already knew about XCheck, what The Wire's report claims about it, what is yet to be revealed, and the questions Meta should answer regardless of the controversy.
While the system known as 'cross check' was first rolled out by Facebook in 2013, the code name became public only in 2018.
In its response to a report by UK’s Channel 4 News, Facebook had attempted to set the record straight on 'cross check'. The description of 'cross check' by Facebook in its response suggested it was meant to serve as “a second layer of review” of posts by prominent users, including news organisations, that had been flagged for violating Community Guidelines.
But that wasn’t the version of XCheck or 'cross check' revealed by The Wall Street Journal's report on internal company documents that were leaked by whistleblower Frances Haugen. In practice, taking down the posts of an XCheck user at times required the green light from Mark Zuckerberg, then chief operating officer Sheryl Sandberg, or senior executives in the company's communications and public policy departments.
The revelations by Haugen caught the attention of the Facebook Oversight Board – comprising independent members for reviewing policies and content decisions – which had reportedly been misled by the tech giant on the number of decisions that were impacted by XCheck. After Facebook was requested to appear before the Board, it sought the Board’s guidance on how to improve the system. The Board’s recommendations are still awaited.
After being called out for its lackadaisical response on XCheck by its own Oversight Board, Meta published a blog post that revealed the following:
The XCheck system of review is applicable to certain content that is either flagged by Meta's automated systems or human reviewers. This would mean that posts reported by users or entities would not be considered for additional review under the XCheck system.
It's not just the content posted by XCheck entities that would be subject to additional review. Enforcement actions against XCheck pages or user profiles could also be revisited.
Meta's business partners as well as pages, profiles, and groups with a large following could see their content reviewed more times than in the case of a normal user, in order to prevent incorrect removal of content.
According to Meta, the review process is of two types:
General secondary review: This is a review of content from all users and entities that has been flagged by the 'cross check ranker' – a dynamic prioritisation system developed by the company to monitor and rank content based on its predicted reach or whether it is trending, among other factors.
Once this content has been sent for general secondary review, third-party reviewers and staff from Meta's marketing department decide whether it should stay up, be taken down, or face other enforcement action.
Early response (ER) secondary review: The content that comes under this type of review is from "lists of users and entities whose enforcements receive additional cross-check review if flagged as potentially violating the Community Standards," Meta said in its post, hinting at the continuance of whitelisting practices.
Meta's markets team and then an "early response team" takes a look at the content that comes up for ER review to confirm if it violates the platform's policies and guidelines.
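The two review paths Meta describes can be modelled as a simple routing decision: content flagged by automated systems or human reviewers either goes to ER review (if the author is on a cross-check list) or to general secondary review (if the 'cross check ranker' prioritises it). The sketch below is purely illustrative and is not Meta's code; every function name, field, threshold, and weight in it is an assumption made for the sake of the example.

```python
# Illustrative sketch only, NOT Meta's implementation. The routing logic,
# field names, and the 0.5 threshold are assumptions; only the two review
# paths and the ranker's stated factors (predicted reach, trending status)
# come from Meta's own description.

def cross_check_rank(post):
    """Toy stand-in for the 'cross check ranker': score content by
    predicted reach and trending status, the factors Meta says it uses."""
    score = min(post["predicted_reach"] / 1_000_000, 1.0)
    if post["is_trending"]:
        score += 0.3
    return min(score, 1.0)

def route_flagged_content(post, er_list):
    """Decide which secondary review path a flagged post takes."""
    if post["author"] in er_list:
        # Early response (ER) secondary review: listed users and entities
        return "early_response_review"
    if cross_check_rank(post) > 0.5:
        # General secondary review: ranker-prioritised content from anyone
        return "general_secondary_review"
    # Otherwise the first-line enforcement decision stands
    return "standard_enforcement"
```

Note that in this model, content reported by an ordinary user never enters either path, which matches Meta's claim that XCheck only applies to content surfaced by its automated systems or human reviewers.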
Of the seven posts by the satirical anonymous account ‘@cringearchivist’ that were taken down, two were supposedly reported by Malviya and instantly removed for violating the platform’s guidelines on nudity and sexual activity content. The action was reportedly taken despite the posts not clearly featuring any sort of nudity. The remaining posts, also flagged by Malviya, were immediately removed for showcasing ‘extreme graphic violence.’
Furthermore, The Wire cited a source within Meta as saying that a total of 705 Instagram posts were actioned in September based on Malviya’s say-so.
Sweeping privileges: The Wire's report alleged that the privileges of being an XCheck user don’t stop at avoiding content moderation reviews. They could also include the ability to report posts for immediate takedown, bypassing Instagram’s oversight mechanisms entirely.
However, Meta has denied that this is the case. In a reply to a tweet, Meta’s policy communications director Andy Stone said, “X-check has nothing to do with the ability to report posts.”
“The posts in question were surfaced for review by automated systems, not humans. And the underlying documentation appears to be fabricated,” he added.
On 12 October, in its first official company response, Meta said that the allegations in The Wire report were false. "Our cross-check program does not grant enrolled accounts the power to automatically have content removed from our platform," it reiterated.
While Meta alleged that the reports contain "mischaracterizations" of its enforcement process, it did not reveal new information on how this process worked, especially on the removal of @cringearchivist's posts.
XCheck designed to avoid bad press: The Wire's report alleged that the XCheck system was set in place to avoid getting bad press if the platform took action against content posted by celebrity users. To be clear, earlier reports suggested that XCheck was intended as a cross-verifying mechanism but ended up providing content rule exceptions to whitelisted users.
XCheck is still active: While Stone had reportedly said that the practice of whitelisting users would be phased out, The Wire’s report suggested that XCheck privileges are being taken into account when a post is flagged by such a user.
XCheck system extends to India: The report has alleged that Amit Malviya being offered preferential treatment under XCheck is the first such case to come out of not just India but entire South Asia. Earlier, WSJ’s investigative report had revealed that 5.8 million users across the world were a part of XCheck.
1. Can Amit Malviya be a part of the XCheck list, as per Meta's criteria?
Though we can't independently verify if Amit Malviya is part of the XCheck list, it must be examined whether he meets any criteria laid down by Meta.
For the record, Malviya has fewer than 5,000 followers on Instagram.
2. Who can add names of users or entities to the XCheck list?
To this point, Meta has stated that "while any employee can request that a user or entity be added to cross-check lists, only a designated group of employees have the authority to make additions to the list." But, there's still a lack of clarity about the composition of the "designated group of employees."
3. How did the removed post on Yogi Adityanath's idol violate Instagram's guidelines?
One of the seven posts by '@cringearchivist' that was taken down was reportedly an Instagram story showing a man in front of an idol of Uttar Pradesh Chief Minister Yogi Adityanath in a temple. With both the idol and the man being fully clothed, why was the post taken down for violating Instagram's guidelines on content related to nudity and sexual activity?
It can be recalled that as per Meta, posts for XCheck review are first picked up by either its automated systems or human reviewers. If that is the case and the post was erroneously removed, why hasn't it been reinstated yet? Isn't the post's removal a case of over-enforcement? If so, why didn't XCheck kick in here?
4. Shouldn't data on XCheck reviews be disclosed in Meta's compliance reports mandated by the IT Rules, 2021?
Rule 4(1)(d) of the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, requires significant social media intermediaries like Instagram and Facebook to publish monthly compliance reports containing details of user-related takedowns.
5. If XCheck means additional layers of content review, how can a particular XCheck page, profile, or entity be "exempted" from enforcement action altogether, as stated in a previous whistleblower complaint to the US SEC, rather than having enforcement action overturned (or upheld) at the XCheck review stage? Does this mean exemptions are pre-determined?
(Update, 13 October: This report was updated with Meta's official response.)
(Update, 19 October: This report was updated in light of The Wire's statement on suspending and subjecting its exclusive XCheck coverage to an internal review.)
(Update, 23 October: This report was updated after The Wire released a statement retracting its Meta reports.)
(At The Quint, we question everything. Play an active role in shaping our journalism by becoming a member today.)