On Facebook and Instagram, “personalities” get special treatment

by time news

Is the Facebook universe unequal? On Tuesday, the Oversight Board of Meta (the group’s new name) criticized the social media giant’s platforms for giving preferential treatment to problematic content posted by politicians, executives, celebrities and other public figures. This is not the first time it has pointed out such shortcomings. “The board is concerned about the way Meta has put its economic interests ahead of content moderation,” said the body, which is described as independent but is funded by the company.

In its report, it calls for a “significant overhaul” of the double-review program known as “cross-check” to make it more transparent, more responsive and fairer.

Currently, when posts or images that potentially violate Facebook or Instagram policies are flagged, they are promptly removed if they are deemed high risk and come from ordinary users. But if their author is on the “whitelist”, the content remains online while it is examined more closely, a process that usually takes several days and sometimes several months.

Remove risky content more quickly

This two-tier system, which the report calls “unfair”, has “afforded additional protection to the expression of certain users, selected in part on the basis of Meta’s economic interests”, the report details. Because of “cross-check”, “content identified as violating Meta’s rules remains visible on Facebook and Instagram while it spreads virally and could cause harm”, the board admonishes.

It recommends speeding up secondary reviews of content from public figures who may be posting important human rights messages, and removing high-risk content while the internal verdict is pending. It also asks the company to publish the eligibility criteria for the program and to publicly identify, on the platforms, the accounts of the users concerned.

The body is made up of 20 international members: journalists, lawyers, human rights defenders and former political leaders. It was created in 2020 at the suggestion of CEO Mark Zuckerberg and is responsible for evaluating the Californian group’s content moderation policy.

Members launched a review of “cross-check” in October 2021, following revelations from whistleblower Frances Haugen, who leaked internal documents to the press and accused the company of putting “profits before the safety” of its users. Meta’s responses to the review were “insufficient at times”, and “its own understanding of the practical implications of the program leaves something to be desired”, the board said.
