The great challenge of Elon Musk’s Twitter

by time news

The first time that Marcelino Madrigal, a cybersecurity expert, had his Twitter account closed, back in 2013, it was because he had reported "more than 10,000 accounts related to child pornography in a year," he recalls. It had already happened to him before on other networks. "You told them what was happening and they kicked you out; that way the problem goes away. I have saved all the profiles that were suspended so people can see that I was not breaking any rules, I was simply reporting what was happening. It was a huge problem." Years later, he stresses, Twitter itself has incorporated the fight against this scourge into its transparency reports. In the second half of 2021 alone, almost 600,000 accounts related to this crime were suspended. According to the platform itself, this is one of the most frequent reasons for acting against profiles, along with abuse, harassment and incitement to hatred.

In fact, this internet Quixote saw his account suspended again in 2019, when he denounced that the reporting system was vulnerable, since accounts could be reported while impersonating or hiding the identity of the complainant. He had found one of the platform's great Achilles' heels and one of its most debated aspects: its moderation system and how it affects the difficult balance between truthfulness, the right to honor and freedom of expression.

Elon Musk himself has put the debate back on the table by announcing, after buying Twitter for no less than 44 billion dollars, that he had "freed the bird." "The company has seen a massive drop in revenue due to activist groups putting pressure on advertisers, despite the fact that nothing has changed with content moderation and despite the fact that we have done everything we can to appease the activists. They are trying to destroy freedom of expression," the billionaire tweeted yesterday. On the other hand, the owner of Tesla and SpaceX has tried to reassure advertisers by saying that Twitter is not going to be "a hell where you can say anything without consequences." And with an election just around the corner, he has insisted on the message from his head of security, who states that 80% of the moderation team's work has not been affected by the thousands of layoffs carried out on Friday.

“Friendly Humans”

Officially, on the controversial subject of moderation, Twitter simply points out that it has a support team made up of "likeable human beings from all over the world." They are the ones in charge of reviewing the complaints filed by other users. "We also use technology that helps us proactively detect and flag tweets that break our rules, before you need to report them." So far, the app has not revealed how these "nice" humans actually decide what punishment an alleged offender deserves. It is not even entirely clear that they are the main actors in the moderation process. "What we believe is that the algorithm is the first filter for inappropriate content. Then, theoretically, it is reviewed by people. There are many cases in which the machines have been shown to make big mistakes and delete publications that did no harm," explains Laura Cuesta, professor of Cybercommunication and New Media at the Camilo José Cela University.

3,700 employees

That is the number of workers Musk plans to lay off, roughly half the workforce. Only 15% would belong to moderation and security.

Cuesta also points to cases in which Twitter takes action against harmless content "in which the user simply uses irony, but which ends up being interpreted as incitement to hatred or violence." Indeed, last February the little bird's social network had to back down after its algorithm began suspending accounts that were reporting on the war in Ukraine.

The problem is that moderation tools, designed to serve the common good, have in practice become a "weapon" in the ideological struggle of some groups against others, says Madrigal. "The system has to have transparent rules and algorithms and a decent appeal system. Until now this has worked poorly; it has been slow and arbitrary," summarizes Mariluz Congosto, a professor at Carlos III University, who regrets how Twitter has evolved: in 2011 it was the network of social protest, and today it is fertile ground for "disinformation and intoxication" campaigns.

Boycott the algorithm

Monsieur de Sans-Foy, a "conservative and not very docile" profile, knows this well; after ten years on Twitter he claims he was the target of one of the "packs" that engage in "organized ideological persecution." "I don't insult, irony is my thing and I'm not a hostile profile, so I don't think they could have confused me with a troll," he laments. But little by little, complaints and small sanctions of a few days began to arrive. For one tweet in which he sarcastically replied something along the lines of "take bromide," he was accused of "inciting suicide." When a user reproached him for using a pseudonym and he replied that instead of his face he could "show him his ass," he was accused of "harassment or abuse." Eventually he was permanently expelled, and he decided he would not return. "The little bird in Spain is not neutral," he says indignantly.

"In Spain there is a group of users who organized themselves in a Telegram channel called RedBirds, very ideological, which at one point had Stalin's face as its avatar, which singled out people it considered should not have a voice on social networks and reported them en masse," summarizes Guadalupe Sánchez, a lawyer and Twitter user, who also had her account, unverified but with more than 64,000 followers, withdrawn over two tweets: one that contained the word "panties" and another that sent some pizzas "to the stake."

"Twitter cannot censor under the guise of creating safe spaces. The limits of freedom of expression are set by law."

Instead of deleting the 'forbidden' tweets, she sent a burofax to the company denouncing that she was being subjected to a campaign of harassment, and the company restored her account, acknowledging that there had been "an error." She is now preparing a complaint against RedBirds in which she has also involved Twitter. "We understand that the behavior of this group constitutes harassment and that it has been committed using the instruments provided by Twitter, with which we have had formal communications on several occasions and which has not taken any action," says Sánchez, who considers the debate on freedom of expression on social networks "worrisome": "A company cannot censor under the pretext of creating safe spaces. The limits of freedom of expression are set by law and interpreted by judges."

Congosto, who has been analyzing Twitter since 2009, maintains that until 2017 it was "more tilted to the left," because people associated with that ideological current were more active, "not so much because there was a bias as such." But in 2018, she says, the balance began to even out with the appearance of right-wing accounts that started to operate in a very similar way. Now, the extremes are becoming more and more alike.

Beyond the ideological battles, what is documented is that the platforms, in general, have been more permissive with certain users. Exactly one year ago, 'The Wall Street Journal' published internal Facebook documents showing that Mark Zuckerberg's social network acted more leniently toward infractions committed by its most influential users. Likewise, an investigation by 'Forbes' pointed out that TikTok was more permissive with accounts that have at least five million followers. The app later denied it.

To the letter

In any case, for a private company the first criterion is always economic and the second legal; or, put another way, avoiding penalties. "And finally there is freedom of expression, because if you get it wrong you can lose reputation," says Ofelia Tejerina, lawyer and president of the Association of Internet Users of Spain. The jurist points out, however, that she has doubts about how the social network's technology works and about the biases of the human team in charge of making decisions. "Who is programming the algorithm to make decisions, and what legal criteria do they use to delete content and accounts?" she asks.

Tejerina is also skeptical of the changes promised by Musk, a self-proclaimed "free speech absolutist." A few days ago, the new owner and chief executive of Twitter stated that the app is working on the creation of a new moderation council "with diverse points of view," intended to advise the social network before it makes any decision related to content standards or the reinstatement of accounts that have violated the platform's rules. A set of rules that will probably be altered.

According to the 'Financial Times', the executive has reportedly told advertisers that he wants to establish several levels of moderation, something similar to the age ratings of video games and movies. In this way, users could choose not to see content that might be more offensive. This change is precisely in line with what the digital lawyer and former president of Red.es, Borja Adsuara, proposes. "Just as there must be freedom of expression, the user must have the right to choose to interact exclusively with those users who are moderate and do not insult. Social networks should incorporate this possibility, while limiting themselves to the removal of illegal content," explains Adsuara.

Twitter headquarters in New York. EFE

Meanwhile, the EU has been working for some time on a new regulation, the Digital Services Act, which will force large platforms to answer legally for the content their users post. With the new rules, which are expected to come into force next year, social networks and internet platforms will have less discretion when deciding whether to remove or keep online content.

For sites like Twitter to be able to fulfill their function and moderate content without limiting freedom of expression, the former president of Red.es argues that, in the case of the EU, the ideal would be to create "an independent body made up of professionals in information and freedom of expression, sitting between the judges, who cannot handle the day-to-day, and the social networks, which can": "The objective would be to unify criteria and make them the same for all social networks, in accordance with the law. That way, precedents would be set. The platforms would know what they have to remove and the users what they can post."
