Big Tech grilled by Congress, caught between fake news and free speech

by time news

The leaders of the major technology platforms have been called to defend their handling of online disinformation. Here is their side of the story, the problems at stake and the conflicts at the heart of the debate.

Big Tech is in the crosshairs of the U.S. Congress. A virtual hearing is under way in which the leaders of Twitter, Facebook and Google – Jack Dorsey, Mark Zuckerberg and Sundar Pichai, respectively – must explain and defend their approach to fighting online disinformation. The topic is heated because of the January 6 attack on Capitol Hill, fueled in large part by the deluge of fake news spread through these services, but the debate is long-standing, well documented and far from settled. In short: how can disinformation be curbed without infringing on freedom of speech?

The hearing starts from the premise, a correct one, that Big Tech is not doing enough to combat the spread of disinformation. Part of the problem lies in how the system was designed. “Technology platforms maximize their reach – and their advertising revenue – by using algorithms or other technologies to promote and suggest content that increases engagement,” writes Frank Pallone, chairman of the House committee conducting the hearing, in his opening statement. The most scandalous, provocative and extremist content is therefore promoted the most. And it can be “targeted” with remarkable precision at the users most susceptible to it, thanks to the profiling technologies on which this business model rests.

To be fair, the companies have not been idle. Certain types of content (such as violence or child sexual abuse) have never been tolerated, and the pandemic led them to remove Covid-19 misinformation more aggressively, broadening the scope of what counts as “harmful to public discourse” to include anti-vaccine activists and virus deniers. From there, banning QAnon posts was a short step. And after the Capitol Hill events, driven by that conspiracy ideology as well as by former president Donald Trump, the platforms took the exceptional step of blocking Trump himself.

The question is complex, and there is no unanimous solution given the different sensibilities involved. But there is relative consensus that Big Tech needs to change its approach. How? In America, the legal core of the matter is Section 230 of the Communications Decency Act, a 1996 law that shields “the provider of an interactive computer service” from liability for content posted by third parties. In essence, companies like Facebook bear no “editorial” responsibility. But many in Washington believe it is time to rethink the law.

The testimony released in recent days by Dorsey, Zuckerberg and Pichai outlines their line of defense. The Twitter founder believes there is a “trust deficit” toward tech platforms, to be bridged with more transparency and more user control over the algorithm. Dorsey is also experimenting with Birdwatch, a program that lets users flag potentially false tweets and add contextual information, and with Bluesky, an independent team tasked with creating open, decentralized standards for content. Of Section 230, however, he made no mention.

Facebook, for its part, is experimenting with a sort of supreme court of content, the Facebook Oversight Board, which will rule on Trump’s suspension within a month. In his note to Congress, Zuckerberg highlighted the platform’s recent efforts against disinformation and proposed a revision of Section 230 under which a digital platform’s legal protection would be contingent on its adoption of best practices for limiting the spread of fake news. In other words: instead of judging a company by the individual pieces of content that slip through its controls, regulators would judge the adequacy of its filtering system.

Pichai, like Zuckerberg, emphasized his company’s attention to authoritative sources during the pandemic. As for Section 230, he believes that proposals to change or repeal the law could backfire, harming both freedom of speech and each platform’s ability to fight disinformation. His solution, similar to Dorsey’s, is more transparency about how the algorithms work.

The three face a heterogeneous front of lawmakers, united only by the belief that something must change. Democrats tend to think the platforms should be held accountable by amending Section 230, forcing them to change their algorithms or spend more resources on moderation (all three companies use external moderators). Republicans, by contrast, fear that the adoption of shared guidelines could impinge on freedom of speech.

There is also plenty of friction between the two parties: conservatives believe the platforms side with Democrats and censor Republican voices more often (a theory the data does not support), while progressives want a more aggressive approach to violent and potentially harmful content – an approach that, according to Republicans, would curtail freedom of expression.

The repercussions of the hearing will cross the Atlantic. The Digital Services Act under study by the European Commission provides for steep penalties and external auditors for the platforms: experts able to study their mechanisms and shed light on the root of the disinformation problem, namely the algorithms. Brussels, in other words, has its antennas trained on Washington, waiting to see Big Tech’s next moves.
