‘Deepfakes’ as a weapon of disinformation and propaganda in times of war

by time news

2023-10-25 20:00:02

Deepfakes (ultrafalsos, in Spanish) are extremely realistic manipulated videos created using deep learning artificial intelligence (AI) techniques. In them, people are shown saying and doing things that never happened in real life.

Researchers from University College Cork (UCC), in Ireland, have analyzed nearly 5,000 tweets posted during the current war between Russia and Ukraine. The objective, according to the authors, is to evaluate the impact of this content as an instrument of disinformation and propaganda in times of war. The results are published in the journal PLOS ONE.

‘Deepfakes’ allow people to take over other people’s identities. When this manipulation is applied to world leaders involved in conflicts, it can be especially damaging.

John Twomey, research leader (University College Cork)

To better understand the damage that deepfake videos can cause, the team analyzed Twitter discussions about deepfakes related to the Russian invasion. The researchers used a qualitative approach known as thematic analysis to identify and understand patterns in the discussions, which comprised a total of 5,000 tweets posted in the first seven months of 2022 on the network that Elon Musk has since renamed X.

As John Twomey, a researcher at the UCC School of Applied Psychology and leader of the study, tells SINC, “advances in AI allow this type of video to be produced more quickly than before. Deepfakes are especially harmful because they allow people to appropriate the identity of public figures.”

Of the 5,000 tweets extracted, the authors excluded spam content, so in the end they analyzed 1,231 tweets, which were subjected to a thematic examination

Additionally, Twomey notes that while this practice “can be used for entertainment, it is most commonly used for defamation and abuse. When these public figures are involved in a conflict, this can be especially damaging.”

Fake peace and surrender announcements

In the study, of the 5,000 tweets extracted, the authors excluded spam content, so in the end they analyzed 1,231 tweets, which were subjected to a thematic examination. “This involves qualitative coding of themes,” says the researcher. “What we do is manually tag the relevant points of each tweet; we put them together and organized them into common themes that highlighted relevant and critical points about the data set,” he explains.

Some tweets overlooked the possible harms of deepfakes or reacted positively to those directed against political rivals, especially deepfakes created as satire or entertainment.

The Russian-Ukrainian war has been the first real example of the use of deepfake videos in war conflicts

Twomey indicates that the Russian-Ukrainian war has been the first real example of the use of deepfake videos in armed conflict. Among other examples, he mentions the video that showed Putin announcing peace with Ukraine, and the hacking of a Ukrainian news website to display a deepfake surrender message from the Ukrainian president.

The study found tweets that warned of the need to prepare for greater use of this type of video, talked about how to detect them, or highlighted the role of the media and governments in refuting them.

Other tweets, however, suggested that deepfakes had undermined users’ trust to the point that they no longer trusted any recordings of the invasion.

Promotion of conspiracy theories

There were also tweets that linked this content to users’ apparent belief in conspiracy theories, such as the idea that deepfakes of world leaders were used as cover while the leaders were actually in hiding, or that the entire invasion was fake, anti-Russian propaganda.

The analysis indicates that efforts to educate the public about deepfakes can unintentionally undermine trust in real videos. The authors point out that their findings and future research could help mitigate the harmful effects of these ultra-manipulated videos.

Some users suggested that after seeing the ‘deepfakes’ of the Russia-Ukraine war, they no longer trusted any recordings of the invasion

Much of the previous research on deepfakes had focused on the technology’s possible future harms. “However, we have focused on how they are already affecting social networks, as we have seen during the Russian invasion of Ukraine,” say the authors.

The study highlights how these types of videos are undermining faith in real media and are being used to bolster conspiracy theories

With this work, “we wanted to see what misinformation about deepfakes looks like in practice and how it is already affecting online spaces. We found that it is difficult for people to maintain healthy skepticism: to be aware that these ultra-fake videos exist, but not to accuse real information of being a deepfake without reliable evidence,” concludes John Twomey.

Reference:

John Twomey et al. “Do Deepfake Videos Undermine our Epistemic Trust? A Thematic Analysis of Tweets that Discuss Deepfakes in the Russian Invasion of Ukraine”. PLOS ONE (October 2023).

Rights: Creative Commons.

