The popularity of AI image generators creates new challenges in identifying “fake news”

by Time News

Former US President Donald Trump is mobbed and handcuffed by police officers on the streets of New York. Russian President Vladimir Putin kneels before Chinese President Xi Jinping. And Pope Francis is spotted walking around in a trendy coat. None of these events actually happened, although “photographic documentation” of them is circulating online. The explanation is the growing use, by purveyors of fakes of various kinds, of the image-generation capabilities of artificial intelligence (AI) programs such as DALL-E or Midjourney.

The user’s guide: how to identify AI-generated images

So how do you know that an image you came across was produced by software like DALL-E or Midjourney?

A guide published on the BBC website points to two specific elements that, at least for now, can expose an image’s origin fairly quickly: the fingers and the eyes. Counting the fingers of a figure in an artificially generated image will often reveal an extra finger, or a missing one. And the eyes? Note that they are not always looking where you would expect. In one of the images showing Trump fleeing from the police officers chasing him, for example, the officers’ gaze is not focused on the pursuit. A face that is blurred, or conversely hyper-realistic, can also betray that something is off.

Alex Mahadevan of the Poynter Institute offers some more general tips for dealing with a suspicious image: “When you see a picture on the internet, first of all ask yourself who is behind it. Was it distributed by a journalist or a reliable source? Second, is there a reference to its source alongside the picture, or some context that would allow it to be examined? And third, you can also check what other sources say about the picture.”

And what about the fact-checkers themselves? Mahadevan believes they must now “get to know in depth how these programs work. The better we know them, the better we can understand what is needed to identify content created by them. And we also need to prepare for major news events. Say, ahead of the indictment against Trump, we could have prepared for the many fakes that were going to circulate.”

For those who have not yet encountered them, these are programs that let users describe in plain language the image they wish to produce and receive, within seconds, several options matching the request. How good are the results? Good enough, arguably, to already fool a significant share of web users, who in any case rarely take a critical attitude toward the flood of information we are bombarded with. But could such programs, or more advanced versions of them, eventually turn the identification of fake news into a challenge that only experts can settle?
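To see just how simple this has become, here is a minimal sketch of such a text-to-image request in code. It assumes the OpenAI Python client and its DALL·E image endpoint; the model name and the prompt are illustrative choices, not details taken from the article:

```python
# A minimal text-to-image request, assuming the OpenAI Python SDK (v1+)
# and an OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

result = client.images.generate(
    model="dall-e-3",  # illustrative model choice
    prompt="A photorealistic photo of a pope in a stylish white puffer coat",
    n=1,               # dall-e-3 returns one image per call
    size="1024x1024",
)

print(result.data[0].url)  # URL of the generated image
```

A single short prompt is all it takes, with no graphics skills involved, which is exactly what makes the technology so accessible to purveyors of fakes.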

“The term deepfake is a blend of the terms deep learning and fake,” explains a study conducted by Dr. Liran Antebi, a senior researcher at the Institute for National Security Studies (INSS). “It describes a technological application, based on artificial intelligence, that makes it possible to alter or manipulate the content of photos or videos so that it is difficult, and sometimes impossible, to notice that it is a fake.”

This technology, the study explains, has positive uses, but it can also be misused for criminal purposes (impersonating the owner of a bank account, for example) and, of course, for political ones. “This poses real challenges to truth and democracy,” Antebi writes.


The Russian video didn’t work

And yet, in our conversation with her, Antebi explains that there is no reason to panic. Although creating deepfakes has become accessible and very cheap in recent years, she says, producing convincing fakes, ones with real potential to affect a country’s security, still requires significant resources.

“Producing a high-quality fake takes very large processing capabilities, and those usually belong to well-resourced actors.” For example, she says, to produce a fake video it is desirable that the person in the source footage, onto whom the fake is grafted, be as similar as possible to the person being impersonated. “You also need an extensive database of photos or videos of the person being faked, and all of these things require resources.”

And even that is not enough. Dr. Antebi adds that a fake also needs what is known as a “supportive environment”: “The Russians made a video of Zelensky calling on the Ukrainians to lay down their weapons. Why didn’t it work? Because the situation was not believable. The effect of a fake is greater when there is no time to start checking it.”

So producing a fake that can deceive even established institutions and media outlets over time will probably remain difficult in the near future. But what about a lower-grade fake? One that skeptical users and fact-checkers can debunk, but that in the meantime, as the cliché goes, travels halfway around the world before the truth gets its boots on? After all, even today, without any special technology, we repeatedly encounter unfounded stories of various kinds and degrees of credibility that spread widely among users and influence public discourse one way or another.

“I think we are already overwhelmed,” says Agam Rafaeli Farhadian, chair of the executive committee of the Public Knowledge Workshop. “Even before AI there were staged images. What AI does is make the ability to produce fake content accessible to more people. But the new technology is not the problem, and the existing problem is not new.”

Meaning?
“The boundary here is not between black and white but between shades of gray. Is a photo that was really taken, but staged, real? When I arrange my hair before a photo, am I making the photo fake? The question is whether the photo actually documents reality; that, after all, is why they say a picture is worth a thousand words.”

That is the situation today. What might happen in the future? Could it be that soon we will no longer be able to distinguish between a real image and an AI-generated one? That, for example, is what experts quoted in a BBC article on the subject warn (see box). What will happen then, and what can be done? “We have to learn not to believe every picture we see, and we have to be skeptical,” says Rafaeli Farhadian, “but that can be tiring and time-consuming. I don’t think it’s going to be easy.”

The answer: counter-technology

“The (state) systems also need to be prepared to deal with this,” says Dr. Antebi. “Today, if you went to the police and said that a deepfake that harmed you was being circulated, it is not certain they would know what to do with it.” In addition, she says, “many times the answer to technology is counter-technology”: for example, software into which we could feed a photo or video and receive an indication of whether it is authentic.
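What might such counter-technology look like from the user’s side? The sketch below is illustrative only: the `transformers` image-classification pipeline is a real API, but the model name `example-org/deepfake-detector` is a hypothetical placeholder, since the article names no specific tool:

```python
# Illustrative sketch of an authenticity check, assuming the Hugging Face
# `transformers` library. The model name below is hypothetical.
from transformers import pipeline

detector = pipeline(
    "image-classification",
    model="example-org/deepfake-detector",  # placeholder, not a real model
)

scores = detector("suspicious_photo.jpg")   # local path or URL of the image
# An image classifier of this kind returns labels with confidence scores,
# e.g. [{'label': 'fake', 'score': 0.93}, {'label': 'real', 'score': 0.07}]
print(scores)
```

The interface is the point here: feed in a photo, get back an indication, exactly the kind of tool Antebi describes.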

Alex Mahadevan, director of MediaWise, an arm of the Poynter Institute (which founded the International Fact-Checking Network), points to AI as a technology that can also work in fact-checkers’ favor, for example in monitoring problematic users. “I also know that in the US, journalists use ChatGPT to submit freedom-of-information requests. There is no doubt that AI can help make fact-checking more efficient.”
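As a concrete illustration of that last example, here is a minimal sketch of how an LLM might be used to draft such a request via the OpenAI chat API. The model name and the request details are assumptions made for the example, not a report of any newsroom’s actual setup:

```python
# Drafting a freedom-of-information request with an LLM, assuming the
# OpenAI Python SDK (v1+) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {
            "role": "system",
            "content": "You draft formal, precise freedom-of-information requests.",
        },
        {
            "role": "user",
            "content": "Draft a request to a city police department for all "
                       "records of complaints about officer conduct filed in 2023.",
        },
    ],
)

print(response.choices[0].message.content)
```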
