The Challenge of Distinguishing Real from AI-Generated Images

by time news

2024-03-11 06:01:58

A University of Waterloo study found that people struggle to differentiate between real and AI-generated images, achieving only 61 percent accuracy, raising concerns about the reliability of visual information and the need for tools that recognize AI-generated content.

Research shows that survey participants were misled by AI-generated images nearly 40 percent of the time.

If you’ve recently had trouble figuring out whether a photo of a person is real or created using artificial intelligence (AI), you’re not alone.

A new study by University of Waterloo researchers found that people had more difficulty than expected distinguishing real people from artificially generated ones.

In the Waterloo study, 260 participants were shown 20 unlabeled images: 10 of real people obtained from Google searches, and 10 generated by Stable Diffusion or DALL-E, two common AI image-generation programs.

Participants were asked to label each image as real or AI-generated and explain why they made their decision. Only 61 percent of participants could tell the difference between AI-generated people and real people, well below the 85 percent threshold the researchers expected.

Three of the AI-generated images used in the study. Credit: University of Waterloo

Deceptive indicators and rapid AI development

“People are not as skilled at making the distinction as they think they are,” said Andreea Pocol, a computer science doctoral candidate at the University of Waterloo and lead author of the study.

Participants looked to details such as fingers, teeth, and eyes as possible indicators when searching for AI-generated content, but their assessments weren’t always correct.

Pocol noted that the nature of the study allowed participants to examine the images at length, whereas most Internet users look at images in passing.

“People who are just scrolling or don’t have time won’t pick up on these cues,” Pocol said.

Pocol added that the extremely rapid pace at which AI technology is developing makes it particularly difficult to anticipate the potential for malicious use of AI-generated images. Academic research and legislation often fail to keep up: AI-generated images have become even more realistic since the research began in late 2022.

The threat of disinformation created by artificial intelligence

These AI-generated images are particularly threatening as a political and cultural tool, which could see any user creating fake images of public figures in embarrassing or compromising situations.

“Disinformation is not new, but the tools of disinformation have constantly changed and evolved,” said Pocol. “It could get to a point where people, no matter how trained they are, will still struggle to distinguish real images from fakes. So we need to develop tools to identify and deal with it. It’s like a new AI arms race.”

The study, “Seeing Is No Longer Believing: A Survey of the State of Deepfakes, AI-Generated Humans, and Other Unreal Media,” was published in Advances in Computer Graphics.

