AIPasta: How AI-Powered Disinformation is Evolving Beyond CopyPasta
A new threat is emerging in the fight against online misinformation: AIPasta, a technique that leverages artificial intelligence to create and disseminate subtly varied versions of false narratives, potentially making them more persuasive and harder to detect.
The proliferation of disinformation is a growing concern, and researchers are now warning that the combination of generative AI and established tactics like CopyPasta – the repetitive posting of identical text – could significantly amplify the spread of false information. This new approach, dubbed AIPasta, presents unique challenges for social media platforms and the public alike.
The Rise of AIPasta: A More Sophisticated Approach
Traditional CopyPasta campaigns rely on the "illusory truth" effect, whereby repeated exposure to a statement, even a false one, increases its perceived credibility. AIPasta takes this concept a step further. As explained in a recent study published in PNAS Nexus, AI can be used to generate numerous slightly different versions of the same core message.
“This allows for the illusion of widespread support for a claim, as it appears to originate from many different individuals, rather than a single source,” explain the researchers, led by Saloni Dash. The study specifically examined the use of both CopyPasta and AIPasta to spread conspiracy theories surrounding the 2020 U.S. presidential election and the origins of the COVID-19 pandemic.
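The study used generative AI for the paraphrasing; the toy sketch below instead swaps interchangeable phrasings into a template, purely to illustrate the core idea of producing many near-duplicate messages that push the same claim while no two strings are identical. All fragments and function names here are hypothetical, not taken from the study.

```python
import itertools
import random

# Hypothetical message fragments that all push the same core (false) claim.
OPENERS = ["I keep seeing reports that", "People are saying that", "It's becoming clear that"]
CLAIMS = ["the results were tampered with", "the official story doesn't add up"]
CLOSERS = ["Wake up.", "Share this before it's deleted.", "Just saying."]

def generate_variants(n, seed=0):
    """Return n near-duplicate messages carrying the same core claim."""
    rng = random.Random(seed)
    combos = list(itertools.product(OPENERS, CLAIMS, CLOSERS))
    rng.shuffle(combos)
    return [" ".join(parts) for parts in combos[:n]]

variants = generate_variants(5)
# Every message makes the same assertion, but no two strings are identical,
# creating the appearance of many independent voices.
assert len(set(variants)) == len(variants)
```

An LLM can produce far more fluent and varied paraphrases than this template trick, which is exactly what makes the manufactured consensus harder to spot.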
Study Findings: Limited Impact, But Concerning Trends
Researchers conducted an online survey of 1,200 Americans recruited through Prolific in July 2025. The results indicated that neither CopyPasta nor AIPasta was successful in convincing participants to believe the specific conspiracy theories presented. However, a closer look at the data revealed a more nuanced picture.
Among Republican participants – those potentially predisposed to believe the studied conspiracies – exposure to AIPasta measurably increased belief in the false claims relative to exposure to CopyPasta. More significantly, across the entire participant pool, regardless of political affiliation, exposure to AIPasta – but not CopyPasta – increased the perception that a broad consensus existed around the validity of the claims.
This suggests that AIPasta’s strength lies not necessarily in converting skeptics, but in creating the impression of widespread agreement, potentially influencing those who are undecided or less informed.
The Difficulty of Detection
A particularly alarming finding of the study is that the AIPasta content it generated went undetected by current AI-text detectors. This poses a significant challenge for social media platforms: AIPasta-driven disinformation campaigns will likely be far harder to identify and remove than traditional CopyPasta, whose reliance on identical text makes it easy to flag.
“The inability of current detection tools to identify AIPasta suggests it will be more effective than CopyPasta in spreading disinformation,” the researchers noted. This increased effectiveness stems from its ability to evade detection and maintain a veneer of authenticity.
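To see why identical text is easy to flag while paraphrased variants are not, consider a minimal sketch of exact-match duplicate detection, the kind of check that catches CopyPasta. This is an illustrative toy, not any platform's actual moderation pipeline; the function name and threshold are assumptions.

```python
import hashlib
from collections import Counter

def flag_copypasta(posts, threshold=3):
    """Flag posts whose exact text appears at least `threshold` times.

    Hashing each post catches verbatim repetition (CopyPasta) but is
    blind to paraphrased variants of the same claim (AIPasta)."""
    digest = lambda p: hashlib.sha256(p.encode()).hexdigest()
    counts = Counter(digest(p) for p in posts)
    return [p for p in posts if counts[digest(p)] >= threshold]

copypasta = ["The claim is true."] * 5
aipasta = [
    "The claim is true.",
    "This claim is actually true.",
    "It's true, believe me.",
    "That claim? Completely true.",
    "The claim holds up, it's true.",
]

assert len(flag_copypasta(copypasta)) == 5  # identical text: all flagged
assert flag_copypasta(aipasta) == []        # varied text slips past the filter
```

Defeating AIPasta would instead require semantic similarity or AI-text detection, and the study found that current AI-text detectors failed to identify the paraphrased content.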
Implications for the Future of Online Information
The emergence of AIPasta underscores the evolving sophistication of online disinformation tactics. As generative AI becomes more accessible and powerful, the potential for malicious actors to exploit these technologies will only increase.
The study highlights the urgent need for continued research into detection methods and strategies to counter the persuasive potential of AI-paraphrased information at scale. It also emphasizes the importance of media literacy and critical thinking skills for individuals navigating the increasingly complex information landscape.
Further information about the study can be found in the full report: Saloni Dash et al, The persuasive potential of AI-paraphrased information at scale, PNAS Nexus (2025).
