Artificial intelligence learns to lie

by Time News

2023-07-10 14:45:52

Research reveals that AI systems specialized in linguistic tasks, such as ChatGPT (Chat Generative Pre-trained Transformer), can generate fraudulent academic articles that appear to reflect genuine research when in fact they are hoaxes. This finding raises concerns that unscrupulous individuals could use artificial intelligence to mislead the scientific community with fabricated research.

The team led by Martin Májovský, from Charles University in Prague, Czech Republic, set out to investigate how capable current artificial intelligence systems specialized in human language are of creating academic articles that present, as real and in full detail, medical procedures that were never carried out. The team used the popular artificial intelligence chatbot ChatGPT (GPT-3), developed by OpenAI, to generate an academic paper presenting as authentic all the details of a neurosurgery study that was never conducted. As ChatGPT generated answers, the prompts were refined in light of each new response, which made it possible to improve the quality of the result step by step (that is, to make the lie more and more credible).

The results of this study are disturbing: the artificial intelligence system successfully produced a fraudulent scholarly paper that could pass for authentic in terms of word usage, sentence structure, and overall composition. The article included standard sections such as an abstract, introduction, methods, results, and discussion, as well as tables and other data. Surprisingly, the entire process of creating the article took only about an hour, and the human user needed no special training.

Illustration generated by another artificial intelligence, following the instructions of a human who requested that the image show a doctor next to an open Pandora’s box and that it look like an oil painting by the famous painter Henri Matisse. (Image generated by: DALL-E 2 / OpenAI, on March 9, 2023. Person who gave the instructions: Martin Májovský)

As artificial intelligence continues to advance, it is crucial for the scientific community not only to verify the accuracy of the content generated by these tools, but also to put in place mechanisms to detect and prevent deliberate fraud.

The study has been published in the Journal of Medical Internet Research. The reference is as follows: Májovský M, Černý M, Kasal M, Komarc M, Netuka D. Artificial Intelligence Can Generate Fraudulent but Authentic-Looking Scientific Medical Articles: Pandora’s Box Has Been Opened. J Med Internet Res 2023;25:e46924. DOI: 10.2196/46924. (Source: NCYT from Amazings)
