ChatGPT is capable of deceiving reviewers of scientific articles

by time news

ChatGPT is an advanced artificial intelligence system accessible for free. A group of researchers provided it with fifty titles of scientific articles and asked it to prepare abstracts for those articles. Are these “synthetic” abstracts distinguishable from the originals? The answer is yes… in part.

Let’s start with this question:

Is it possible to distinguish abstracts generated by artificial intelligence from the title of an article from the original abstracts written by the authors of the article?

It may be difficult to distinguish between scientific abstracts generated by artificial intelligence and those written by researchers based only on the title of the article. However, there are some characteristics that may indicate that an abstract has been generated by AI. For example, AI-generated abstracts may be shorter and less detailed than those written by human researchers. They may also contain less study-specific information or less detail about the results. It is important to note that as language generation technology advances, it is increasingly difficult to distinguish between abstracts generated by AI and those written by human researchers.

I don’t know if you will agree with this answer. But the striking thing about it is that it was not written by me, nor by any other human being. It was written by ChatGPT, a conversational artificial intelligence (AI) system developed by OpenAI, the company founded in San Francisco in 2015 by a group of entrepreneurs including (of course) Elon Musk. In 2020 OpenAI developed an API (application programming interface) that was opened to the public, on a restricted basis, the following year; this API gave developers access to OpenAI’s language models.

On November 30, 2022, OpenAI launched ChatGPT, a chatbot: a conversational interface to which you can pose all kinds of questions, problems, and tasks, and which responds in natural language. The result has been spectacular. Within its first five days, ChatGPT already had one million users. OpenAI, founded as a non-profit, is receiving huge amounts of money from large investors, including Microsoft. This is not surprising: at the moment its use is free, but soon its services will have to be paid for. And since all of this is moving faster and faster, the upcoming release of GPT-4 has already been announced, a new version that will be able to write texts of all kinds, produce summaries, and answer complex questions with much greater precision (it is said that it will be 500 times more powerful than ChatGPT).

The appearance of ChatGPT has provoked many reactions, and its major repercussions for education are being pointed out these days. ChatGPT is already being used to do assignments and write texts without the slightest effort on the part of students. The New York City Department of Education has just banned access to ChatGPT on school networks. But here we will focus on its impact on scientific publishing.

An article that has just appeared as a preprint gives an idea of the power of this new system and of the problems it can pose. A group of medical researchers from the University of Chicago and Northwestern University selected ten abstracts from each of five high-impact scientific journals, fifty scientific articles in total. The fifty article titles and journal names were supplied to ChatGPT, and the chatbot was asked to prepare abstracts based on that scant information alone. The fifty “synthetic” abstracts and the fifty originals were then submitted to an AI-generated-text detector (itself, of course, based on AI), to a plagiarism detector, and to human reviewers (co-authors of the article) who were unaware of the origin of each abstract.
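
To make the setup concrete, here is a minimal sketch of how one might ask a model for an abstract given nothing but a title and a journal name, using the OpenAI Python client. The model name, prompt wording, and parameters are assumptions for illustration; the preprint’s exact code is not reproduced here.

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def draft_abstract(title: str, journal: str) -> str:
        # Ask the model to write an abstract from nothing but a title
        # and a journal name, mirroring the study's scant input.
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",  # assumed stand-in for ChatGPT
            messages=[{
                "role": "user",
                "content": (f"Write a scientific abstract for an article "
                            f"titled '{title}', in the style of the "
                            f"journal {journal}."),
            }],
            temperature=0.7,
        )
        return response.choices[0].message.content

    # Hypothetical title, for illustration only
    print(draft_abstract("A made-up clinical trial title", "JAMA"))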

The result was as follows. All of the ChatGPT abstracts were clearly written, although few conformed to the journals’ strict formats. Most were identified by the AI detector, which, as expected, did not flag the original abstracts (except for one). Even so, five “synthetic” abstracts were accepted by the detector as written by humans. Moreover, all the abstracts written by ChatGPT passed the plagiarism detector without problems. Most interesting of all, when the expert reviewers were given a mix of abstracts of both types, they detected only 68% of the artificial ones, accepting the remaining 32% as genuine. Conversely, the reviewers incorrectly identified 14% of the original abstracts as products of ChatGPT, and they noted the difficulty of distinguishing between the two sets. They did, however, observe a certain vagueness in the wording of the abstracts prepared by ChatGPT, which allowed a correct identification in two out of three cases.
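
The article does not name the detector used, but AI-output detectors of this kind are publicly available. Below is a minimal sketch assuming the RoBERTa-based GPT-2 output detector published on Hugging Face; whether the study used this exact tool is an assumption.

    from transformers import pipeline

    # "roberta-base-openai-detector" is a public GPT-2 output detector;
    # using it as a stand-in for the study's detector is an assumption.
    detector = pipeline("text-classification",
                        model="roberta-base-openai-detector")

    abstract = "We conducted a randomized trial of ..."  # placeholder text
    result = detector(abstract, truncation=True)[0]

    # The model labels text "Real" (human-written) or "Fake"
    # (machine-generated), with a confidence score between 0 and 1.
    print(f"{result['label']}: {result['score']:.3f}")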

If ChatGPT was able to fool the reviewers with so little input information, what unprecedented situations and potential ethical conflicts will GPT-4 bring us? We had better get ready. For now, the authors of the article propose that scientific journals incorporate AI-based text detectors alongside plagiarism detectors, and that authors explicitly declare whether a text was created using ChatGPT. Some have gone even further: ChatGPT has already become the first non-human co-author of a scientific article!

This article was sent to us by Ramon Muñoz-Chapuli (Granada, 1956), who was a professor of Animal Biology at the University of Malaga until his recent retirement. He has published a hundred scientific articles on Animal Development and Evolutionary Biology in national and international journals, as well as numerous popular-science articles. His teaching has focused mainly on these topics, although he has also taught History of Biology and Philosophy of Science at the postgraduate level. He has been Vice Dean of the Faculty of Sciences and Director of the Doctoral School of the UMA. He is the author of several award-winning stories in literary competitions and of two novels, The Dream of the Antichrist and Zugwang.
