Peer review is unable to identify articles generated with ChatGPT

by time news

2023-09-12 13:57:57

Peer review is a guarantee of the quality of scientific publications. The use of artificial intelligence tools such as ChatGPT by authors poses a challenge for peer review. Should a reviewer question whether an article was generated, in part, with ChatGPT? Should a reviewer reject an article that contains “Regenerate response” as a paragraph? An article in the journal Physica Scripta (Q2 in JCR 2023), received on May 31 and accepted on July 27, includes that very phrase without the scientific reviewers, the editor, or the typesetter noticing. It was spotted by Guillaume Cabanac, who reported it on PubPeer; Gemma Conroy covered the story in Nature. The editor of Physica Scripta will retract the article because the authors have admitted that they used ChatGPT without disclosing it. Had they disclosed it in the acknowledgments, would their article have been acceptable? Would the reviewers have been more alert and caught the slip? Should publishers detect ChatGPT use, leaving reviewers to focus on the scientific content? With the arrival of ChatGPT, scientific publishing has changed forever.

The article in question presents solutions, in terms of Jacobi elliptic functions, of a partial differential equation (the so-called Hamiltonian amplitude equation). To do so, it uses a trivial method (∗) based on a series expansion in Jacobi elliptic functions (JEFEM). ChatGPT is unable to implement such a method, despite its triviality. It was surely used to write the English text, revising the entire article paragraph by paragraph. In the cut-and-paste process, the authors included the ChatGPT interface label “Regenerate response”. A minor gaffe, but one that reveals they used the tool. Many journals require authors to state explicitly in the Acknowledgments section whether or not they have used this type of tool. But many authors are embarrassed to admit in this way that their English is poor; for this reason, they omit an acknowledgment that is now considered mandatory (and that will become very common in the coming years). Scientific-article sleuths like Cabanac have detected typical ChatGPT phrases in many articles published in journals from many publishers. Techniques for detecting the subtle traces left by these artificial intelligences are improving every day. Just as with plagiarism-detection software, all publishers will end up using these tools to screen articles before peer review. Perhaps then reviewers will only have to worry about the scientific content.

The article under discussion is Sibel Tarla, Karmina K. Ali, Abdullahi Yusuf, “Exploring new optical solutions for nonlinear Hamiltonian amplitude equation via two integration schemes,” Physica Scripta 98: 095218 (09 Aug 2023), doi: https://doi.org/10.1088/1402-4896/aceb40. The news piece I am echoing is Gemma Conroy, “Scientific sleuths spot dishonest ChatGPT use in papers. Manuscripts that don’t disclose AI assistance are slipping past peer reviewers,” News, Nature, 08 Sep 2023, doi: https://doi.org/10.1038/d41586-023-02477-w.

(∗) In this context, when I use the word trivial, I mean that any mathematics or physics undergraduate can apply this method with symbolic software such as Mathematica, without even having studied the theory of Jacobi elliptic functions. Trivial means mechanizable.
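To illustrate what I mean by mechanizable, here is a minimal sketch in Python with SymPy. It is not the computation from the paper, nor the Hamiltonian amplitude equation itself: it applies the Jacobi-elliptic-expansion idea to a Duffing-type reduced ODE, u'' + p*u + q*u**3 = 0, chosen only as a stand-in, with the short ansatz u = a0 + a1*sn. All names and the test equation are illustrative assumptions.

```python
# Minimal sketch (illustrative, not the authors' computation): substitute the
# Jacobi-elliptic ansatz u(xi) = a0 + a1*sn(xi|m) into the Duffing-type ODE
#     u'' + p*u + q*u**3 = 0
# apply the derivative rules of sn, cn, dn symbolically, and let SymPy solve
# the resulting polynomial system for the coefficients.
import sympy as sp

xi, m, p, q, a0, a1 = sp.symbols('xi m p q a0 a1')
sn, cn, dn = sp.symbols('sn cn dn')      # treat sn(xi|m), cn, dn as symbols

# Derivative rules d/dxi (modulus k, parameter m = k**2):
#   sn' = cn*dn,  cn' = -sn*dn,  dn' = -m*sn*cn
rules = {sn: cn*dn, cn: -sn*dn, dn: -m*sn*cn}

def d_dxi(expr):
    """Differentiate an expression in sn, cn, dn with respect to xi."""
    return sum(sp.diff(expr, f) * df for f, df in rules.items())

u = a0 + a1*sn                                # the JEF expansion ansatz
ode = d_dxi(d_dxi(u)) + p*u + q*u**3          # plug into u'' + p*u + q*u**3

# Eliminate cn, dn via cn**2 = 1 - sn**2 and dn**2 = 1 - m*sn**2,
# then view the result as a polynomial in sn
ode = sp.expand(ode).subs({cn**2: 1 - sn**2, dn**2: 1 - m*sn**2})
coeffs = sp.Poly(sp.expand(ode), sn).coeffs()  # each coefficient must vanish

sols = sp.solve(coeffs, [a0, a1, p], dict=True)
print(sols)  # includes, e.g., a0 = 0, p = 1 + m, a1**2 = -2*m/q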

