
DOI: 10.1055/s-0045-1810418
From Plagiarism to AI-Generated Text: An Editor's Struggle to Ensure Originality


Long before the advent of generative artificial intelligence (AI), academia ensured the originality of research articles through the use of plagiarism detectors. Generative AI tools became widely available in 2022, offering functions that include drafting scientific text. This has necessitated the development of detectors that can determine whether a given text was written by a human or generated by AI.
Unfortunately, most of today's available AI detectors have the disadvantage of producing false positive results, a problem that also plagued early plagiarism detectors. Just as early plagiarism software flagged citations as plagiarism, AI detectors now mistake concise phrasing or technical terminology for AI-written text. Writers are forced to change their style to avoid algorithmic scrutiny, sacrificing clarity for the sake of "human-like" metrics.
As a chief editor committed to ensuring the originality of submitted articles, I have evaluated several publicly available AI detection tools. These tools aim to identify whether a piece of text has been generated by AI models. However, I found that they produced noticeably inconsistent results when applied to the same content. As a case study, I tested the abstract of a peer-reviewed article published in 2012,[1] well before the introduction of generative AI, using four different detectors. The results were astonishing: the AI-generated probability scores ranged from 0 to 89% (see [Fig. 1]).


We all believe that original research was never about the novelty of the text; it is the product of rigorous inquiry, genuine insight, and intellectual synthesis, qualities that no present AI detector can measure. Plagiarism software failed to capture this logic, and AI detectors repeat the same mistake. They cannot distinguish between AI-assisted drafting and AI-generated thought, just as old plagiarism tools could not separate boilerplate text from conceptual theft.
I believe the solution lies not in developing better detectors but in reinforcing human judgment. Just as we once learned that a plagiarism report is not a verdict but a starting point for evaluation, we must approach AI detection results with the same logic. The tools change as technology advances, but the challenge remains: to promote authentic scholarship without deferring entirely to machine judgment. If we learned anything from the plagiarism era, it is that originality flourishes in discussion, not surveillance. Let us not repeat history's mistakes.
No conflict of interest has been declared by the author(s).
Reference
1. Elhwuegi AS, Darez AA, Langa AM, Bashaga NA. Cross-sectional pilot study about the health status of diabetic patients in city of Misurata, Libya. Afr Health Sci 2012; 12 (01): 81-86
Publication History
Article published online:
08 August 2025
© 2025. The Author(s). This is an open access article published by Thieme under the terms of the Creative Commons Attribution License, permitting unrestricted use, distribution, and reproduction so long as the original work is properly cited. (https://creativecommons.org/licenses/by/4.0/)
Thieme Medical and Scientific Publishers Pvt. Ltd.
A-12, 2nd Floor, Sector 2, Noida-201301 UP, India