AI can fake entire scientific papers. What does this mean for scholarly publishing?

This student story was published as part of the 2023 NASW Perlman Virtual Mentoring Program organized by the NASW Education Committee, providing science journalism practice and experience for undergraduate and graduate students.

Story by Jenna Jakubisin
Mentored and edited by Shaun Kirby

After the launch of ChatGPT in November 2022, neurosurgeon Martin Májovský was excited by the hype surrounding language models—and eager to test the tool with his colleagues at Charles University in Prague, Czech Republic.

Generative artificial intelligence software like OpenAI’s ChatGPT is trained on a massive dataset of internet text, producing user-prompted writing that can be hard to distinguish from a human’s. The potential for research misconduct—plagiarizing, faking or misrepresenting data—has sparked concern in the scholarly publishing community, and it gave Májovský an idea.

“We said, ‘okay, let’s try to generate [a] completely fake article by ChatGPT’,” Májovský says.

Research misconduct is not a new problem, but generative AI makes it easier than ever to fabricate research, raising questions about ethics, accuracy and accessibility. While AI like ChatGPT can help authors brainstorm research ideas, format citations and analyze data, it can also be biased or wrong (at the time, ChatGPT had been trained only on data through September 2021). In their study, Májovský and colleagues used ChatGPT to fabricate an entire scientific manuscript—quickly and convincingly—in the field of neurosurgery.

Prompting ChatGPT to create a manuscript in the style of PLOS Medicine, a high-impact open-access journal, the researchers received a title: ‘Effectiveness of Deep Brain Stimulation for Treatment-resistant Depression: A Randomized Controlled Trial’. Next, they requested an abstract. An hour later, without any special training, Májovský and his team had a fake manuscript in the IMRAD format (introduction, methods, results and discussion), complete with references and a datasheet.

The researchers then ran the paper through two detection tools: AI Detector rated the probability of AI-generated text at 48%, while OpenAI’s AI Text Classifier labeled the paper “unclear.”

Experts in neurosurgery, psychiatry, and statistics weighed in next. They judged the manuscript technically accurate but, at just 1,992 words, much shorter than real articles. The experts also flagged fabricated references and the absence of standard elements, like a registration number from ClinicalTrials.gov (a database of human medical studies) and coverage of adverse events. Finally, the authors used ChatGPT to analyze the fake article like a peer reviewer. The AI-generated review identified strengths, weaknesses and potential revisions.

“As AI language models continue to advance… it will become increasingly important to develop ethical guidelines and best practices for their use in scientific writing and research,” the authors concluded.

Pedro Ballester, who authored a commentary on the study, is a computer scientist at The Hospital for Sick Children in Toronto, Ontario, Canada. He and Májovský believe academia’s “publish or perish” culture—in which opportunities for career advancement, tenure, and funding are often tied to publications—may motivate research fabrication. Roughly three million scientific papers were published in 2018, and an estimated 24% of medical papers published in 2020 were fabricated or plagiarized.

Another pressure, says Ballester, is the race among researchers to publish first and avoid being scooped. “There’s a good chance [the peer reviewer] might be your competitor,” he explains.

But both researchers feel that banning generative AI is not the answer to preventing misconduct.

Generative AI could, for example, improve research accessibility and equity between native English speakers and researchers working in a second language. A 2020 study in PLOS ONE found that 98% of scientific publications are in English. While a shared language is key for science communication, it may give native speakers an unfair advantage, and professional language-editing services are expensive for international authors.

Jennifer Regala, Director of Publications and Executive Editor at the American Urological Association, says that using generative AI for “language perfection” can level the playing field, but there are no one-size-fits-all answers.

“I think it’s hard to come up with any hard and fast rules, because things are changing so quickly,” she explains.

As generative AI tools evolve, so will the software and strategies used to detect their output. To foster transparency, Ballester and Májovský want anonymized data sets made public so that research studies can be replicated. They also see a need for changes in peer review—a system in crisis—such as more diversity in reviewer backgrounds, better statistical training, and open science practices (e.g., code sharing and open peer review). And for manuscript submission, author instructions and editorial policies should be clearly stated on journal websites.

Regala says that publishers need to stay relevant and foster community engagement. “We work really hard to talk to our authors, our reviewers, our editors, and our peer reviewers to understand what they’re thinking, and also explain what we’re doing, too.

“Artificial intelligence is a disaster waiting to happen, but also it’s a solution to so many problems.”

Jenna Jakubisin is pursuing a master’s degree in Science Writing at Johns Hopkins University. She is the Managing Editor of the journal Radiology and a member of the Council of Science Editors. Connect with her on LinkedIn, on Twitter @heyjennajay, or via email at jjakubi1@jhu.edu.



The NASW Perlman Virtual Mentoring program is named for longtime science writer and past NASW President David Perlman. Dave, who died in 2020 at the age of 101 only three years after his retirement from the San Francisco Chronicle, was a mentor to countless members of the science writing community and always made time for kind and supportive words, especially for early career writers. Contact NASW Education Committee Co-Chairs Czerne Reid and Ashley Yeager and Perlman Program Coordinator Courtney Gorman at mentor@nasw.org. Thank you to the many NASW member volunteers who spearhead our #SciWriStudent programming year after year.

Founded in 1934 with a mission to fight for the free flow of science news, NASW is an organization of roughly 2,600 professional journalists, authors, editors, producers, public information officers, students and people who write and produce material intended to inform the public about science, health, engineering, and technology. To learn more, visit www.nasw.org.