Science journal retracts peer-reviewed article containing AI-generated ‘nonsensical’ images


An open-access scientific journal, Frontiers in Cell and Developmental Biology, was openly criticized and mocked by researchers on social media this week after they noticed it had recently published an article containing imagery with gibberish labels and anatomically incorrect diagrams of mammalian testicles and sperm cells, all bearing the hallmarks of an AI image generator.

The publication has since responded to one of its critics on the social network X, posting from its verified account: “We thank the readers for their scrutiny of our articles: when we get it wrong, the crowdsourcing dynamic of open science means that community feedback helps us to quickly correct the record.” It has also removed the article, titled “Cellular functions of spermatogonial stem cells in relation to JAK/STAT signaling pathway,” from its website and issued a retraction notice stating:

“Following publication, concerns were raised regarding the nature of its AI-generated figures. The article does not meet the standards of editorial and scientific rigor for Frontiers in Cell and Development Biology; therefore, the article has been retracted.

This retraction was approved by the Chief Executive Editor of Frontiers. Frontiers would like to thank the concerned readers who contacted us regarding the published article.


Misspelled words and anatomically incorrect illustrations

However, VentureBeat obtained a copy of the original article and has republished it below in the interest of maintaining the public record.

As you can observe, the article contains several graphics and illustrations rendered in a seemingly clean and colorful scientific style, but zooming in reveals many misspelled words and misshapen letters, such as “protemns” instead of “proteins,” and a word spelled “zxpens.”


Perhaps most problematic is the image of a “rat” (one of the few correctly spelled labels), which appears first in the paper and shows a massive growth in the animal’s groin region.


Blasted on X

Shortly after the paper’s publication on February 13, 2024, researchers took to X to call it out and question how it made it through peer review.

The paper is authored by Xinyu Guo and Dingjun Hao of the Department of Spine Surgery, Hong Hui Hospital at Xi’an Jiaotong University; as well as Liang Dong of the Department of Spine Surgery, Xi’an Honghui Hospital in Xi’an, China.

It was reviewed by Binsila B. Krishnan of the National Institute of Animal Nutrition and Physiology (ICAR) in India and Jingbo Dai of Northwestern Medicine in the United States, and edited by Arumugam Kumaresan at the National Dairy Research Institute (ICAR) in India.

VentureBeat reached out to all the authors and editors of the paper, as well as Amanda Gay Fisher, the journal’s Field Chief Editor, and a professor of biochemistry at the prestigious Oxford University in the UK, to ask further questions about how the article was published, and will update when we hear back.

Troubling wider implications for AI’s impact on science, research, and medicine

AI has been touted by some of its makers as a valuable tool for advancing scientific research and discovery, including by Google DeepMind with its AlphaFold protein structure predictor and its materials science AI GNoME, which was recently covered positively by the press (including VentureBeat) for discovering 2 million new materials.

However, those tools are focused on the research side. When it comes to publishing that research, it is clear that AI image generators could pose a major threat to scientific accuracy, especially if researchers use them indiscriminately to cut corners and publish faster, or because they are malicious or simply don’t care.

Using AI to create scientific illustrations or diagrams is troubling because it undermines trust, among both the scientific community and the wider public, that work in fields that directly affect our lives and health, such as medicine and biology, is accurate, safe, and properly screened.

Yet it may also be a product of the wider “publish or perish” climate that has taken hold in science over the last several decades, in which researchers attest they feel pressure to rush out papers of little value simply to show they are contributing something, anything, to their field, and to bolster their citation counts, padding their resumes for future jobs.

And let’s be honest: some of the researchers on this paper work in spine surgery at a human hospital. Would you trust them to operate on your spine or advise on your back health?

And with more than 114,000 citations to its name, the journal Frontiers in Cell and Developmental Biology has now had the integrity of all of them called into question by this lapse: how many more of its published papers contain AI-generated diagrams that slipped through the review process?

Intriguingly, Frontiers in Cell and Developmental Biology is part of the wider Frontiers portfolio of more than 230 scientific publications, founded in 2007 by neuroscientists Kamila Markram and Henry Markram, the former of whom is still listed as CEO.

The company says its “vision [is] to make science open, peer-review rigorous, transparent, and efficient and harness the power of technology to truly serve researchers’ needs,” and in fact, some of the technology it uses for peer review is AI.

As Frontiers proclaimed in a 2020 press release:

In an industry first, Artificial Intelligence (AI) is being deployed to help review research papers and assist in the peer-review process. The state-of-the-art Artificial Intelligence Review Assistant (AIRA), developed by open-access publisher Frontiers, helps editors, reviewers and authors evaluate the quality of manuscripts. AIRA reads each paper and can currently make up to 20 recommendations in just seconds, including the assessment of language quality, the integrity of the figures, the detection of plagiarism, as well as identifying potential conflicts of interest.

The company’s website notes AIRA debuted in 2018 as “The next generation of peer review in which AI and machine learning enable more rigorous quality control and efficiency in the peer review.”

And just last summer, an article and video featuring Mirjam Eckert, chief publishing officer at Frontiers, stated:

At Frontiers, we apply AI to help build that trust. Our Artificial Intelligence Review Assistant (AIRA) verifies that scientific knowledge is accurately and honestly presented even before our people decide whether to review, endorse, or publish the research paper that contains it.

AIRA reads every research manuscript we receive and makes up to 20 checks a second. These checks cover, among other things, language quality, the integrity of figures and images, plagiarism, and conflicts of interest. The results give editors and reviewers another perspective as they decide whether to put a research paper through our rigorous and transparent peer review.
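
For a sense of what this kind of automated screening looks like mechanically, here is a minimal sketch of a manuscript checker, written in Python. To be clear, this is not Frontiers’ actual AIRA code, which is proprietary; every function name, word list, and threshold below is a hypothetical stand-in for the categories of checks the company describes, such as language quality and figure integrity.

```python
# Purely illustrative sketch: a toy manuscript screener loosely modeled on the
# kinds of checks Frontiers says AIRA performs (language quality, figure-label
# integrity). All names, word lists, and thresholds are hypothetical stand-ins;
# the real AIRA system is proprietary and not public.

import re
from dataclasses import dataclass, field

# Tiny stand-in vocabulary; a real checker would use a full dictionary or a
# language model to score text quality.
KNOWN_WORDS = {
    "the", "of", "in", "and", "cell", "cells", "cellular", "functions",
    "protein", "proteins", "stem", "signaling", "pathway", "rat", "sperm",
}

@dataclass
class Report:
    flags: list[str] = field(default_factory=list)

    def flag(self, check: str, detail: str) -> None:
        self.flags.append(f"[{check}] {detail}")

def check_figure_labels(labels: list[str], report: Report) -> None:
    # Flag label tokens that look like gibberish (e.g. "protemns", "zxpens").
    for label in labels:
        for word in re.findall(r"[a-z]+", label.lower()):
            if len(word) > 3 and word not in KNOWN_WORDS:
                report.flag("figure-integrity", f"suspicious label token: {word!r}")

def check_language_quality(text: str, report: Report) -> None:
    # Crude proxy for language quality: the share of unrecognized words.
    words = re.findall(r"[a-z]+", text.lower())
    unknown = [w for w in words if len(w) > 3 and w not in KNOWN_WORDS]
    if words and len(unknown) / len(words) > 0.3:  # hypothetical threshold
        report.flag("language", f"{len(unknown)}/{len(words)} unrecognized words")

def screen_manuscript(text: str, figure_labels: list[str]) -> Report:
    report = Report()
    check_language_quality(text, report)
    check_figure_labels(figure_labels, report)
    return report

if __name__ == "__main__":
    result = screen_manuscript(
        text="Cellular functions of stem cells in the signaling pathway",
        figure_labels=["rat", "protemns", "zxpens"],  # label text like the retracted paper's
    )
    for line in result.flags:
        print(line)  # a human editor would review anything flagged here
```

Even a filter this crude would flag “protemns” and “zxpens,” but only if the label text is machine-readable; text baked into an AI-generated image would first have to be extracted via OCR, which is one plausible reason such gibberish can slip past automated checks.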

Frontiers has also received favorable coverage of its AI article review assistant AIRA in such notable publications as The New York Times and Financial Times.

Clearly, the tool, if it was used at all in this case, was not able to catch the nonsensical images that led to the article’s retraction. The episode also raises questions about the ability of such AI tools to detect, flag, and ultimately stop the publication of inaccurate scientific information, even as their use grows at Frontiers and elsewhere across the publishing ecosystem. Perhaps that is the danger of being on the “frontier” of a new technology movement such as AI: the risk of it going wrong is higher than with the tried-and-true, human-only or analog approach.

VentureBeat also relies on AI tools for image generation and some text, but all articles are reviewed by human journalists prior to publication. AI was not used by VentureBeat in the writing, reporting, illustrating or publishing of this article.





