Issue #93
by Michael Seadle (Humboldt-Universität zu Berlin)
Retraction Watch recently published a reference to a Nature article by Diana Kwon titled "AI-Generated Images Threaten Science — Here's How Researchers Hope to Spot Them."¹ The quality of AI-generated images has improved markedly in recent years.
“Already, an arms race is emerging as integrity specialists, publishers and technology companies race to develop AI tools that can assist in rapidly detecting deceptive, AI-generated elements of papers. … Pinpointing AI-produced images poses a huge challenge: they are often almost impossible to distinguish from real ones, at least with the naked eye.”¹
This could well mean that some fraudulent papers are slipping through, but new AI-based detection tools may help to catch them. Some companies have already begun developing such tools:
“The makers behind tools such as Imagetwin and Proofig, which use AI to detect integrity issues in scientific figures, are expanding their software to weed out images created by generative AI.”¹
Whether these new tools are reliable is of course a serious question:
“‘I have great hopes for these tools,’ [Jana] Christopher says. But she notes that their outputs will always need to be assessed by an expert who can verify the issues they flag. … These tools are ‘limited, but certainly very useful, as it means we can scale up our effort of screening submissions,’ she adds.”¹
Publishers themselves are interested in addressing the problem, because it affects their reputations, but a serious open question is how capably and how quickly the publishing industry will be able to address it. It seems unlikely that even major publishers have the in-house AI expertise to build effective detection tools, or they could have done so already.
Committees are also working on the problem, including "United2Act and the STM Integrity Hub."¹ The focus of both projects appears to be paper mills, which are definitely a concern for the scholarly community and represent a problem that legitimate publishers have a commercial interest in addressing. As Kevin Patrick says:
“Fraudsters shouldn’t sleep well at night. They could fool today’s process, but I don’t think they’ll be able to fool the process forever.”¹
Forever may be longer than scholars want to wait. Perhaps the iSchools should set up a research group to address the problem.
1: Kwon, Diana. “AI-Generated Images Threaten Science — Here’s How Researchers Hope to Spot Them.” Nature, November 5, 2024. https://doi.org/10.1038/d41586-024-03542-8.