AI-generated images threaten science - here's how researchers hope to spot them
Briefly

"Generative AI is evolving very fast," says Jana Christopher, an image-integrity analyst at FEBS Press in Heidelberg, Germany. "The people that work in my field - image integrity and publication ethics - are getting increasingly worried about the possibilities that it offers." This highlights the rapid advancement of AI technology and the subsequent anxiety among professionals regarding its impact on scientific integrity.
"It's a scary development," Christopher says. "But there are also clever people and good structural changes that are being suggested." This acknowledges the dual nature of AI's progression, as professionals strive for innovative solutions while grappling with the challenges it poses to research ethics.
"In the near future, we may be okay with AI-generated text," says Elisabeth Bik, an image-forensics specialist and consultant. "But I draw the line at generating data." This reflects an emerging consensus about boundaries regarding the acceptable use of AI in research, emphasizing concern over the integrity of data.
Bik, Christopher and others suspect that data, including images, fabricated using generative AI are already widespread in the literature, and that paper mills are taking advantage of AI tools to produce manuscripts en masse. In other words, AI-generated content may already have infiltrated the published literature, posing a significant challenge to research integrity.
Read at Nature