The LAION research database, integral to AI image generation, has removed over 2,000 links to suspected child sexual abuse imagery, addressing previous concerns raised by experts.
Researchers found that the database, which underpins image-generation tools such as Stable Diffusion, had contributed to those tools producing realistic deepfakes of children, prompting the cleanup effort.
Following a December report from Stanford, LAION collaborated with anti-abuse organizations to ensure a safer database for the ethical use of AI technology in research.
Experts emphasize that while significant progress has been made, the next critical step in protecting children is withdrawing older AI models that remain capable of producing harmful content.