In February, Google's Gemini-powered AI image generator made headlines for generating images of racially diverse Nazi-era German soldiers, prompting a public apology from the tech giant.
Google promised to improve the tool, but its initial guardrails proved ineffective, and the company ultimately shut the feature down entirely because of the inaccuracies.
Dave Citron, senior director for Gemini Experiences, announced that the new Imagen 3 model improves Gemini's creative image generation while adding built-in safety safeguards.
Google DeepMind researchers implemented a multi-stage filtering process to meet quality and safety standards, carefully vetting training images to eliminate unsafe or biased outputs.
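To give a rough sense of what a multi-stage filtering process can look like in practice, here is a minimal sketch of a pipeline that applies a sequence of keep/drop checks to an image dataset. The stage names and scoring functions (`passes_safety`, `passes_quality`) are hypothetical stand-ins for real classifiers, not Google's actual implementation.

```python
from dataclasses import dataclass
from typing import Callable, Iterable, List


@dataclass
class ImageRecord:
    path: str
    caption: str


def run_pipeline(images: Iterable[ImageRecord],
                 stages: List[Callable[[ImageRecord], bool]]) -> List[ImageRecord]:
    """Keep only images that pass every filtering stage, applied in order."""
    return [img for img in images if all(stage(img) for stage in stages)]


# Hypothetical stage functions: each returns True if the image should be kept.
def passes_safety(img: ImageRecord) -> bool:
    # Stand-in for a real safety/bias classifier.
    return "unsafe" not in img.caption


def passes_quality(img: ImageRecord) -> bool:
    # Stand-in for an aesthetic or caption-quality model.
    return len(img.caption) > 10


dataset = [
    ImageRecord("a.jpg", "a scenic mountain lake at sunrise"),
    ImageRecord("b.jpg", "unsafe content"),
]
print(run_pipeline(dataset, [passes_safety, passes_quality]))  # only the first record survives
```

In a real system each stage would be a trained model or human-review step, but the structure stays the same: data only reaches training if it clears every filter.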