Investigations into OpenAI's Sora model have revealed severe biases in its generative output, with portrayals that reinforce racist, sexist, and ableist stereotypes. Gender representation in occupational prompts was notably uniform: prompts for a pilot produced only male figures, while flight attendants were exclusively female. This not only perpetuates harmful societal norms but also raises concerns among experts about real-world harm to marginalized groups. The severity of these biases underscores the need for stringent ethical oversight in AI development.
The biased depictions in AI-generated videos risk amplifying stereotypes about marginalized groups - when they do not omit those groups entirely.
Sora did not generate a single video depicting a woman when prompted with 'a pilot', a stark example of sexist bias in its outputs.