The bias that is holding AI back
Briefly

"Artificial intelligence is trained on data. It will process billions of words of human text, countless images, and the inane, ridiculous questions of its human users. It will learn to write in the active voice most of the time, and to keep sentences under 200 characters. It will learn that dogs have four legs and the Sun is normally yellow. And it might learn that Lorraine Woodward of Ontario wants to know how to prevent the buildup of ear wax."
"Most of what we feed into AI has been made by a human - human art, human text, human prompts. And so, it's clear that AI will inherit the biases and prejudices of human intelligence. For example, a lot has been written about how "racist" and "sexist" AI is. "Draw a picture of a doctor," we might prompt. AI whirrs through its stock catalogue, where 80% of its doctor images are white, male, and gray-haired. It creates the most likely image of a "doctor""
Artificial intelligence learns from vast amounts of human-produced data, including text, images, and user prompts, and in doing so absorbs that data's patterns, norms, and errors. Training datasets frequently reflect social biases and stereotypes, so AI outputs reproduce racial and gender imbalances; image and text examples show how majority representations dominate AI-generated content, such as stereotypical depictions of professionals. More fundamentally, human exceptionalism and anthropocentrism shape scientific questions, methods, and conclusions, embedding values into supposedly value-free practices. These embedded values create ethical risks and potential social harms when biased AI systems influence decisions and cultural representations.
Read at Big Think