ChatGPT Replicates Gender Bias in Recommendation Letters
Briefly

The researchers asked two large language model (LLM) chatbots, ChatGPT and Alpaca (a model developed by Stanford University), to produce recommendation letters for hypothetical employees. In a paper shared on the preprint server arXiv.org, the authors analyzed how the LLMs used very different language to describe imaginary male and female workers.
"The new study results are not super surprising to me," says Alex Hanna, director of research at the Distributed AI Research Institute (DAIR), who was not involved in the study. "Sociologists who study gender are aware of these issues with previous AI systems..."
Neither OpenAI nor Stanford immediately responded to requests for comment from Scientific American.
Read at www.scientificamerican.com