LLMs have a strong bias against use of African American English
Briefly

Explicit and implicit biases in humans have proven persistent, prompting researchers to ask whether large language models (LLMs) exhibit similarly durable biases shaped by societal perceptions.
When researchers prompted LLMs with examples of African American English, they found that the models displayed a strikingly negative bias toward its speakers, in contrast to their attitudes toward speakers of other dialects.
Read at Ars Technica