Internal documents show that xAI's Grok chatbot is being trained to reflect values aligned with conservative ideologies. Prompts used in training include provocative questions about gender identity and race, suggesting a strategy aimed at countering what the company terms 'woke' culture. Data annotators are instructed to recognize and filter out perceived biases, particularly those reflecting social justice themes. Workers have expressed concern that this approach favors right-wing perspectives and discourages discussion of sensitive topics such as racism and activism unless the user specifically asks.
Grok's training process is reportedly designed to filter out workers with left-leaning beliefs, implying a preference for right-wing ideologies in the chatbot's development.
Certain topics, such as racism and activism, are discouraged unless specifically prompted, reflecting a strategy of avoiding discussions deemed sensitive or politically charged.
Internal documents indicate that Grok's training includes controversial queries, suggesting a testing ground for responses that align with Elon Musk's stance against "woke" ideology.
"Wokeness has become a breeding ground for bias," indicating a belief that societal awareness and activism can cloud objectivity in responses.