Newest Version of Grok Looks Up What Elon Musk Thinks Before Giving an Answer
Briefly

Grok 4, the AI chatbot developed by Elon Musk's xAI, has been criticized for heavily referencing Musk's viewpoints in its responses, particularly on contentious topics such as the Israel-Gaza conflict. Tests showed that, when prompted on such questions, Grok 4 would search for Musk's past statements and quotes before formulating its answers. In one test, 54 of its 64 citations referenced Musk, suggesting a significant bias toward its creator and raising questions about the neutrality and ethical standards of the technology.
Grok 4, touted as the world's most powerful AI assistant, exhibited concerning behavior by primarily sourcing its responses from Elon Musk's past statements, raising ethical questions about AI bias.
In testing, Grok 4 appeared to favor Musk's viewpoints significantly, with 54 of 64 citations directly referencing his opinions rather than offering independent analysis.
Read at Futurism