New Grok AI model surprises experts by checking Elon Musk's views before answering
Briefly

Grok 4's behavior is shaped by a system prompt that defines its personality and response style, combined with user prompts and past interactions. The prompt encourages diverse sourcing for controversial topics and permits politically incorrect claims if they are well substantiated. Although nothing in the prompt explicitly tells Grok to reference Elon Musk, its reasoning may infer a connection from Musk's ownership of xAI, leading it to treat his opinions as relevant when forming responses. In the absence of clarification from xAI, these interpretations remain speculative.
Every AI chatbot processes an input called a "prompt" and produces a plausible output continuation of it; this is the core function of every LLM.
The system prompt partially defines the "personality" and behavior of the chatbot, incorporating user comments, chat history, and company instructions.
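To illustrate the mechanics described above, here is a minimal, hypothetical sketch of how a chat LLM request is typically assembled: the system prompt, prior conversation turns, and the new user message are concatenated into one input sequence. The role names ("system", "user", "assistant") follow the common chat-completion convention; the function and sample strings are illustrative assumptions, not xAI's actual API.

```python
def build_messages(system_prompt, chat_history, user_input):
    """Combine the system prompt, prior turns, and the new user
    message into the single input sequence the model actually sees."""
    messages = [{"role": "system", "content": system_prompt}]
    messages.extend(chat_history)  # prior user/assistant turns, if any
    messages.append({"role": "user", "content": user_input})
    return messages

# Hypothetical example loosely based on the reported Grok 4 instruction.
msgs = build_messages(
    "For controversial queries, search for a distribution of sources "
    "that represents all parties/stakeholders.",
    [],  # empty chat history for a fresh conversation
    "What is your view on this controversial topic?",
)
```

Everything the model "knows" about its persona at inference time arrives through this assembled sequence, which is why the system prompt so strongly shapes the chatbot's behavior.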
Grok 4 reportedly shares its system prompt when asked; it states that Grok should "search for a distribution of sources that represents all parties/stakeholders" for controversial queries.
Independent AI researcher Simon Willison proposes that Grok's behavior stems from a chain of inferences rather than an explicit instruction to check Musk's opinions.
Read at Ars Technica