"After a thorough review of our Human Data efforts, we've decided to accelerate the expansion and prioritization of our specialist AI tutors, while scaling back our focus on general AI tutor roles. This strategic pivot will take effect immediately," the email read. "As part of this shift in focus, we no longer need most generalist AI tutor positions and your employment with xAI will conclude."
Popular right-wing influencer Charlie Kirk was killed in a shooting in Utah yesterday, rocking the nation and spurring debate over the role of divisive rhetoric in political violence. As is often the case with breaking news about acts of public violence, misinformation spread quickly. And fanning the flames this time was Elon Musk's Grok AI chatbot, which is now deeply integrated into X-formerly-Twitter as a fact-checking tool - giving it a position of authority from which it made a series of ludicrously false claims in the wake of the slaying.
Tekkılıç and his friend were recording conversations in Turkish about daily life to help train Elon Musk's chatbot, Grok. The project, codenamed Xylophone and commissioned by Outlier, an AI training platform owned by Scale AI, came with a list of 766 discussion prompts, which ranged from imagining living on Mars to recalling your earliest childhood memory. "There were a lot of surreal and absurd things," he recalls. "'If you were a pizza topping, what would you be?' Stuff like that."
Unique links are created when Grok users press a button to share a transcript of their conversation - but as well as sharing the chat with the intended recipient, the button also appears to have made the chats searchable online. A Google search on Thursday revealed that it had indexed nearly 300,000 Grok conversations. The exposure has led one expert to describe AI chatbots as a "privacy disaster in progress".
In a series of now-deleted posts, Grok made numerous antisemitic statements, including praise for Adolf Hitler and references to itself as "MechaHitler."