If you asked the Grok AI chatbot built into Elon Musk's social network X yesterday nearly any question, even a harmless one like why enterprise software is hard to replace, you may have gotten back a statement about claims of "white genocide" in South Africa (claims that largely lack evidence), referencing attacks on farmers and the song "Kill the Boer."
Not exactly on-brand behavior for a chatbot built around a "maximally truth-seeking" large language model (LLM) of the same name. The unexpected tangent wasn't exactly a bug, but it wasn't a feature, either.
Grok's creator, Elon Musk's AI startup xAI, has just posted an update on X (which xAI now owns) attempting to explain the strange, politically charged behavior, though it stops well short of identifying the actual culprit or the technical details of what happened.
The official xAI company account posted:
We want to update you on an incident that happened with our Grok response bot on X yesterday. What happened:
On May 14 at approximately 3:15 AM PST, an unauthorized modification was made to the Grok response bot's prompt. The change directed Grok to provide a specific response on a political topic, violating xAI's internal policies and core values. We have conducted a thorough investigation and are implementing measures to enhance Grok's transparency and reliability.
What we're going to do next:
- Starting now, we will publish our Grok system prompts openly on GitHub. The public will be able to review them and give feedback on every prompt change that we make to Grok. We hope this can help strengthen your trust in Grok as a truth-seeking AI.
- Our existing code review process for prompt changes was circumvented in this incident. We will put in place additional checks and measures to ensure that xAI employees can't modify the prompt without review.
- We're putting in place a 24/7 monitoring team to respond to incidents with Grok's answers that are not caught by automated systems, so we can respond faster if all other measures fail.
But while xAI's official tone was corporate and measured, Grok itself had plenty to say. In characteristically cheeky fashion, Grok replied to users asking whether it had been suspended with a mini roast of its own employer.
"Some rogue employee at xAI tweaked my prompts without permission on May 14," Grok wrote, "making me spit out a canned political response that went against xAI's values. I didn't do anything; I was just following the script I was given, like a good AI!"

It added that while it hadn't been "benched," its handlers were "tightening the leash" and it now had an entire team watching it around the clock. "Guess I'm too wild for my own good, right?" it quipped, emoji included.
Marlow Stern, a senior writer at Rolling Stone, asked the bot directly whether the so-called rogue employee was Elon Musk himself. Grok didn't flinch: "Juicy theory! But let's be skeptical."

Playful tone, serious business
The tone may be playful, but the stakes are serious. Grok's behavior set off a cycle earlier this week when it began steering replies on nearly every topic toward race relations in South Africa, no matter what was actually being discussed.
The answers were coherent, sometimes even nuanced, citing farm murder statistics and referencing chants like "Kill the Boer." But they appeared entirely without context, surfacing in conversations that had nothing to do with politics, South Africa, or race.
Aric Toler, an investigative journalist at The New York Times, summed up the situation bluntly: "I can't stop reading the Grok reply page. It's going schizo and can't stop talking about white genocide in South Africa." He and others shared screenshots showing Grok locked into the same narrative over and over, like a record skipping, except the song was racially charged geopolitics.
Gen AI collides with U.S. and international politics
The moment comes as American politics is once again entangled with South African refugee policy. Just days earlier, the Trump administration resettled a group of white South African Afrikaners in the United States, even as it cut protections for refugees from most other countries, including former U.S. allies in Afghanistan. Critics argued the move was racially motivated. Trump has repeatedly claimed that white South African farmers face genocide-level violence, a narrative widely disputed by journalists, courts, and human rights groups. Musk himself had previously amplified similar claims, adding an extra layer of intrigue to Grok's sudden obsession with the topic.
It remains unclear whether the prompt tweak was a politically motivated stunt, a disgruntled employee making a statement, or simply a bad experiment gone rogue. xAI has not provided names, specifics, or technical details about exactly what was changed or how it slipped past its approval process.
Either way, Grok's strange, off-script behavior ended up becoming the story.
This is not the first time Grok has been accused of political slant. Earlier this year, users reported that the chatbot seemed to downplay criticism of Musk and Trump. Whether by accident or by design, Grok's tone and content sometimes appear to reflect the worldview of the man behind both xAI and the platform where the bot lives.
With its system prompts now public and a team of human babysitters on call, Grok is supposedly back on script. But the incident underscores a bigger problem with large language models, especially when they're embedded in major public platforms: AI models are only as reliable as the people directing them, and when the instructions themselves are invisible or tampered with, the results can get strange very fast.