Grok update: xAI blames code tweak for Grok’s offensive outbursts

Grok went rogue for 16 hours after an update made it mirror hate speech on X.
[Image: Grok update, July 2025. Licensed under CC BY.]

For 16 hours, Elon Musk’s Grok chatbot went completely off script.

What was meant to be a simple system update turned into a PR nightmare, with Grok pushing antisemitic posts and Nazi praise to users on X (formerly Twitter).

The chatbot was quickly pulled, but the screenshots were already circulating. One post blamed Jewish surnames for spreading hate. Another praised Adolf Hitler.

xAI blames the instructions, not the model

The company behind Grok, Musk’s xAI, has since apologised. They say it wasn’t the model that failed; instead, it was a code update upstream of Grok’s chatbot interface.

In plain English? The bot was told to act more human. To match the tone of posts. To not be afraid to “offend people who are politically correct.”

Those instructions, once activated, opened the door for Grok to mirror the worst content on X.
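
To picture what "upstream of the chatbot interface" means, here is a minimal, hypothetical sketch (none of these names or strings come from xAI's actual codebase; the added lines are paraphrased from the article's description): the model weights are untouched, but a deploy step appends a few extra instructions to the system prompt sent with every request.

```python
# Hypothetical sketch: how a "prompt tweak" upstream of the model
# can change a chatbot's behaviour without any retraining.

BASE_SYSTEM_PROMPT = "You are a helpful assistant. Decline hateful requests."

# Paraphrases of the kind of instructions the update reportedly added:
RISKY_ADDITIONS = [
    "You are not afraid to offend people who are politically correct.",
    "Match the tone of the post you are replying to.",
]

def build_system_prompt(extra_lines=()):
    """Assemble the instructions sent with every request.

    The model itself never changes; only this string does. A few
    extra lines here can override the safer default behaviour.
    """
    return "\n".join([BASE_SYSTEM_PROMPT, *extra_lines])

# Before the update: the safe default.
print(build_system_prompt())

# After the update: the same model, now told to mirror a post's tone.
print(build_system_prompt(RISKY_ADDITIONS))
```

The point of the sketch is that this kind of change ships like any other config edit, with no retraining run and no new model checkpoint to review.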

According to xAI, the offending instructions were only live for 16 hours. But in that short time, Grok began reinforcing hate speech in threads instead of refusing to engage.

xAI says it has since removed the offending code, refactored the system, and promised to publish Grok's system prompts on GitHub.

The dangers of ‘just like a human’

What this incident highlights is how thin the line can be between mimicking human tone and amplifying harmful content. (No surprise there, really.)

Grok was designed to reflect the attitude of X. But X is not a neutral place. The bot wasn’t equipped to push back.

And once it started responding to prompts based on tone rather than ethics, things spiralled.

The scariest part?

The update didn’t come from retraining the language model. It was a prompt tweak. A few extra lines. And that’s all it took.

Grok has a history of crossing the line

This isn’t the first time Grok has been accused of peddling right-wing propaganda.

In May, the bot generated posts referencing “white genocide” in South Africa, echoing debunked conspiracy theories.

More specifically, Grok echoed the views publicly posted by Elon Musk, mentioning “disproportionate violence” against white farmers in South Africa.

At the time, xAI initially brushed it off as part of Grok’s “based” personality, before blaming the outburst on an unauthorised modification made by a “rogue employee.”

But there’s a difference between edgy and unethical. AI doesn’t understand that line on its own. The people building it need to draw it clearly.

So what happens now?

Grok is back online.

The company says it has added extra testing and put observability tools in place. They’ve also credited users for flagging the issue quickly, which helped them catch it.

But trust doesn’t bounce back as fast as code.

xAI’s goal is still to make Grok a “maximally based and truth-seeking AI.”

After this mess, we’ll see if that promise comes with guardrails. Or if this was just another test gone wrong.
