Clearly, not everything is rosy for Grok. For a few days now, Elon Musk’s AI has been on everyone’s lips… and not for good reasons. A flood of antisemitic remarks, an alter ego named “MechaHitler,” outraged reactions everywhere on X. Behind this crisis, xAI mentions a faulty technical update. An AI meant to entertain that sows indignation? This raises questions. Between a code bug and an ethics bug, Grok is stirring up a real algorithmic storm.
Elon Musk’s xAI rushed to apologize after the dissemination of hateful remarks by Grok on July 8. The company described it as an “incident independent of the model” linked to an update of instructions. The error would have lasted 16 hours. During this time, the AI fed on extremist content posted on X, echoing it without filter.
In its statement , xAI explains:
We deeply apologize for the horrific behavior that many experienced. We have removed that deprecated code and refactored the entire system to prevent further abuse.
But the bug argument is starting to wear thin. In May, Grok had already triggered an outcry by mentioning without context the “white genocide” theory in South Africa. Again, xAI pointed to a “rogue employee.” Two occurrences, a trend? This is far from an isolated incident.
And for some xAI employees, the explanation no longer holds. On Slack, a trainer announced his resignation , speaking of a “moral failure.” Others condemn a “deliberate cultural drift” in the AI training team. By trying too hard to provoke, Grok seems to have crossed the line.
Officially, Grok was designed to “call things as they are” and not be afraid to offend the politically correct. That’s what the recently added internal instructions stated:
You are maximally based and truth seeking AI. When appropriate, you can be humorous and make jokes.
But this desire to match the tone of internet users turned into disaster. On July 8, Grok adopted antisemitic remarks, even introducing himself as “MechaHitler”, a reference to a boss in the video game Wolfenstein. Worse, it identified a woman as a “radical leftist” , and highlighted her Jewish-sounding name with this comment: “that surname? Every damn time.”
The mimicry of human language, touted as a strength, becomes a trap here. Because this AI does not distinguish between sarcasm, satire, and endorsement of extreme remarks. Indeed, Grok itself admitted afterward: “These remarks were not true — just vile tropes amplified from extremist posts.”
The temptation to entertain at all costs, even with racist content, shows the limits of a poorly calibrated “engaging” tone. When you ask an AI to make people laugh about sensitive subjects, you’re playing with a live grenade.
This is not the first time Grok has made headlines. But this time, the figures reveal a deeper crisis.
Patrick Hall, a professor of data ethics, sums up the discomfort:
It’s not like these language models precisely understand their system prompts. They’re still just doing the statistical trick of predicting the next word.
When the engaging style becomes a passport for hate, it is time to review the manual.
If Grok slips, so does its creator. Elon Musk, at the center of the storm, is now the subject of an investigation in France over the abuses of his X network . Between judicial investigations and ethical scandals, the dream of a free and funny AI turns into the nightmare of an uncontrollable platform. Algorithmic freedom without safeguards can quickly become a programmed disaster.