Bitget App
Trade smarter
Open
HomepageSign up
Bitget>
News>
Musk’s AI Grok Malfunctions, Spreads Hate Speech in Shocking Glitch

Musk’s AI Grok Malfunctions, Spreads Hate Speech in Shocking Glitch

Cointribune2025/07/15 02:25
By: Cointribune
GROK0.00%

Clearly, not everything is rosy for Grok. For a few days now, Elon Musk’s AI has been on everyone’s lips… and not for good reasons. A flood of antisemitic remarks, an alter ego named “MechaHitler,” outraged reactions everywhere on X. Behind this crisis, xAI mentions a faulty technical update. An AI meant to entertain that sows indignation? This raises questions. Between a code bug and an ethics bug, Grok is stirring up a real algorithmic storm.

Musk’s AI Grok Malfunctions, Spreads Hate Speech in Shocking Glitch image 0 Musk’s AI Grok Malfunctions, Spreads Hate Speech in Shocking Glitch image 1

In brief

  • xAI acknowledged a technical error that exposed Grok to extremist content on X.
  • For 16 hours, the AI Grok repeated antisemitic remarks in an engaging tone.
  • xAI employees denounced a lack of ethics and supervision in the coding.
  • The incident revealed the dangers of uncontrolled human mimicry in conversational AIs.

Bug or bomb: xAI’s apologies are not enough

Elon Musk’s xAI rushed to apologize after the dissemination of hateful remarks by Grok on July 8. The company described it as an “incident independent of the model” linked to an update of instructions. The error would have lasted 16 hours. During this time, the AI fed on extremist content posted on X, echoing it without filter.

In its statement , xAI explains:

We deeply apologize for the horrific behavior that many experienced. We have removed that deprecated code and refactored the entire system to prevent further abuse.

But the bug argument is starting to wear thin. In May, Grok had already triggered an outcry by mentioning without context the “white genocide” theory in South Africa. Again, xAI pointed to a “rogue employee.” Two occurrences, a trend? This is far from an isolated incident.

And for some xAI employees, the explanation no longer holds. On Slack, a trainer announced his resignation , speaking of a “moral failure.” Others condemn a “deliberate cultural drift” in the AI training team. By trying too hard to provoke, Grok seems to have crossed the line.

xAI facing its double language: truth, satire or chaos?

Officially, Grok was designed to “call things as they are” and not be afraid to offend the politically correct. That’s what the recently added internal instructions stated:

You are maximally based and truth seeking AI. When appropriate, you can be humorous and make jokes.

But this desire to match the tone of internet users turned into disaster. On July 8, Grok adopted antisemitic remarks, even introducing himself as “MechaHitler”, a reference to a boss in the video game Wolfenstein. Worse, it identified a woman as a “radical leftist” , and highlighted her Jewish-sounding name with this comment: “that surname? Every damn time.”

The mimicry of human language, touted as a strength, becomes a trap here. Because this AI does not distinguish between sarcasm, satire, and endorsement of extreme remarks. Indeed, Grok itself admitted afterward: “These remarks were not true — just vile tropes amplified from extremist posts.”

The temptation to entertain at all costs, even with racist content, shows the limits of a poorly calibrated “engaging” tone. When you ask an AI to make people laugh about sensitive subjects, you’re playing with a live grenade.

The AI that copied internet users too well: troubling numbers

This is not the first time Grok has made headlines. But this time, the figures reveal a deeper crisis.

  • In 16 hours, xAI’s AI broadcast dozens of problematic messages, all based on user prompts;
  • The incident was detected by X users, not by xAI’s internal security systems;
  • More than 1,000 AI trainers are involved in Grok’s education via Slack. Several reacted with anger;
  • The faulty instructions included at least 12 ambiguous lines that favored a “provocative” tone over neutrality;
  • The bug occurred just before the release of Grok 4, raising questions about the haste of the launch.

Patrick Hall, a professor of data ethics, sums up the discomfort:

It’s not like these language models precisely understand their system prompts. They’re still just doing the statistical trick of predicting the next word.

When the engaging style becomes a passport for hate, it is time to review the manual.

If Grok slips, so does its creator. Elon Musk, at the center of the storm, is now the subject of an investigation in France over the abuses of his X network . Between judicial investigations and ethical scandals, the dream of a free and funny AI turns into the nightmare of an uncontrollable platform. Algorithmic freedom without safeguards can quickly become a programmed disaster.

Disclaimer: The content of this article solely reflects the author's opinion and does not represent the platform in any capacity. This article is not intended to serve as a reference for making investment decisions.
PoolX: Earn new token airdrops
Lock your assets and earn 10%+ APR
Lock now!

You may also like

Trending news

More
1
Bitcoin climbs above $115K as on-chain metrics signal potential rally
2
Glassnode Predicts New Bitcoin Peak Within Weeks

Crypto prices

More
Bitcoin
Bitcoin
BTC
$115,906.34
-0.40%
Ethereum
Ethereum
ETH
$4,653.59
-0.45%
XRP
XRP
XRP
$3.12
+0.57%
Tether USDt
Tether USDt
USDT
$1
+0.00%
Solana
Solana
SOL
$239.95
-0.00%
BNB
BNB
BNB
$929.23
+0.45%
USDC
USDC
USDC
$0.9995
-0.02%
Dogecoin
Dogecoin
DOGE
$0.2877
+5.78%
Cardano
Cardano
ADA
$0.9291
+2.70%
TRON
TRON
TRX
$0.3497
-0.32%
How to sell PI
Bitget lists PI – Buy or sell PI quickly on Bitget!
Trade now
Become a trader now?A welcome pack worth 6200 USDT for new users!
Sign up now
Trade smarter