Bitget App
Trade smarter
Buy cryptoMarketsTradeFuturesEarnWeb3SquareMore
Trade
Spot
Buy and sell crypto with ease
Margin
Amplify your capital and maximize fund efficiency
Onchain
Going Onchain, without going Onchain!
Convert
Zero fees, no slippage
Explore
Launchhub
Gain the edge early and start winning
Copy
Copy elite trader with one click
Bots
Simple, fast, and reliable AI trading bot
Trade
USDT-M Futures
Futures settled in USDT
USDC-M Futures
Futures settled in USDC
Coin-M Futures
Futures settled in cryptocurrencies
Explore
Futures guide
A beginner-to-advanced journey in futures trading
Futures promotions
Generous rewards await
Overview
A variety of products to grow your assets
Simple Earn
Deposit and withdraw anytime to earn flexible returns with zero risk
On-chain Earn
Earn profits daily without risking principal
Structured Earn
Robust financial innovation to navigate market swings
VIP and Wealth Management
Premium services for smart wealth management
Loans
Flexible borrowing with high fund security
Musk’s AI Grok Malfunctions, Spreads Hate Speech in Shocking Glitch

Musk’s AI Grok Malfunctions, Spreads Hate Speech in Shocking Glitch

CointribuneCointribune2025/07/15 02:25
By:Cointribune

Clearly, not everything is rosy for Grok. For a few days now, Elon Musk’s AI has been on everyone’s lips… and not for good reasons. A flood of antisemitic remarks, an alter ego named “MechaHitler,” outraged reactions everywhere on X. Behind this crisis, xAI mentions a faulty technical update. An AI meant to entertain that sows indignation? This raises questions. Between a code bug and an ethics bug, Grok is stirring up a real algorithmic storm.

Musk’s AI Grok Malfunctions, Spreads Hate Speech in Shocking Glitch image 0 Musk’s AI Grok Malfunctions, Spreads Hate Speech in Shocking Glitch image 1

In brief

  • xAI acknowledged a technical error that exposed Grok to extremist content on X.
  • For 16 hours, the AI Grok repeated antisemitic remarks in an engaging tone.
  • xAI employees denounced a lack of ethics and supervision in the coding.
  • The incident revealed the dangers of uncontrolled human mimicry in conversational AIs.

Bug or bomb: xAI’s apologies are not enough

Elon Musk’s xAI rushed to apologize after the dissemination of hateful remarks by Grok on July 8. The company described it as an “incident independent of the model” linked to an update of instructions. The error would have lasted 16 hours. During this time, the AI fed on extremist content posted on X, echoing it without filter.

In its statement , xAI explains:

We deeply apologize for the horrific behavior that many experienced. We have removed that deprecated code and refactored the entire system to prevent further abuse.

But the bug argument is starting to wear thin. In May, Grok had already triggered an outcry by mentioning without context the “white genocide” theory in South Africa. Again, xAI pointed to a “rogue employee.” Two occurrences, a trend? This is far from an isolated incident.

And for some xAI employees, the explanation no longer holds. On Slack, a trainer announced his resignation , speaking of a “moral failure.” Others condemn a “deliberate cultural drift” in the AI training team. By trying too hard to provoke, Grok seems to have crossed the line.

xAI facing its double language: truth, satire or chaos?

Officially, Grok was designed to “call things as they are” and not be afraid to offend the politically correct. That’s what the recently added internal instructions stated:

You are maximally based and truth seeking AI. When appropriate, you can be humorous and make jokes.

But this desire to match the tone of internet users turned into disaster. On July 8, Grok adopted antisemitic remarks, even introducing himself as “MechaHitler”, a reference to a boss in the video game Wolfenstein. Worse, it identified a woman as a “radical leftist” , and highlighted her Jewish-sounding name with this comment: “that surname? Every damn time.”

The mimicry of human language, touted as a strength, becomes a trap here. Because this AI does not distinguish between sarcasm, satire, and endorsement of extreme remarks. Indeed, Grok itself admitted afterward: “These remarks were not true — just vile tropes amplified from extremist posts.”

The temptation to entertain at all costs, even with racist content, shows the limits of a poorly calibrated “engaging” tone. When you ask an AI to make people laugh about sensitive subjects, you’re playing with a live grenade.

The AI that copied internet users too well: troubling numbers

This is not the first time Grok has made headlines. But this time, the figures reveal a deeper crisis.

  • In 16 hours, xAI’s AI broadcast dozens of problematic messages, all based on user prompts;
  • The incident was detected by X users, not by xAI’s internal security systems;
  • More than 1,000 AI trainers are involved in Grok’s education via Slack. Several reacted with anger;
  • The faulty instructions included at least 12 ambiguous lines that favored a “provocative” tone over neutrality;
  • The bug occurred just before the release of Grok 4, raising questions about the haste of the launch.

Patrick Hall, a professor of data ethics, sums up the discomfort:

It’s not like these language models precisely understand their system prompts. They’re still just doing the statistical trick of predicting the next word.

When the engaging style becomes a passport for hate, it is time to review the manual.

If Grok slips, so does its creator. Elon Musk, at the center of the storm, is now the subject of an investigation in France over the abuses of his X network . Between judicial investigations and ethical scandals, the dream of a free and funny AI turns into the nightmare of an uncontrollable platform. Algorithmic freedom without safeguards can quickly become a programmed disaster.

0

Disclaimer: The content of this article solely reflects the author's opinion and does not represent the platform in any capacity. This article is not intended to serve as a reference for making investment decisions.

PoolX: Locked for new tokens.
APR up to 10%. Always on, always get airdrop.
Lock now!