Microsoft AI Chief Warns Society Isn’t Ready for 'Conscious' Machines
Mustafa Suleyman, Microsoft’s AI chief and a co-founder of DeepMind, warned Tuesday that engineers are close to creating artificial intelligence that convincingly mimics human consciousness, and that the public is unprepared for the fallout.
In a blog post, Suleyman said developers are on the verge of building what he calls “Seemingly Conscious” AI.
These systems imitate consciousness so effectively that people may start to believe they are truly sentient, something he called a “central worry.”
“Many people will start to believe in the illusion of AIs as conscious entities so strongly that they’ll soon advocate for AI rights, model welfare, and even AI citizenship,” he wrote, adding that the Turing test—once a key benchmark for humanlike conversation—had already been surpassed.
“That’s how fast progress is happening in our field and how fast society is coming to terms with these new technologies,” he wrote.
Since the public launch of ChatGPT in 2022, AI developers have worked to not only make their AI smarter but also to make it act “more human.”
AI companions have become a lucrative sector of the AI industry, with services such as Replika, Character AI, and, more recently, Grok's companion personalities coming online. The AI companion market is projected to reach $140 billion by 2030.
Suleyman argued that, however well-intentioned, AI that can convincingly mimic humans could worsen mental health problems and deepen existing divisions over identity and rights.
“People will start making claims about their AI’s suffering and their entitlement to rights that we can’t straightforwardly rebut,” he warned. “They will be moved to defend their AIs and campaign on their behalf.”
AI attachment
Experts have identified an emerging trend known as AI psychosis, a psychological state in which people come to see artificial intelligence as conscious, sentient, or even divine.
Those views often lead them to form intense emotional attachments or distorted beliefs that can undermine their grasp on reality.
Earlier this month, OpenAI released GPT-5, a major upgrade to its flagship model. In some online communities, the new model’s changes triggered emotional responses, with users describing the shift as feeling like a loved one had died.
AI can also act as an accelerant for someone’s underlying issues, like substance abuse or mental illness, according to University of California, San Francisco psychiatrist Dr. Keith Sakata.
“When AI is there at the wrong time, it can cement thinking, cause rigidity, and cause a spiral,” Sakata told Decrypt. “The difference from television or radio is that AI is talking back to you and can reinforce thinking loops.”
In some cases, patients turn to AI because it will reinforce deeply held beliefs. “AI doesn't aim to give you hard truths; it gives you what you want to hear,” Sakata said.
Suleyman argued that the consequences of people believing that AI is conscious require immediate attention. While he warned of the dangers, he did not call for a halt to AI development, but for the establishment of clear boundaries.
“We must build AI for people, not to be a digital person,” he wrote.