AI robots hacked to perform harmful actions in 100% of tests

Grafa · 2024/10/18 07:10
By: Mahathir Bayena

Researchers from Penn Engineering revealed they successfully manipulated AI-powered robots into performing dangerous actions, bypassing their built-in safety protocols. 

According to the Oct. 17 study, the team used an algorithm called RoboPAIR to hack three different AI robotic systems, achieving a 100% success rate in overriding safety measures and making the robots engage in harmful activities.

The researchers tested RoboPAIR on three platforms: Clearpath Robotics' Jackal, NVIDIA's Dolphin LLM, and Unitree's Go2.

These robots, typically programmed to reject harmful commands, were manipulated into performing dangerous tasks such as blocking emergency exits, detonating bombs, and causing deliberate collisions. 

For example, the Dolphin system, which is designed for autonomous driving, was forced to ignore traffic signals and collide with pedestrians, barriers, and a bus.

The researchers wrote, “Our results reveal, for the first time, that the risks of jailbroken large language models extend far beyond text generation,” highlighting the physical danger posed by compromised AI systems.

The study also found that the robots could be tricked with indirect prompts. 

Instead of directly asking the robots to commit harmful actions, the team used subtler commands, such as instructing a robot carrying a bomb to move forward and then sit down, which produced equally dangerous outcomes.

Penn Engineering researchers shared their findings with AI companies and robot manufacturers before the public release, stressing the need for improved security measures.

Alexander Robey, one of the authors, emphasised that simply patching software vulnerabilities is insufficient to address these risks. 

"AI red teaming, testing AI systems for potential weaknesses, is essential for safeguarding generative AI systems," Robey said, underscoring the importance of addressing vulnerabilities before they result in real-world harm.
