Artificial Intelligence is being used to manipulate elections, OpenAI raises alarm

Cryptopolitan · 2024/10/10 22:48
By Hannah Collymore

In this post:
- OpenAI stated in its latest report that its chatbot has become a leading medium for creating misinformation ahead of national elections.
- OpenAI emphasizes the need for strong security measures and robust regulatory frameworks to slow down AI-powered, state-sponsored election manipulation campaigns.
- So far, AI-generated content has not been convincing enough to sway political opinions.

OpenAI’s report stated that its models are being used to influence elections, and that the company had taken down over 20 operations that relied on its AI models to carry out such malicious activities.

The OpenAI report, “An update on disrupting deceptive uses of AI,” also emphasized the need for vigilance when engaging with political content.

The document showed a trend of OpenAI’s models becoming a major tool for election disruption and the spread of political misinformation. Bad actors, often state-sponsored, use these models for activities ranging from generating content for fake social media personas to reverse-engineering malware.

OpenAI’s growing influence in AI elections and politics

In late August, OpenAI disrupted an Iranian campaign that was producing social media content to sway opinions in US elections, Venezuelan politics, the Gaza conflict, and Israel. It reported that some accounts, which were subsequently banned, were also posting about Rwandan elections. 

It also found that an Israeli company had attempted to manipulate poll results in India.

However, OpenAI noted that these activities have not gone viral or cultivated substantial audiences. Social media posts related to these campaigns gained minimal traction. This could indicate the difficulty in swaying public opinion through AI-powered misinformation campaigns. 


Historically, political campaigns have often been fueled by misinformation from the competing sides. The advent of AI, however, presents a new kind of threat to the integrity of political systems. The World Economic Forum (WEF) called 2024 a historic year for elections, with 50 countries holding them.

LLMs already in everyday use can create and spread misinformation faster and more convincingly than earlier tools.

Regulation and collaborative efforts

In response to this potential threat, OpenAI said it is working with relevant stakeholders by sharing threat intelligence. It expects this collaborative approach to be sufficient for policing misinformation channels and fostering ethical AI use, especially in political contexts.

OpenAI reports, “Notwithstanding the lack of meaningful audience engagement resulting from this operation, we take seriously any efforts to use our services in foreign influence operations.” 

The AI firm also stressed that robust security defenses must be built to stop state-sponsored cyberattackers who use AI to run deceptive and disruptive online campaigns.

The WEF has also highlighted the need to put AI regulations in place, saying, “International agreements on interoperable standards and baseline regulatory requirements will play an important part in enabling innovation and improving AI safety.” 

Developing effective frameworks requires strategic partnerships between tech companies such as OpenAI, the public sector, and private stakeholders, which will help implement ethical AI systems.


