OpenAI and Microsoft join forces to prevent state-linked cyberattacks
OpenAI, the developer of the artificial intelligence (AI) chatbot ChatGPT, has worked with its largest investor, Microsoft, to disrupt five cyberattack attempts linked to state-affiliated malicious actors.
According to a report released on Wednesday, Microsoft has been tracking hacking groups linked to Russian military intelligence, Iran’s Revolutionary Guard, and the Chinese and North Korean governments, which it says have been exploring the use of AI large language models (LLMs) in their hacking strategies.
LLMs are trained on vast text data sets to generate human-like responses.
We disrupted five state-affiliated malicious cyber actors’ use of our platform. Work done in collaboration with Microsoft Threat Intelligence Center. https://t.co/xpEeQDYjrQ
— OpenAI (@OpenAI) February 14, 2024
OpenAI reported that two of the five actors, Charcoal Typhoon and Salmon Typhoon, are associated with China. Crimson Sandstorm was linked to Iran, Emerald Sleet to North Korea and Forest Blizzard to Russia.
The groups tried to employ GPT-4 to research companies and cybersecurity tools, debug code, generate scripts, conduct phishing campaigns, translate technical papers, evade malware detection and study satellite communication and radar technology, according to OpenAI. The accounts were deactivated upon detection.
The company disclosed the findings as it implemented a blanket ban on state-backed hacking groups using its AI products. While OpenAI effectively disrupted these incidents, it acknowledged the difficulty of preventing every malicious use of its programs.
Following a surge of AI-generated deepfakes and scams after the launch of ChatGPT, policymakers stepped up scrutiny of generative AI developers. In June 2023, OpenAI announced a $1 million cybersecurity grant program to enhance and measure the impact of AI-driven cybersecurity technologies.
Despite OpenAI’s cybersecurity efforts and its safeguards to prevent ChatGPT from generating harmful or inappropriate responses, hackers have found ways to bypass these measures and manipulate the chatbot into producing such content.
More than 200 entities, including OpenAI, Microsoft, Anthropic and Google, recently collaborated with the Biden administration to establish the AI Safety Institute and the United States AI Safety Institute Consortium (AISIC). The bodies were created as a result of President Joe Biden’s executive order on AI safety in late October 2023, which aims to promote the safe development of AI, combat AI-generated deepfakes and address cybersecurity issues.