
State-backed hackers from Russia, China, and Iran have reportedly leveraged tools from Microsoft-backed OpenAI to enhance their hacking capabilities and deceive their targets, as outlined in a report published on Wednesday.

Microsoft (MSFT.O) revealed that it had monitored hacking groups affiliated with Russian military intelligence, Iran’s Revolutionary Guard, and the Chinese and North Korean governments as they utilized large language models to refine their hacking strategies.

These large language models, a form of artificial intelligence, are trained on vast amounts of text and generate human-like responses.

The report delineated distinct approaches adopted by hacking groups utilizing large language models. For instance, hackers associated with Russia’s GRU allegedly researched satellite and radar technologies relevant to military operations in Ukraine. North Korean hackers employed these models to create content for spear-phishing campaigns against regional experts, while Iranian hackers utilized them to craft more convincing emails, including attempts to lure prominent feminists to malicious websites.

In response to these findings, Microsoft announced a comprehensive ban on state-backed hacking groups’ access to its AI products. Microsoft’s Vice President for Customer Security, Tom Burt, emphasized the company’s proactive stance, stating that irrespective of any legal or terms-of-service violations, the company aims to prevent identified threat actors from using the technology.

Officials from Russia, North Korea, and Iran have not yet responded to requests for comment on the allegations. However, China’s U.S. embassy spokesperson, Liu Pengyu, rejected what he called baseless accusations against China and advocated for the responsible deployment of AI technology to benefit humanity.

The revelation that state-backed hackers have employed AI tools to bolster their espionage capabilities underscores growing concerns about the widespread adoption of such technology and its potential for misuse. Western cybersecurity authorities have long warned about the misuse of AI by malicious actors, although detailed instances have been scarce until now.

Bob Rotsted, who heads cybersecurity threat intelligence at OpenAI, noted the significance of the disclosure, calling it one of the first instances of an AI company publicly addressing how cybersecurity threat actors exploit AI technologies. Both OpenAI and Microsoft characterized the hackers’ use of their AI tools as “early-stage” and “incremental,” with no reported breakthroughs.

Microsoft also identified Chinese state-backed hackers experimenting with large language models, employing them to inquire about rival intelligence agencies, cybersecurity matters, and notable individuals. However, neither Burt nor Rotsted disclosed the extent of activity or the number of accounts suspended.

Burt defended the zero-tolerance ban on hacking groups, highlighting the novelty and potency of AI technology and the associated concerns regarding its deployment. He emphasized the need for vigilance and proactive measures to mitigate potential risks associated with AI in cybersecurity.
