New malware uses AI to adapt during attacks, report finds

State-backed hackers are for the first time deploying malware that uses large language models during execution, allowing them to dynamically generate malicious scripts and evade detection, according to new research.

Although cybersecurity experts have in recent years observed hackers using AI for tasks such as expanding the number of victims they reach, researchers at Google said Wednesday that they recently observed malware “that employed AI capabilities mid-execution to dynamically alter the malware’s behavior.”

The trend should be considered a “significant step towards more autonomous and adaptive malware,” the report says.

In June, researchers found experimental dropper malware tracked as PROMPTFLUX that prompts an LLM to rewrite its own source code in order to evade detection.

PROMPTFLUX, which Google said it has taken steps to disrupt, appears to be in a testing phase and does not have the ability to compromise victim networks or devices, according to the report.

Another new malware strain, tracked as PROMPTSTEAL, was used in June by Russia-linked APT28 (also known as BlueDelta, Fancy Bear and FROZENLAKE) against Ukrainian targets. Rather than relying on hard-coded commands, it used an LLM to generate them at runtime. The incident marked Google’s “first observation of malware querying a LLM deployed in live operations,” the report said.

While researchers called these methods experimental, they said they show how threats are changing and how threat actors can “potentially integrate AI capabilities into future intrusion activity.” 

“Attackers are moving beyond ‘vibe coding’ and the baseline observed in 2024 of using AI tools for technical support,” the report says.

The marketplace for AI tools “purpose-built” to fuel criminal behavior is also growing, the report added. Criminals with little technical expertise or money can now find effective tools on underground forums for increasing the sophistication and reach of their attacks, according to the report.

“Many underground forum advertisements mirrored language comparable to traditional marketing of legitimate AI models, citing the need to improve the efficiency of workflows and effort while simultaneously offering guidance for prospective customers interested in their offerings,” the report says.


Suzanne Smalley is a reporter covering privacy, disinformation and cybersecurity policy for The Record. She was previously a cybersecurity reporter at CyberScoop and Reuters. Earlier in her career Suzanne covered the Boston Police Department for the Boston Globe and two presidential campaign cycles for Newsweek. She lives in Washington with her husband and three children.

 
