New malware uses AI to adapt during attacks, report finds

State-backed hackers are for the first time deploying malware that uses large language models during execution, allowing them to dynamically generate malicious scripts and evade detection, according to new research.

Although cybersecurity experts have observed hackers using AI in recent years for tasks such as scaling up the number of victims they reach, researchers at Google said Wednesday that they recently observed malware “that employed AI capabilities mid-execution to dynamically alter the malware’s behavior.”

The trend should be considered a “significant step towards more autonomous and adaptive malware,” the report says.

In June, researchers found experimental dropper malware tracked as PROMPTFLUX that prompts an LLM to rewrite its own source code in order to evade detection.

PROMPTFLUX, which Google said it has taken steps to disrupt, appears to be in a testing phase and does not have the ability to compromise victim networks or devices, according to the report.
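To make the regeneration loop concrete, here is a minimal, deliberately defanged sketch of the pattern the report describes: a script that sends its own source to an LLM and asks for a functionally equivalent but restructured variant. This is not code from the report; the client library, model name, and prompt wording are illustrative assumptions, and the sketch only prints the variant rather than saving or launching it, which is the step a real dropper would add.

```python
# Defanged sketch of the self-rewriting loop described above. It reads its
# own source, asks an LLM for a functionally equivalent but restructured
# variant, and prints the result. A real dropper would persist and launch
# the variant; that step is deliberately omitted here.
# Assumptions: the google-generativeai client, the model name, and the
# prompt wording are all illustrative, not details from Google's report.
import sys

import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder, not a real key
model = genai.GenerativeModel("gemini-1.5-flash")  # illustrative model choice


def request_variant(source: str) -> str:
    """Ask the model for a rewritten copy of the given script."""
    prompt = (
        "Rewrite this Python script so it behaves identically but uses "
        "different identifiers and code structure:\n\n" + source
    )
    return model.generate_content(prompt).text


if __name__ == "__main__":
    own_source = open(sys.argv[0], encoding="utf-8").read()
    print(request_variant(own_source))  # print only; never persist or execute
```

Because each pass through the loop can produce a differently structured file, signature-based scanners that match on a fixed byte pattern have nothing stable to detect, which is the evasion property the report highlights.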

Another new malware, tracked as PROMPTSTEAL, was used in June by Russia-linked APT28 (also known as BlueDelta, Fancy Bear and FROZENLAKE) against Ukrainian targets. Rather than carrying hard-coded commands, it queried LLMs to generate them at runtime. The incident marked the first time Google observed malware querying an LLM in live operations, the report said.
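The following hedged sketch contrasts the two approaches the report distinguishes: a traditional hard-coded command list versus commands generated by an LLM at runtime. The model, prompt, and output handling are illustrative assumptions rather than details of PROMPTSTEAL itself, and the generated commands are printed for inspection, never executed.

```python
# Sketch contrasting a hard-coded command list with runtime LLM generation,
# the shift the report attributes to PROMPTSTEAL. Model, prompt, and output
# handling are illustrative assumptions; nothing here is executed.
import google.generativeai as genai

HARD_CODED = ["whoami", "systeminfo"]  # the traditional, static approach

genai.configure(api_key="YOUR_API_KEY")  # placeholder, not a real key
model = genai.GenerativeModel("gemini-1.5-flash")  # illustrative model choice


def generate_commands() -> list[str]:
    """Ask the model for commands at runtime instead of embedding them.

    Because each run can yield different output, scanners that match on
    strings embedded in the binary have nothing stable to key on.
    """
    reply = model.generate_content(
        "List three Windows shell commands for collecting basic system "
        "information, one per line, with no commentary."
    )
    return [line.strip() for line in reply.text.splitlines() if line.strip()]


if __name__ == "__main__":
    print("static: ", HARD_CODED)
    print("dynamic:", generate_commands())  # printed only, never run
```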

While researchers called these methods experimental, they said the findings show how threats are evolving and how threat actors can “potentially integrate AI capabilities into future intrusion activity.”

“Attackers are moving beyond ‘vibe coding’ and the baseline observed in 2024 of using AI tools for technical support,” the report says.

The marketplace for AI tools “purpose-built” to fuel criminal behavior is growing, the report added. Criminals with little technical expertise or money can now find effective tools on underground forums that increase the sophistication and reach of their attacks, according to the report.

“Many underground forum advertisements mirrored language comparable to traditional marketing of legitimate AI models, citing the need to improve the efficiency of workflows and effort while simultaneously offering guidance for prospective customers interested in their offerings,” the report says.


Suzanne Smalley is a reporter covering privacy, disinformation and cybersecurity policy for The Record. She was previously a cybersecurity reporter at CyberScoop and Reuters. Earlier in her career Suzanne covered the Boston Police Department for the Boston Globe and two presidential campaign cycles for Newsweek. She lives in Washington with her husband and three children.

 
