OpenAI Blocks 20 Global Malicious Campaigns Using AI for Cybercrime and Disinformation


OpenAI on Wednesday said it has disrupted more than 20 operations and deceptive networks across the world that attempted to use its platform for malicious purposes since the start of the year.

This activity encompassed debugging malware, writing articles for websites, generating biographies for social media accounts, and creating AI-generated profile pictures for fake accounts on X.

“Threat actors continue to evolve and experiment with our models, but we have not seen evidence of this leading to meaningful breakthroughs in their ability to create substantially new malware or build viral audiences,” the artificial intelligence (AI) company said.

It also said it disrupted activity that generated social media content related to elections in the U.S., Rwanda, and, to a lesser extent, India and the European Union, and that none of these networks attracted viral engagement or sustained audiences.

This included efforts undertaken by an Israeli commercial company named STOIC (also dubbed Zero Zeno) that generated social media comments about Indian elections, as disclosed by Meta and OpenAI this past May.

Some of the cyber operations highlighted by OpenAI are as follows –

- SweetSpecter, a suspected China-based adversary that leveraged OpenAI's services for LLM-informed reconnaissance, vulnerability research, scripting support, anomaly detection evasion, and development. It has also been observed conducting unsuccessful spear-phishing attempts against OpenAI employees to deliver the SugarGh0st RAT.
- Cyber Av3ngers, a group affiliated with the Iranian Islamic Revolutionary Guard Corps (IRGC), which used OpenAI's models to conduct research into programmable logic controllers.
- Storm-0817, an Iranian threat actor that used OpenAI's models to debug Android malware capable of harvesting sensitive information, develop tooling to scrape Instagram profiles via Selenium, and translate LinkedIn profiles into Persian (a minimal sketch of this style of Selenium scraping appears after this list).
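
To make the Selenium detail concrete, here is a minimal, hedged sketch of headless profile-page scraping in Python. The target URL, browser options, and extracted fields are illustrative assumptions; OpenAI's report does not describe Storm-0817's actual code.

```python
# Illustrative sketch only: a generic headless Selenium scrape of a public
# profile page. The URL and extracted fields are hypothetical, not details
# taken from OpenAI's report.
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.by import By

def scrape_profile(url: str) -> dict:
    """Load a page in headless Chrome and pull a few visible fields."""
    opts = Options()
    opts.add_argument("--headless=new")  # run Chrome without a window
    driver = webdriver.Chrome(options=opts)
    try:
        driver.get(url)
        driver.implicitly_wait(10)  # wait up to 10s for elements to appear
        return {
            "title": driver.title,
            # Tag-level selector used as a placeholder; scraping a real site
            # would require inspecting its DOM for stable selectors.
            "headings": [h.text for h in driver.find_elements(By.TAG_NAME, "h1")],
        }
    finally:
        driver.quit()

if __name__ == "__main__":
    print(scrape_profile("https://example.com/profile"))  # hypothetical URL
```

Scraping an authenticated platform like Instagram would additionally involve session handling, details omitted here; automated collection of this kind also typically violates the platform's terms of service.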

Elsewhere, the company said it took steps to block several clusters of accounts, including influence operations codenamed A2Z and Stop News, that generated English- and French-language content for subsequent posting on a number of websites and social media accounts across various platforms.

“[Stop News] was unusually prolific in its use of imagery,” researchers Ben Nimmo and Michael Flossman said. “Many of its web articles and tweets were accompanied by images generated using DALL·E. These images were often in cartoon style, and used bright color palettes or dramatic tones to attract attention.”

Two other networks identified by OpenAI, Bet Bot and Corrupt Comment, were found to use its API: the former to generate conversations with users on X and send them links to gambling sites, and the latter to manufacture comments that were then posted on X.

The disclosure comes nearly two months after OpenAI banned a set of accounts linked to an Iranian covert influence operation called Storm-2035 that leveraged ChatGPT to generate content that, among other things, focused on the upcoming U.S. presidential election.

“Threat actors most often used our models to perform tasks in a specific, intermediate phase of activity — after they had acquired basic tools such as internet access, email addresses and social media accounts, but before they deployed ‘finished’ products such as social media posts or malware across the internet via a range of distribution channels,” Nimmo and Flossman wrote.

Cybersecurity company Sophos, in a report published last week, said generative AI could be abused to disseminate tailored misinformation by means of microtargeted emails.

This entails abusing AI models to concoct political campaign websites, AI-generated personas across the political spectrum, and email messages that target recipients based on the campaign's talking points, thereby allowing for a level of automation that makes it possible to spread misinformation at scale.

“This means a user could generate anything from benign campaign material to intentional misinformation and malicious threats with minor reconfiguration,” researchers Ben Gelman and Adarsh Kyadige said.

“It is possible to associate any real political movement or candidate with supporting any policy, even if they don’t agree. Intentional misinformation like this can make people align with a candidate they don’t support or disagree with one they thought they liked.”
