OpenAI disrupts 20 campaigns misusing its tech as federal officials mull international use of AI


OpenAI said it has disrupted more than 20 operations this year by nation-states and their affiliates that sought to abuse its technology for a range of malicious activity. 

The AI giant published a 54-page report Wednesday detailing efforts by actors from China, Iran, Russia, Israel and other countries to do everything from writing more sophisticated malware code to rewriting phishing emails and Twitter posts. 

One prominent example involved an Iranian group named CyberAv3ngers, which caused alarm last year after carrying out several attacks on U.S. water facilities. The group targeted poorly secured industrial control devices made by Unitronics, an Israel-based company. 

OpenAI researchers Ben Nimmo and Michael Flossman said the company banned accounts connected to CyberAv3ngers, which U.S. officials have tied to Iran’s Islamic Revolutionary Guard Corps.

“Much of the behavior observed on ChatGPT consisted of reconnaissance activity, asking our models for information about various known companies or services and vulnerabilities that an attacker would have historically retrieved via a search engine. We also observed these actors using the model to help debug code,” the researchers said. 

“The tasks the CyberAv3ngers asked our models in some cases focused on asking for default username and password combinations for various programmable logic controllers (PLCs). In some cases, the details of these requests suggested an interest in, or targeting of, Jordan and Central Europe.”

U.S. law enforcement agencies previously said the group was able to break into U.S. water systems through default username-password combinations. 

CyberAv3ngers also used ChatGPT to ask “high-level questions about how to obfuscate malicious code, how to use various security tools often associated with post-compromise activity, and for information on both recently disclosed and older vulnerabilities from a range of products,” according to OpenAI.

OpenAI claimed CyberAv3ngers’ use of ChatGPT did not provide them “with any novel capability, resource, or information, and only offered limited, incremental capabilities that are already achievable with publicly available, non-AI powered tools.”

OpenAI did not respond to requests for comment about how it made that determination. In one of last year's attacks, the hackers forced the Municipal Water Authority of Aliquippa in Pennsylvania to take systems offline and switch to manual operations in order to eliminate any risk to the municipality's water supply.

Several other utilities shared images of PLCs taken over by CyberAv3ngers, with messages left by the hackers saying “You have been hacked, down with Israel. Every equipment ‘made in Israel’ is CyberAv3ngers legal target.”

Throughout the fall, the Cybersecurity and Infrastructure Security Agency (CISA) worked to identify water utility operators using Unitronics devices and notified them of the campaign, urging them to change the default passwords set on the devices. 
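For defenders, the first step in acting on that guidance is simply finding exposed devices. Below is a minimal, hypothetical Python sketch of that inventory step: it sweeps an address range for hosts answering on TCP port 20256, the Unitronics PCOM service port flagged in public reporting on the campaign. The subnet value is a placeholder for a network you own, and the script only tests reachability; confirming that a device's default password has actually been changed is a manual follow-up.

```python
# Hypothetical defensive sketch: sweep your own network for hosts exposing
# TCP port 20256, the PCOM port used by Unitronics Vision-series PLCs.
# The subnet below is a placeholder -- point it at address space you control.
import ipaddress
import socket

SUBNET = "192.168.1.0/24"   # placeholder: your OT/ICS network range
PCOM_PORT = 20256           # Unitronics PCOM service port
TIMEOUT = 0.5               # seconds per connection attempt

def port_open(host: str, port: int) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=TIMEOUT):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for addr in ipaddress.ip_network(SUBNET).hosts():
        if port_open(str(addr), PCOM_PORT):
            # Any hit is a candidate for a password audit and firewalling.
            print(f"{addr}: PCOM port {PCOM_PORT} reachable -- verify the "
                  f"default password has been changed and restrict access")
```

A sweep like this only identifies candidates; per CISA's guidance, flagged devices should also be taken off the open internet or placed behind a firewall in addition to having their default credentials changed.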

OpenAI said organizations in dozens of other countries, including Iran and Israel, used ChatGPT for operations against rivals, using the platform to generate social media posts meant to spread misinformation, write fake articles for websites and more. 

Image: CISA Chief AI Officer Lisa Einstein speaking at Recorded Future’s Predict conference Wednesday.

The report came on the same day several senior U.S. officials discussed the global implications of artificial intelligence from a cybersecurity perspective. 

Lisa Einstein, chief AI officer at CISA, told the audience at Recorded Future’s Predict cybersecurity conference on Wednesday that she had just returned from an AI-focused tabletop exercise run by the Joint Cyber Defense Collaborative (JCDC), a threat information-sharing organization with members from the private sector and government agencies.

“AI companies are part of the IT sector that’s part of critical infrastructure, and they need to understand how they can share information with CISA and with each other in the wake of possible AI incidents or threats,” she said. 

“What we hope is that that community will be able to keep building this muscle memory of collaboration, because a terrible time to make new collaborations is during a crisis. We need to have these strong relationships increase trust ahead of whatever crisis might happen.”

Einstein said she is concerned that the rush to create AI products has made security an afterthought, repeating mistakes made when earlier technologies such as the internet and social media were introduced.  

Einstein warned that we may be “rapidly complexifying the threat landscape in a way that doesn’t actually match the benefit that [AI is] providing.”

Jennifer Bachus, principal deputy assistant secretary at the U.S. State Department, added that the U.S. is trying to lean into the idea that AI can help other countries create jobs and prosperity while advancing some sustainable development goals. 

She echoed Einstein in raising concerns about AI’s use, noting that the U.S. has tried to push other countries to think about the issues of bias and discrimination when regulating the technology. 

But she was frank about the way U.S. adversaries are painting efforts to regulate AI. 

“You also have to be realistic about the playbook that our adversaries are going to use, which is to say — and they did this with cyber — ‘the United States or the Western world is just trying to create these guardrails to keep you from having access. This is all about keeping it all to themselves,’” she said. 

The U.S. tried to get around this by organizing an AI For Good event on the sidelines of the U.N. General Assembly last month with the goal of creating a coalition of countries to adopt the U.S. vision for AI, Bachus said. 

“Because I think if I have any big concern about AI, it’s the surveillance state portion of it,” she explained.

“The more we can convince countries that that is not the way to use AI, that in fact, you need to create an environment where people can thrive, people can have privacy, people can feel respected by their government, and that the government can essentially develop better services for their people. I think that’s the vision of AI I want to have.”


Jonathan Greig is a Breaking News Reporter at Recorded Future News. Jonathan has worked across the globe as a journalist since 2014. Before moving back to New York City, he worked for news outlets in South Africa, Jordan and Cambodia. He previously covered cybersecurity at ZDNet and TechRepublic.
