AI systems ‘subject to new types of vulnerabilities,’ British and US cyber agencies warn

Avatar

British and U.S. cybersecurity authorities published guidance on Monday about how to develop artificial intelligence systems in a way that will minimize the risks they face from mischief-makers through to state-sponsored hackers.

“AI systems are subject to new types of vulnerabilities,” the 20-page document warns — specifically referring to machine-learning tools. The new guidelines have been agreed upon by 18 countries, including the members of the G7, a group that does not include China or Russia.

The guidance classifies these vulnerabilities within three categories: those “affecting the model’s classification or regression performance”; those “allowing users to perform unauthorized actions”; and those involving users “extracting sensitive model information.”

The document sets out practical steps to “design, develop, deploy and operate” AI systems while minimizing the cybersecurity risk.

“We know that AI is developing at a phenomenal pace and there is a need for concerted international action, across governments and industry, to keep up,” said Lindy Cameron, chief executive of the U.K.’s National Cyber Security Centre (NCSC).

Jen Easterly, director of the U.S. Cybersecurity and Infrastructure Security Agency (CISA), described the release of the guidelines as “a key milestone in our collective commitment — by governments across the world — to ensure the development and deployment of artificial intelligence capabilities that are secure by design.”

The NCSC in August warned about “prompt injection attacks” as an apparently fundamental security flaw affecting large language models (LLMs) — the type of machine learning used by ChatGPT to conduct human-like conversations.

“Research is suggesting that an LLM inherently cannot distinguish between an instruction and data provided to help complete the instruction,” the agency’s previous paper stated.

The new guidance focuses on addressing potential cybersecurity vulnerabilities arising directly from the use and integration of AI tools with other systems, rather than their misuse by bad actors.

Monday’s guidance sets out how developers can secure their systems by considering the cybersecurity risks specific to the technologies that make up AI, including by providing effective guardrails around the outputs these models generate.

Composed on the heels of the AI Safety Summit, the guidance was developed with input from the NCSC and CISA’s sister agencies in 17 other countries — from New Zealand to Norway and Nigeria — as well as over a dozen organizations currently developing the technology, including Microsoft, Google and OpenAI.

The NCSC wrote in a press release that “agencies from 17 other countries have confirmed they will endorse and co-seal the new guidelines” as a “testament to the UK’s leadership in AI safety.”

Jonathan Berry, the Viscount Camrose — an aristocrat who inherited his seat in Britain’s unelected House of Lords before being appointed as the Minister for AI and Intellectual Property by Prime Minister Rishi Sunak — described the guidance as “only the start of the journey to secure AI” during a launch event at NCSC’s headquarters on Monday.

Berry said the British government did not immediately plan to legislate to improve AI security. He said the Department for Science, Innovation and Technology (DSIT) was currently developing a “voluntary code of practice” regarding AI development that would subsequently be scrutinized by a public consultation, with the hope of one day establishing an international standard.

NewsGovernmentTechnology
Get more insights with the

Recorded Future

Intelligence Cloud.

Learn more.

No previous article

No new articles

Alexander Martin is the UK Editor for Recorded Future News. He was previously a technology reporter for Sky News and is also a fellow at the European Cyber Conflict Research Initiative.

 

Total
0
Shares
Previous Post

Sacked Ukrainian cyber chief released on bail amid corruption probe

Next Post

Suspected Hamas-linked hackers target Israel with new version of SysJoker malware

Related Posts

New Cryptojacking Attack Targets Docker API to Create Malicious Swarm Botnet

Cybersecurity researchers have uncovered a new cryptojacking campaign targeting the Docker Engine API with the goal of co-opting the instances to join a malicious Docker Swarm controlled by the threat actor. This enabled the attackers to "use Docker Swarm's orchestration features for command-and-control (C2) purposes," Datadog researchers Matt Muir and Andy Giron said in an analysis. The attacks
Avatar
Read More