British government quietly sacks entire board of independent AI advisers


The British government has quietly sacked an independent advisory board of eight experts that had once been poised to hold public sector bodies to account for how they used artificial intelligence technologies and algorithms to carry out official functions.

It comes as Prime Minister Rishi Sunak drives forward with a much-publicized commitment to make the United Kingdom a world leader in AI governance, and ahead of a global AI Safety Summit being arranged for November in Bletchley Park.

Sunak’s focus on AI governance has centered on what critics say are the more headline-grabbing existential concerns raised by entrepreneurs, rather than the technology’s current uses in Britain, such as predicting welfare fraud and analyzing sexual crime convictions.

The more pragmatic issues were the focus of the Centre for Data Ethics and Innovation’s (CDEI) advisory board — which was disbanded earlier this month without a public announcement. While the board’s webpage now states that it has been closed, the government updated the page in a way that did not send an email alert to those subscribed to the topic.

When it was created in 2018, the CDEI was initially touted as something that could become an independent body with the statutory ability to scrutinize the public sector’s use of algorithms and AI, but this concept appears to have fallen out of favor among ministers amidst multiple changes in government.

Instead, following the attention AI garnered in the wake of the release of ChatGPT, Number 10 has launched a Frontier AI Taskforce that has described a number of “frontier” concerns about the technology — including how an “AI system that advances towards human ability at writing software could increase cybersecurity threats,” as well as how an “AI system that becomes more capable at modelling biology could escalate biosecurity threats” — both of which fall into domains covered by existing national authorities.

The taskforce is being led by Ian Hogarth, a venture capitalist who warned in the FT magazine earlier this year of the need to “slow down the race to God-like AI.” He expressed concerns about artificial general intelligence (AGI), a hypothetically autonomous AI system with superhuman capabilities, and said “it will likely take a major misuse event — a catastrophe — to wake up the public and governments” to the risks of AI.

As part of that article, Hogarth argued that there was not much investment going into AI safety measures, although he himself had made such investments. Questioned last week by a House of Lords committee about potential financial conflicts of interest, Hogarth said he had been “divesting a load of valuable positions” due to his role on the taskforce, which has a £100 million budget to support its work.

Hogarth’s concerns about an artificial general intelligence have been questioned by others in the sector, including Professor Neil Lawrence of the University of Cambridge — the interim chair of the CDEI advisory board, who also appeared before the Lords committee beside Hogarth. Lawrence told Recorded Future News: “I think it’s a misleading framing, because even if you accept the AGI idea, the question is: What pragmatically do you need to do about it now in terms of regulation and governance?”

Another former member of the board, who spoke to Recorded Future News on the condition of anonymity to speak freely about their experiences, said: “There’s a difference between safety in the way that the Frontier Taskforce is talking about it, and the more general views of safety and governance that others might have. They’re very focused on generative AI and longer-term national security issues that they have yet to really define. Whereas the CDEI has been focusing very much on day-to-day existing uses of data analytics and machine learning, actual tools that are being used.”

Disbanding the CDEI Advisory Board

A former senior official at the CDEI, speaking to Recorded Future News anonymously so they could discuss government matters, said that at the time it was founded “the UK had a really credible claim to say that we were, in terms of thought leadership and capacity building, ahead of just about anyone else in the world when it came to thinking around AI governance and the policy implications.”

But by the time the CDEI was on its fourth prime minister and its seventh secretary of state, the body’s purpose had become much less clear to government. “They weren’t invested in what we were doing. That was part of a wider malaise where the Office for AI was also struggling to gain any traction with the government, and it had white papers delayed and delayed and delayed,” said the senior official.

Establishing the CDEI’s independence was a particular challenge. “At our inception there was a question over whether we would be moved out of government and put on a statutory footing, or be an arm’s length body, and the assumption was that was where we were headed,” the official said. Instead, the CDEI was brought entirely within the Department for Science, Innovation and Technology earlier this year.

There has been little political will to compel public sector organizations to buy in to the CDEI’s governance work. One of its most mature projects, the Algorithmic Transparency Recording Standard, was intended to “support public sector bodies providing information about the algorithmic tools they use in decision-making processes that affect members of the public.”

The CDEI advisory board member said that the standard had not been adopted extensively by central government and “wasn’t promoted in the AI White Paper,” in particular. “I was really quite surprised and disappointed by that,” they added.

Lawrence told Recorded Future News he had “strong suspicions” about the advisory board being disbanded, but said “there was no conversation with me” prior to it taking place.

The other board member said: “As an advisory board, we worked in a manner that kept minutes and was transparent. I assumed that the board was going to continue, but on short notice, around August, we were told that basically the board would be wound up and a new approach would be taken — so [in the future] when advice is needed on a particular project, a specific expert could be contacted from a pool of experts [the government was putting together.]”

Unlike the government’s pool of experts, the appointments to the advisory board were made through the standard public appointments process. “We were quite a diverse group in terms of our backgrounds and expertise. That helped give the CDEI its independence. Without that I’m not sure what is going to happen to the CDEI in the longer term.”

A spokesperson for the Department for Science, Innovation and Technology told Recorded Future News: “The CDEI Advisory Board was appointed on a fixed term basis and with its work evolving to keep pace with rapid developments in data and AI, we are now tapping into a broader group of expertise from across the Department beyond a formal Board structure.

“This will ensure a diverse range of opinion and insight, including from former board members, can continue to inform its work and support government’s AI and innovation priorities.”


Alexander Martin is the UK Editor for Recorded Future News. He was previously a technology reporter for Sky News and is also a fellow at the European Cyber Conflict Research Initiative.

 
