Rep. Yvette Clarke on AI-fueled disinformation: ‘We have not protected ourselves in time for this election cycle’


With election day around the corner, Rep. Yvette Clarke of New York’s 9th congressional district says artificial intelligence and deepfakes are being used to spread disinformation on a massive scale — and inaction from social media platforms and lawmakers has made the situation worse.

“Not enough Americans are knowledgeable about the fact that, unfortunately, our ecosystem is the Wild West,” said Clarke, who serves as vice chair of the House Energy and Commerce Committee, which has a Subcommittee on Communications and Technology, as well as the House Homeland Security Committee. “You can disrupt an election, you can disrupt voting. There are a whole host of ways in which this technology can be used to subvert the democratic process.”

Recorded Future News sat down with Clarke, who has sponsored dozens of cybersecurity-focused bills, in September to discuss the recent nation-state attacks on both presidential campaigns, the rise of AI-fueled disinformation and ransomware attacks on local governments.

This conversation has been lightly edited for length and clarity.

Recorded Future News: There are multiple cybersecurity-focused bills in Congress. In your view, which are most likely to successfully make their way into law, and which are the ones you think are most important?

Rep. Yvette Clarke: I believe the information sharing legislation that was passed and is now law, which gives CISA [the Cybersecurity and Infrastructure Security Agency] the authority to do the work it must do to prevent attacks or to work with entities that have been hit by a ransomware attack, is critical.

That’s the baseline for understanding the movement of bad actors, and probably the best defense we have in terms of being able to prevent future attacks. But the Biden-Harris administration has a 10-year cybersecurity roadmap that offers support and increased regulation, for critical industries in particular, to adopt cybersecurity practices that build on established cybersecurity requirements.

Harmful attacks, including the 2021 Russian-linked attack on the Colonial Pipeline… the administration really has been very focused on the reckless behavior of our adversaries — China, Russia, North Korea. So I share the administration’s commitment to initiate a much-needed shift in liability onto companies to build better security.

The Colonial Pipeline scenario could have been prevented, because we’re just talking about cyber hygiene. It was a circumstance where an old password was not deleted, and our adversaries were able to crack the code. And so it’s really about cyber hygiene, in many cases, in order to prevent this going forward.

I’ve passed legislation for state, local and tribal authorities to be able to build up a robust cybersecurity posture, because we saw that’s sort of the next layer. Can we get into your police department? Can we get into your hospital? Can we get into your school district? And so there’s a lot of work to be done, given the vast nature of our connectivity through the internet. It provides bad actors a wide canvas from which to strike.

RFN: Since the Change Healthcare ransomware attack there has been significant debate over whether the healthcare industry needs laws around minimum cybersecurity standards. Hospitals said the blame lies on technology manufacturers, but do you think the entire healthcare sector needs specific cybersecurity rules? (Editor’s note: since this interview, two senators introduced a cybersecurity bill focused on the healthcare sector.)

YC: I think there need to be mandatory requirements in every industry across the board, because so much of our personal information is in that domain. We’re putting a lot of trust in the entities we rely on to provide a service, to educate us and to inform us. A breach of security could have long-lasting impacts on civil society if folks are reluctant or resistant to standing up a strong cybersecurity posture.

I really believe there has to be a baseline of understanding vulnerability in this space and how to mitigate those vulnerabilities. And if we need to work with local jurisdictions — because it’s a matter of finance, it’s a matter of workforce — then that’s what we need to do. We have already crossed the Rubicon into the technological age. And this, like any other sort of real-world experience, requires a certain level of vigilance and security to protect the American people.

RFN: Does Congress have a role in regulating new technology like AI and quantum computers in terms of how it may affect cybersecurity?

YC: Absolutely. There is no doubt, with these new tools, that we’re a bit behind in terms of modernizing our cybersecurity posture. There are basic fundamentals, like cyber hygiene, that can help to build a robust cybersecurity posture. 

But once we get into the age of quantum computing — we are already in the age of artificial intelligence — attacks will happen at lightning speed. The fact that an entity may have been attacked in prior instances doesn’t mean it will not be targeted in the future with more sophisticated, more rapid types of attacks if those vulnerabilities are perceived.

RFN: Most ransomware gangs are based in Russia or other countries that will not extradite those behind cyberattacks. What other measures can the U.S. government take to hold these threat actors accountable?

YC: Technology has been growing exponentially over the past few years, and we have seen ransomware attacks grow exponentially alongside it. One of the things that is critical here is, quite frankly, getting the trust of our entities, whether they’re in the private sector or public sector, to notify the Department of Homeland Security as soon as they recognize that they have been hit.

One of the things we have been concerned about is any delay in letting us know, letting CISA know, in particular, about such incidents. That helps in terms of attribution. It makes it possible for us to document the ways in which our networks are penetrated, attribute it to the bad actors — whether they’re overseas or domestic — and then create a plan of action to hold those entities accountable. It also lets us share that information with other organizations in the private sector or public sector, so that they can fortify their networks against such types of penetration.

The FBI, of course, is a part of the work that we do in Homeland Security, and being able to go after the bad actors and sanction them — whether it’s a nation-state or an individual — for the crime is going to be paramount.

RFN: In your view, should the U.S. ban ransom payments to ransomware gangs?

YC: We always as a nation have spoken about the fact that — whether it’s a physical kidnapping or a nation-state that captures a U.S. citizen and holds them — it has been a long-standing tradition not to pay ransoms. 

However, we know, particularly when it comes to the private sector, their main concern is their customers or the service that they provide, because oftentimes the ransom hit is accompanied by a denial of service of some sort, and so that sort of urgency around providing that service becomes a challenge. 

And we have had circumstances where the ransom is paid, and that unfortunately puts us at a severe disadvantage in terms of being able to trace the actions of the bad actor and/or learn from the ransomware attack in a way that prevents it in the future. It is really based on the necessity of the entity. But I know that it has been a long-standing tradition not to pay these ransoms.

RFN: Both the Trump and Harris campaigns have been targeted by hackers. In your view will this be a permanent concern for political campaigns going forward, and what can the U.S. government do to better protect campaign infrastructure?

YC: One of the pieces of legislation that I introduced very early on during this election cycle, recognizing where we are in terms of the artificial intelligence posture in the United States, was my REAL Political Advertisements Act. I had done legislation around deepfake technology years ago, recognizing how it can be weaponized to deceive the American people.

We have yet to really stand up a robust requirement to identify deepfakes, to put disclaimers on them, to have real identification in real time. That’s scary, because the deployment of AI and deepfake technology is vast and very, very quick.

You can disrupt an election, you can disrupt voting. There are a whole host of ways in which this technology can be used to subvert the democratic process, and we have not protected ourselves in time for this election cycle. 

I think the education of the public has begun to penetrate, where people are questioning what they see, but not enough Americans are. Not enough Americans are knowledgeable about the fact that, unfortunately, our ecosystem is the Wild West. Within it is false information and deceptive information, whether visual or audio, that they should be questioning and fact-checking to make sure that deception does not lead people to act in ways that undermine our elections.

RFN: In light of recent controversies where misinformation and disinformation are spread rapidly across social media, what can be done to limit how fake information is amplified both by local actors and foreign entities?

YC: We’re still, unfortunately, in those discussions right now. One of the things that I thought was simple enough was for our partners in the social media space, our social media platforms and companies, to require disclaimers. 

If any content has been manipulated or distorted in any way, they should make sure there’s either a watermark that makes it possible for the public to know the content has been manipulated, or a disclaimer that says the information being received is either for entertainment purposes or it’s fake news.

There has to be a role for the platforms. They are publishers, and under any normal circumstance, whether it’s broadcast or other communication vehicles that we have in the United States, they would be required to place those disclaimers out there under the rules of the FCC [Federal Communications Commission].

Unfortunately, that has not been extended to social media platforms, and I think it’s time, it’s definitely time. I think there are far more Americans who have adopted virtual lives than who rely on our traditional cable broadcast for information, and that’s why the role of these platforms has to evolve. The responsibility for making sure that their customers are not deceived has to be put into statute, whether through rule and regulation or through law.

RFN: Are there other cybercrime related issues you think are important right now?

YC: All things AI. Quantum computing, we’re working on it. It hasn’t been deployed as of yet, and it may be months or years away. However, we are in the age of AI, and we cannot leave the American people vulnerable. When you look at the steps that have been taken in the European Union and in other nations to protect their nationals, the U.S. has unfortunately left our society vulnerable.

There are a number of steps being taken on the Hill right now. They’re moving a bit slowly for me. But whatever it takes to get us into the right posture, I’m all in for it.

I serve on the House AI Task Force that was commissioned by Speaker Mike Johnson (R-LA) and Leader Hakeem Jeffries (D-NY), as well as the AI Working Group for the Congressional Black Caucus. We’re getting as much feedback as we can from industry leaders, academics and scientists, so that we have a full and robust understanding of where AI is going and what we can do to regulate it while at the same time encouraging innovation.


Jonathan Greig is a Breaking News Reporter at Recorded Future News. Jonathan has worked across the globe as a journalist since 2014. Before moving back to New York City, he worked for news outlets in South Africa, Jordan and Cambodia. He previously covered cybersecurity at ZDNet and TechRepublic.
