Antibot4Navalny: ‘We dig deeper to explain what disinformation is trying to do’


Days ago, a story started making the rounds on social media. It claimed that Olena Zelenska, the first lady of Ukraine, had recently purchased a $4.8 million Bugatti Tourbillon while visiting Paris for D-Day celebrations in June.

An unnamed source in the story said she used American military aid money to pay for the car, and the story included what it said was an invoice for the vehicle. The Bugatti dealership in Paris said it was a lie, but by the time it released a statement, it was too late. The story had already gone viral.

These are the kinds of disinformation campaigns that Antibot4Navalny, an anonymous group of disinformation researchers, has been flagging since last fall in a bid to blunt Moscow’s efforts to confuse and misinform.

The Click Here podcast spoke recently by encrypted app with one of the leaders of the group about efforts to unmask Russian bots, their work with global researchers on disinformation, and why some people are saying Antibot4Navalny is punching way above its weight as it takes on the Kremlin. The interview has been edited for clarity and length.

IMAGE: MEGAN J. GOFF

CLICK HERE: What’s the best way to describe Antibot4Navalny?

ANTIBOT: Most people describe us as an anonymous group of analysts tracking Russia-related influence operations on X, formerly Twitter. We’ve been in operation since November 2023, but I personally have been researching Russian disinformation since March 2018.

CH: What makes you different from other anti-disinformation groups?

AB: In a nutshell, we don’t focus on exposing or debunking fake narratives individually, in order to avoid getting on the wrong side of Brandolini’s law: refuting a piece of misinformation takes far more effort than producing it. You can’t take aim at individual stories and be effective. That’s why we chose to expose the channels that are pushing these stories and to dig deeper to explain what the disinformation is trying to do, its underlying agenda, on a regular, systematic basis.

CH: How many of you are doing this? 

AB: We’re a small group. I’m the only one working full time on this. We also count on what I would call enthusiasts who contribute their research on a regular basis. And then in addition to that, we have dozens of loyal followers who give us specialized help when we need it.

CH: And what made you go from disinformation researcher to leading the organization?

AB: Before October 2023, when we really began in earnest as a group, there hadn’t been an occasion to research how Russian influence campaigns were targeting other countries. Our key focus at the time was disinformation targeting Russia and Ukraine. And those were campaigns driven by troll farms, paid humans.

Then in late October of last year, we uncovered a massive bot campaign. Bots [computer software] were posting and reposting a highly produced Russian-language video that was clearly aimed at changing the narrative of the war in Ukraine. 

It was saying two things at the same time: one, that Russia and Ukraine were brothers, and two, that the fighting was essentially breaking up a family. We assumed that it was targeting Russian and Ukrainian audiences.

But a short time later, we could see that the very same bots had widened the aperture and started to target France, Germany, the U.S., Israel and Ukraine all at once. They started promoting fake articles that were meant to convince people to stop sending Western aid to Ukraine.

This seemed to present an opportunity to use all our experience tracking internal Russian information campaigns and help Western audiences know what to expect.

CH: Antibot4Navalny has been tracking Doppelgänger, one of these Russian disinformation groups. Can you talk about them a little bit?

AB: Doppelgänger itself started operating in mid-2022. And then back in October, when we saw these viral posts on X claiming Ukraine’s defeat was imminent, we began to look into it. The articles were being shared on fake websites that looked like well-known news outlets in the West.

We identified the bots behind the campaign, found some unique photos that had not previously been published, and we made it all public. That helped us connect with media outlets like Le Monde and Libération, and other researchers working on the Doppelgänger problem began getting in touch with us.

We discovered all kinds of funny details about the campaign, like the way they named these accounts. They were alphabetic: all the U.S.-associated bots had names starting with D, French ones used names that began with J, and German ones started with R.
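A pattern like that is simple to surface once the account handles are collected. As a purely illustrative sketch (the handles below are invented, and this is not Antibot4Navalny’s actual tooling), a few lines of Python can count first letters within each targeted audience and flag any letter that dominates a group:

```python
from collections import Counter

# Invented account handles, grouped by the audience they appear to target.
# In practice these would come from collected bot-account metadata.
accounts = {
    "US": ["DanielHarper1", "DebbieCole88", "DerekWoods_", "DonnaReyes4"],
    "FR": ["JulienMarec2", "JeanneLbrt", "JosephineVidal9"],
    "DE": ["RalfKirchner3", "RenateBusch_", "RudiSchaefer77"],
}

for audience, handles in accounts.items():
    first_letters = Counter(handle[0].upper() for handle in handles)
    letter, count = first_letters.most_common(1)[0]
    share = count / len(handles)
    if share >= 0.8:  # flag groups where one initial letter dominates
        print(f"{audience}: {share:.0%} of handles start with '{letter}'")
```

A legitimate user population shows a spread of initial letters, so a group where nearly every handle shares one is a cheap but telling anomaly signal.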

CH: What does a typical day look like for you? 

AB: 80% of my time goes to promoting the work we do. I compile new findings, pitch stories to media outlets, and post detailed X threads for our followers. The other 20% is meant for what I think I do best: finding patterns, analyzing content, and automating our day-to-day routine.

However, for the past several months, 0% of my time has actually gone to what I think I do best: exposing new bot and troll crowds and building automated detectors.

The team spends most of their time collecting data on nightly runs of bots. They would benefit most from automation, but we cannot afford it yet.

CH: How do you expose bots and trolls? Is technology changing the way you do it?

AB: Overall, there are two streams of work: exposing a new “crowd” of bots and following the new accounts joining it to analyze trends, narratives and priorities. We focus on finding a few “species” that we suspect are inauthentic in some way and then we find what’s common between them. Then we gather sufficient evidence to prove that the accounts are inauthentic and let the world know. 
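The group hasn’t published its tooling, but the “find what’s common between them” step can be pictured as a straightforward metadata comparison. Here is a minimal, hypothetical sketch (the field names are illustrative, not an actual X API schema): given a handful of suspect accounts, it reports every profile attribute that takes a single value across all of them:

```python
# Invented profile metadata for a handful of suspect accounts.
suspects = [
    {"created": "2023-10-12", "followers": 3, "default_avatar": True, "lang": "fr"},
    {"created": "2023-10-12", "followers": 5, "default_avatar": True, "lang": "fr"},
    {"created": "2023-10-12", "followers": 2, "default_avatar": True, "lang": "fr"},
]

# Collect every field whose value is identical across all suspect accounts.
shared = {}
for field in suspects[0]:
    values = {account[field] for account in suspects}
    if len(values) == 1:
        shared[field] = values.pop()

print("Traits shared by every suspect account:", shared)
# e.g. {'created': '2023-10-12', 'default_avatar': True, 'lang': 'fr'}
```

A cluster of accounts created on the same day, with default avatars and matching settings, is exactly the kind of commonality that then justifies gathering deeper evidence.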

Because we track and record the content they promote and/or the topics they comment on, we get a lot of coverage. 

To make this work at scale, machine learning used to help dramatically, until Twitter discontinued free access to its application programming interface (API). We are still struggling to recover.
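One widely used signal in this kind of automation (a generic technique, not necessarily the group’s own pipeline) is flagging near-duplicate posts across accounts, since coordinated bots often push the same text with small edits. A minimal sketch, assuming scikit-learn is available and using invented posts:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Invented posts, as if collected from three different accounts.
posts = [
    "Western aid to Ukraine is wasted, stop sending money now",
    "Stop sending money now, Western aid to Ukraine is wasted!",
    "Had a great croissant in Paris this morning",
]

# Represent each post as a TF-IDF vector and compare all pairs.
vectors = TfidfVectorizer().fit_transform(posts)
similarity = cosine_similarity(vectors)

for i in range(len(posts)):
    for j in range(i + 1, len(posts)):
        if similarity[i, j] > 0.8:  # near-identical wording across accounts
            print(f"posts {i} and {j} look coordinated "
                  f"(similarity {similarity[i, j]:.2f})")
```

Because TF-IDF ignores word order, the first two posts score as near-duplicates despite the shuffled phrasing, while the unrelated post does not.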

What’s important to understand is that the point isn’t really just to look at what bots are writing about or what their specific talking points are. What they are trying to accomplish is more subtle than that. Bots are about introducing uncertainty and confusion — to undermine not a particular story, but news more generally, to disrupt the conversation itself. That’s why they bring in as many talking points and perspectives as possible, even if they are contradicting each other. It adds to the confusion.

CH: How have disinformation groups, like Doppelgänger, transformed over the past few years?

AB: Doppelgänger and other influence operators are constantly experimenting in order to work around social media abuse protection measures (and X is struggling to catch up with those changes); X is becoming increasingly less transparent and accessible for researchers; and Doppelgänger seems to be learning from its own mistakes.

For example, the recurring pattern is: a few citizens of a third country are hired to do something on the ground that favors the Kremlin’s interests or agenda; a few days later, Doppelgänger bots massively promote it. It might take aim at an official, or chip away at support for Ukraine or some other targeted country.

Now, it seems like Doppelgänger is learning from its own experience when covering on-the-ground influence operations.

Last fall, Doppelgänger bots promoted unique photos of Stars of David in Paris that had never been published before. That was very strong evidence of a connection between Doppelgänger operators and the people behind the offline operation. Later, their bots promoted a piece on Doppelgänger’s own original site (artichoc[.]io) that used a broadly circulated AFP photo of red handprints at a memorial, which helped with “plausible deniability.” Then the bots promoted a publication by Le Figaro, a legitimate, reputable media outlet, which made the tweets posted by the bots look more authentic.

CH: What have people gotten wrong about bots and their operations?

AB: The most common misconception is that the key goal of bots is to promote a specific set of talking points to make an audience believe something specific.

In reality, the biggest achievement of influence operations based on trolls-for-hire is, in our opinion, that regular users suspect each other to be pro-Russian, pro-China, pro-Iran, what have you. Once they encounter someone from an opposing point of view, they prefer to stop the conversation altogether. In a sense, the Godwin law is not there any more. It was replaced with “you’re a troll-for-hire.”

The biggest achievement of FIMI (Foreign Information Manipulation and Interference), as well as of domestic troll farms in Russia, is that it ruined the benefit of the doubt. Regular users stopped trusting each other, especially those holding views different from their own. Polarization and atomization deepened; it became increasingly difficult to find tactical allies for the sake of a common goal among people with differing views. It’s “divide and conquer” at its best.

CH: How do we fix it?

AB: There are some options to explore: make social media companies’ user-generated data as widely and freely available to researchers as possible; encourage third-party developers to build an ecosystem of analysis tools and libraries; and have social networks give users tools that help them analyze accounts they have never encountered before.

CH: What are your proudest achievements?

AB: There are several. Among them: we exposed Matryoshka, a completely new influence operation that had not been researched before we found it. Following our initial exposure, it was researched further by other organizations.

We also collected what we believe is one of the three largest datasets on Doppelgänger bot activity, which can be made available to journalists for analysis and reporting. We collected over 3,500 articles that were promoted by social media bots on X, along with all the relevant evidence out there.

CH: What do you make of all the media interest in the work you’ve done?

AB: We were surprised to see how incredibly interested the media is in Russian disinformation influence campaigns. In just over six months, we were quoted in about 60 stories by non-Russian media in relation to the Russian state’s FIMI alone.

At the same time, it turned out that most media outlets are not used to paying researchers; unlike with photo agencies, stringers or paparazzi, they typically offer researchers exposure in exchange for viral stories.

Clarification: This article has been updated to reflect that Doppelgänger emerged in information operations in 2022, not 2017, and to clarify that Antibot4Navalny was not the first group of researchers to identify it.


Dina Temple-Raston

is the Host and Managing Editor of the Click Here podcast and a senior correspondent at Recorded Future News. She previously served on NPR’s Investigations team, focusing on breaking news, national security, technology, and social justice, and she created and hosted the award-winning Audible podcast “What Were You Thinking.”

Sean Powers

is a Senior Supervising Producer for the Click Here podcast. He came to Recorded Future News from the Scripps Washington Bureau, where he was the lead producer of “Verified,” an investigative podcast. Previously, he was in charge of podcasting at Georgia Public Broadcasting in Atlanta, where he helped launch and produce about a dozen shows.

Jade Abdul-Malik

is a producer for the Click Here podcast. She has worked on podcasts with Gimlet Media and Sony Music Entertainment and was a reporter for Georgia Public Broadcasting in Atlanta.
