Russian Disinformation: AI Deepfakes Target Olympics & Ukraine Support

by Liam O'Connor

Milan, Italy – As the 2026 Winter Olympics get underway, a sophisticated Russian disinformation campaign is actively targeting the Games and undermining support for Ukraine. Dubbed “Matryoshka,” the operation utilizes artificial intelligence to create and spread false narratives, including deepfake videos featuring fabricated statements from prominent figures. The effort builds on similar tactics employed during the 2024 Paris Summer Olympics, known as “Operation Overload,” demonstrating a continued strategic effort by Russia to exploit international events.

The core of the Matryoshka campaign lies in its innovative use of AI voice cloning. Analysts are particularly concerned by the network’s ability to convincingly impersonate trusted voices, adding a layer of authenticity to the disinformation and increasing its potential to deceive audiences. The effort represents a significant escalation in the use of AI in information warfare, raising concerns about the integrity of international events and the public’s ability to discern truth from falsehood.

AI-Generated Deception: How Matryoshka Works

According to Pablo Maristany de las Casas from the Institute for Strategic Dialogue (ISD) think tank, “What truly sets Matryoshka apart is the use of AI voiceovers to impersonate the voices of trusted figures.” The technique involves taking genuine video footage of individuals and seamlessly replacing portions with stock footage overlaid with a deepfake narration that mimics the original speaker’s voice. This allows the disinformation network to insert fabricated statements that appear authentic, potentially swaying public opinion and damaging reputations.

Darren Linvill, a media forensics expert at Clemson University, explains the process further: “They take a real video of a real person but part-way through they switch to stock footage overlaid with a deepfake narration that sounds just like the real person so that they can insert absurd lies that appear more authentic.” This method makes it increasingly difficult for viewers to identify the manipulation, as the visual and auditory cues align, creating a convincing illusion.

Targeting the Olympics and Ukrainian Athletes

The disinformation campaign has specifically targeted Ukrainian athletes participating in the Winter Olympics. Fabricated claims include allegations of inappropriate behavior, doping violations, and attempts to portray them as aggressive or politically motivated. One example highlighted by reports involves Ukrainian skeleton racer Vladyslav Heraskevych, who faced a ban after displaying on his helmet images of athletes killed in the war. The Matryoshka network falsely claimed his brother was involved in recruiting soldiers and fabricated a story about a Hungarian athlete expressing anti-Ukrainian sentiment.

A particularly concerning example involves the manipulation of a press conference featuring International Olympic Committee President Kirsty Coventry. BBC Verify reported that a video initially showed Coventry speaking at a Euronews press conference, but quickly transitioned to an AI-generated version of her voice claiming she was “shocked” by the Ukrainian team’s presence in Milan, accusing them of using the Games for “crazy political PR” and describing them as “irritating.” Footage from the original press conference confirmed that Coventry never made these statements.

The network has also targeted media outlets, creating deepfakes of American commentators and, as reported by the Canadian Broadcasting Corporation (CBC), a Canadian journalist. These efforts demonstrate a broad attempt to discredit reporting on the conflict in Ukraine and sow distrust in established media sources.

Beyond the Olympics: A Pattern of Disinformation

This isn’t the first time this network has been identified. BBC Verify previously investigated how the same operation cloned the voice of a British 999 call handler using AI last year. This suggests a sustained and evolving effort to spread disinformation using advanced technology. Maristany de las Casas of the ISD emphasized that the operators of Matryoshka understand that their content is more credible when delivered by a seemingly trusted person.

While the individual reach of these fake videos has so far been limited, their combined effect reveals a deliberate strategy to undermine support for Ukraine and disrupt international events. The use of AI-generated content makes these efforts harder to detect and counter, requiring increased vigilance from media organizations, social media platforms, and the public.

The ongoing investigation into Matryoshka highlights the growing threat of AI-powered disinformation and the need for robust countermeasures to protect the integrity of information ecosystems. As the Winter Olympics continue, authorities and fact-checkers will remain on high alert for further attempts to manipulate public opinion and undermine the spirit of the Games.

The situation remains fluid, and further updates are expected as the investigation unfolds. The Institute for Strategic Dialogue and BBC Verify continue to monitor the network’s activity and provide analysis of its tactics. Readers can find more information and updates on the BBC Verify website and the ISD’s publications.

