Until recently, generative technologies were feared for their capacity to create convincing falsehoods — generating faces that don’t exist, speeches never delivered, or entirely fictional events. But a more subtle and equally consequential use is emerging: Artificial Intelligence (AI) is now being invoked not only to create deception, but to discredit truth.
As AI becomes more widespread and more capable, public figures and institutions have begun to exploit the mere possibility of AI manipulation to dismiss authentic content. A compromising video can now be waved off with a simple claim: “It’s a deepfake.” In this new dynamic, it is no longer the fake imitating the real, but the real being called fake. And while much attention has been paid to the proactive use of AI to create false realities, far less scrutiny has been given to its reactive use: as a tool to deflect inconvenient truths and undermine trust in genuine evidence.
Key Takes
- AI is increasingly used not just to create fake content, but to discredit real footage — turning doubt into a political defense.
- During President Macron’s May 2025 visit to Hanoi, a brief personal moment captured on video was initially dismissed by officials as potentially AI-generated, before being confirmed as authentic — highlighting the political reflex to invoke digital manipulation.
- The tactic of casting doubt on authentic footage is now observable across democratic and authoritarian contexts, from France and the United States to Russia, China, Türkiye, Egypt, Iran, and the Middle East.
- In 2019, a slowed-down video of Nancy Pelosi prompted claims of deepfakery, despite involving no AI — an early example of the “It’s a deepfake” defense in democratic politics.
- Russia has repeatedly denied documented war crimes in Ukraine by alleging Western fabrications, including in the case of the Bucha massacre.
- Both Israeli and Palestinian sources have used similar arguments to reject verified footage showing violations of international humanitarian law.
- Authoritarian regimes have long contested citizen-filmed protest videos as “staged” or “manipulated,” a strategy now evolving with the rise of AI.
- This growing culture of strategic doubt undermines the credibility of all visual evidence, making truth negotiable and destabilizing public trust.
- The shift from “Is it true?” to “Can we prove it’s not fake?” has far-reaching consequences in justice, journalism, diplomacy, and democratic governance.
- Even in liberal democracies, invoking AI as a rhetorical shield reflects a broader erosion of accountability — where facts are no longer debated, but simply denied.
A Video, a Slap, and the Politics of Doubt
On May 25, 2025, French President Emmanuel Macron arrived in Hanoi, Vietnam, for an official visit focused on economic cooperation and regional diplomacy. As he disembarked from the presidential aircraft alongside his wife, Brigitte Macron, cameras captured a brief interaction between the couple: from inside the doorway of the plane, Brigitte Macron placed both hands on her husband’s face in a gesture that drew varied interpretations. The president then turned to wave and began descending the aircraft stairs. His wife, walking slightly behind, declined his extended arm and instead held the railing. The footage quickly circulated on social media, with some users suggesting the gesture was a slap and others framing it as a moment of marital tension.
Sources close to the Élysée initially questioned the authenticity of the video, suggesting the possibility of digital manipulation, including AI-generated alterations such as deepfakes or video-generation software; some reports even mentioned potential foreign interference.
However, these claims were withdrawn when the video was confirmed to be authentic. Macron addressed the incident directly while speaking to journalists during his Vietnam visit, dismissing the interpretation of the exchange as exaggerated. He described it as a playful moment between spouses, referencing it alongside other recent viral clips involving him that had sparked misinformation. “All three videos are real,” he said, “but none of the interpretations are.”
A Post-Truth World
Though minor in substance, the episode illustrates an emerging pattern: the invocation of AI as a tool to discredit authentic content. While public discourse has largely focused on AI’s capacity to fabricate false content, less attention has been paid to its use as a rhetorical shield, a way to deny the veracity of real events.
This strategy of doubt has surfaced in several recent cases around the world. In 2019, a video of former Speaker of the United States House of Representatives Nancy Pelosi appeared to show her speaking slowly and incoherently. Some said she was drunk; others claimed she was the target of a deepfake. In reality, the video had simply been slowed down, with no AI involved.
Elsewhere, the pattern is even more explicit — and carries darker implications. Since 2022, numerous videos showing atrocities allegedly committed by Russian soldiers in Ukraine have circulated, including executions of prisoners and strikes on civilians. On numerous occasions, the Kremlin has denied the authenticity of these videos, claiming they were “fabrications” or “Western provocations.”
For instance, the Russian Ministry of Defense declared, regarding the Bucha Massacre:
“These are fake videos, edited by Ukrainian or Western services to smear the Russian army.”
In the Middle East, both Israeli and Palestinian sources have used the same technique to cast doubt on verified footage showing violations of international humanitarian law, including bombings and civilian casualties.
It is worth noting that this represents an evolution of an already established strategy: in the digital age, when cameras and smartphones make filming easy, countries such as Türkiye, Egypt, and Iran have long dismissed citizen-filmed protest videos as “staged,” “manipulated,” or “taken out of context,” even without invoking AI.
Across these varied contexts, invoking the specter of AI serves as a means not only to contest the authenticity of individual clips, but to undermine the credibility of visual evidence itself.
Analysis and Future Outlook
What’s striking in the examples above is that doubt about the footage itself becomes an official strategy. Facts are no longer countered with investigation but with insinuation that they may never have occurred at all.
The danger here isn’t just technological — it’s epistemological: if anything can be fake, then nothing is indisputable. Reality becomes one version among others — a narrative to be managed according to one’s interests. It’s no longer just lies being spread, but truth becoming reversible. We are entering an era of the “plausibly denied,” where even filmed evidence can be perpetually questioned.
This shift in debate — from “Is it true?” to “Can we prove it’s not fake?” — carries major consequences, in courts, in the media, in international relations, and in everyday life. This systematic skepticism can erode collective trust and make any attempt at truth seem suspicious.
This isn’t to say that every claim of deepfake manipulation is made in bad faith. The threat of falsification is real, and caution is warranted, especially given AI’s progress in video (and now audio) generation. But what’s new is that suspicion is becoming a defensive weapon: a way to buy time, or to muddy the waters.
Politically, the implications of this culture of strategic doubt are also concerning: not only does it provide authoritarian states and leaders with a new way to evade accountability, but it is also employed in democratic contexts, as illustrated by President Macron’s recent visit to Hanoi.
At a time when the liberal democratic model is being challenged, with pressure further intensified by fake news and foreign interference, it is important not to overlook the role that national political actors themselves play in this dynamic. While officials frequently denounce AI for blurring the boundaries between truth and falsehood, representatives of liberal democracies, who are expected to uphold transparency and objectivity, often engage in the same tactics, thereby contributing to the erosion of trust in what is real.
Conclusion: The Issue May No Longer Be the Fake, But the Disappearance of the Real
The idea that “anything can be fake” has become a new form of narrative control. The goal is no longer just to block the spread of images — but to undermine their credibility before they even circulate. Whether real or imagined, AI becomes the perfect scapegoat for erasing the real without having to censor it outright.
The Hanoi incident, however trivial it may seem, crystallizes this deeper phenomenon: truth is becoming negotiable. AI doesn’t replace reality; it makes it contestable. In a world saturated with images and competing narratives, the central question is no longer “What happened?” but “Who do you believe?” Democratic figures are doing little to reverse this trend: if trust disappears, then nothing needs to be true or false anymore, only plausible or deniable.
By casting doubt on real images, public figures undermine the legitimacy of authentic evidence and, by extension, of reality itself.