Deepfakes, Disinformation, and Systems Disruption

by Daniel Davidson

The last two decades have seen digital information warfare become a central part of politics across the world. Traditionally, states such as Russia have received the brunt of the attention for their digital disinformation campaigns. Now, deepfakes and other algorithmic tools allow individual actors or small, horizontally organized groups to launch highly effective, large-scale disinformation attacks with increasing ease – shifting information warfare from a state-versus-state realm to an increasingly network- and individual-dominated one.

Deepfakes are AI-generated photos, video, text, or audio. They can take the form of edits to pre-existing media, like changing what a president said in a speech through an AI-generated voice imitation, or of completely new media, like faces (or cats) that don’t exist. While primarily used to swap celebrities’ faces into pornography, deepfakes are considered an imminent threat to political stability and truth. For example, a 2021 report by the FBI warned that foreign actors would likely use deepfakes in coordinated influence campaigns within a year or so.

Individual empowerment

While few people are experts in the deep learning software used to create deepfakes, those who are have released much of their work to the public. Websites and apps now let amateurs make deepfakes using only their phones. In one case, a parent used deepfakes to harass her children’s peers with fake pornographic images she made herself.

As deepfakes become increasingly accessible, they are also becoming more realistic. Researchers have even produced deepfakes by manipulating a single source image of their target. Artificially generated faces have been used for fake LinkedIn profiles that build connections with targets and attempt to gather sensitive information from them. Many of these systems rely on generative adversarial networks, in which a generator plays a cat-and-mouse game against a detector that tries to tell real images from fakes – and each round of the game pushes the generator to produce more realistic output over time.
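
To make the cat-and-mouse dynamic concrete, here is a minimal, illustrative sketch of the adversarial training loop in PyTorch – a toy example of my own construction, not any actual deepfake system, with tiny made-up dimensions standing in for real images:

```python
# Toy GAN sketch (PyTorch): a generator and a discriminator train against
# each other. Real deepfake systems are far larger, but the adversarial
# loop is the same in spirit.
import torch
import torch.nn as nn

IMG_DIM, NOISE_DIM = 64, 16  # tiny stand-ins for real image dimensions

generator = nn.Sequential(
    nn.Linear(NOISE_DIM, 128), nn.ReLU(),
    nn.Linear(128, IMG_DIM), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Linear(IMG_DIM, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1),  # logit: real vs. fake
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.randn(32, IMG_DIM)   # placeholder for real training images
    noise = torch.randn(32, NOISE_DIM)
    fake = generator(noise)

    # Discriminator: label real images 1 and generated images 0.
    d_loss = (loss_fn(discriminator(real), torch.ones(32, 1))
              + loss_fn(discriminator(fake.detach()), torch.zeros(32, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator: try to make the discriminator call its fakes "real".
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```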

The ability to generate fake faces is particularly important when it comes to digital disinformation. Likewise, publicly available text generation models like GPT-2 can produce fairly realistic text that is difficult to immediately recognize as machine-generated. A bot account with a realistic, unique profile picture and a steady stream of generated comments is harder for both humans and algorithms to flag as spam. Previously, running an army of realistic, active, and aged social media profiles required either extensive human labor to manage it or money to buy premade accounts. Now, individual actors without much spare time can far more easily produce networks of bot accounts to spread and boost disinformation.
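
As a rough illustration of how low the barrier is, the publicly released GPT-2 weights can be sampled in a few lines of Python using the Hugging Face transformers library (a sketch assuming the library is installed and the model download is available; the prompt is invented):

```python
# Sample short social-media-style text from the public GPT-2 model.
# Requires: pip install transformers torch
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
outputs = generator(
    "Honestly, after watching that speech I think",
    max_length=40,          # keep the generated comments short
    num_return_sequences=3,
    do_sample=True,         # sampling gives each "account" a distinct comment
)
for out in outputs:
    print(out["generated_text"])
```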

Targeting powerful individuals

This proliferation of easy-to-use deepfake technology gives individuals a wide range of ways to attack leaders, celebrities, and hierarchical institutions.

State officials and celebrities are especially easy to produce deepfakes of because there are ample high-quality video and audio samples to make accurate renderings from. This could look like a video in which a politician admits to a scandal or bad-mouths another country. Often, though, minor edits to pre-existing videos (“cheapfakes”) are enough to do the trick. Consider the viral videos (one reposted by President Trump and aired on Fox Business) showing Nancy Pelosi seemingly slurring her words during speeches as if she were drunk. All it took to produce them was slowing down the footage and pitching up the audio. With the added power of AI, such targeted disinformation attacks could be far more realistic.
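
As an illustration of how little tooling a cheapfake requires, the following sketch shells out to ffmpeg from Python to slow a clip to 75% speed (assuming ffmpeg is installed; the filenames are placeholders). The actual Pelosi edits also pitch-shifted the audio to mask the slowdown, which this sketch does not reproduce:

```python
# Slow a clip to 75% speed with ffmpeg -- roughly the level of effort
# behind a "cheapfake". Filenames are placeholders for illustration.
import subprocess

subprocess.run(
    [
        "ffmpeg", "-i", "speech.mp4",
        "-filter_complex",
        "[0:v]setpts=PTS/0.75[v];[0:a]atempo=0.75[a]",  # slow video and audio together
        "-map", "[v]", "-map", "[a]",
        "slowed_speech.mp4",
    ],
    check=True,
)
```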

Imagine a grainy, shaky video, recorded on a hidden camera à la Project Veritas, of President Biden at an event telling someone, “Turkey is going to have to choose between NATO or Russia, and if Erdogan makes the wrong choice we’ll make sure he is no longer president.” Such an absurdly inflammatory statement, even if eventually revealed to be a deepfake, might be enough to seriously strain an already precarious relationship between Turkey and the United States. Now imagine an American politician being approached in public and letting a “fan” take a video with them. While the fan might only get the politician to say something innocuous like “We need to make America great again,” the lip movements and audio could be edited to say any number of things.

What these two scenarios have in common is that no genuine footage of the event exists for comparison. While the unedited versions of Pelosi’s speeches captured by the media can quickly debunk the fakes, Biden’s supposed statements against Turkey might only be debunkable by deepfake detection software. Although current detection software is powerful, more diverse and realistic deepfake techniques could eventually come out on top. And even if a video or image is proven fake, convincing believers to trust the mainstream media’s claims about detection software over their own eyes and ears might be challenging.
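
Most detection tools amount to classifiers trained to spot generation artifacts. The toy frame-level classifier below is my own sketch of that idea, not any specific product, and its central weakness is visible in the code: it can only learn the artifacts present in the fakes it is trained on:

```python
# Toy deepfake detector: a small CNN that scores frames as real or fake.
# Real systems are larger and use face crops, temporal cues, etc.
import torch
import torch.nn as nn

detector = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 1),  # logit: how "fake" the frame looks
)

frames = torch.randn(8, 3, 64, 64)            # placeholder for real face crops
labels = torch.randint(0, 2, (8, 1)).float()  # 1 = fake, 0 = real

loss = nn.BCEWithLogitsLoss()(detector(frames), labels)
loss.backward()
# Training updates are omitted; the point is the pattern: the detector only
# learns the artifacts present in its training fakes, so novel generation
# methods can slip past it.
```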

In the same way that information leaks might compel a hierarchy to restrict internal information flows, countering deepfakes might push institutions to more tightly control the creation and release of media and the nature of public appearances.

A state official might find it risky to appear in situations where they cannot control the narrative or have independent media outlets covering the interaction. Unique backgrounds or a subtle scratch of the nose might become necessary in official appearances to make generating deepfakes harder. At the same time, such tightly controlled narratives and appearances could feed the notion that the official is not alive at all and that a coup is therefore warranted (something that has actually happened).

Lisa Kaplan, a disinformation expert with election management experience, suggests the following:

“Film the candidate at any public speaking engagements that would not otherwise have a record, in order to guard against a deepfake or altered video. In that way, the campaign has a record of the event and could turn over raw footage to the public to expose such practices, if needed. We learned this lesson after an altered video was released and moved through a series of authentic and inauthentic accounts, and voters believed the video.”

But this isn’t a perfect defense. Deepfakes also allow for outright mimicry, albeit with less precision: an actor could play out a sex scene or say something vile in their own studio and have their face and voice replaced with the target’s. Well-resourced groups could replicate the look of a public speaking engagement closely enough that the difference is not immediately obvious. The options available to attackers outnumber the options available for defense.

How hierarchies might respond to deepfakes… and hurt themselves in the process

Increased security might also emerge out of fear of phishing attacks. In one case, the voice of a business executive was impersonated over the phone using AI voice-mimicking software, tricking the CEO of a subsidiary into transferring hundreds of thousands of dollars to a bank account from which the money was then stolen. In another case, a person is suspected of having used a deepfake to impersonate Leonid Volkov, a top aide to Russian opposition leader Alexei Navalny. The impostor managed to hold several video meetings with European leaders about sensitive political issues – and while the charade was eventually discovered, such a compromise suggests far more effective and dangerous attacks could be possible.

Leaks of compromising information, contacts, plans, video, or audio could give hackers more power to weaponize deepfakes in disinformation attacks. An institution that reports any leak of information might encourage actors to churn out fake content and claim it came from that leak – say, a sample recording from a meeting in which Jeff Bezos admits to purposefully suppressing unions. If the deepfake were advanced enough, there might be no reasonable way to prove such audio was inauthentic.

Kaplan recounts, in a piece published by the Brookings Institution, her experiences combatting disinformation and deepfakes on the campaign trail. To prevent unnecessary leaks, she suggests:

“Replicate a classified environment, to the extent possible. Compartmentalize access to information by team, and create a need-to-know culture to limit risk, so that in the event one staffer’s email gets compromised, the intruder cannot access all data. Consider also rewarding staff for raising cybersecurity concerns in real time to allow for timely investigation.”

However, this approach might in fact undermine an institution’s ability to respond and adapt to information risks, and to coordinate internally.

In the context of the Assange and Snowden leaks, writer Kevin Carson argues that leaks tend to push hierarchies to exercise more control over their internal flows of information. He quotes the blogger Aaron Bady, who writes:

“The leak… is only the catalyst for the desired counter-overreaction; Wikileaks wants to provoke the conspiracy into turning off its own brain in response to the threat. As it tries to plug its own holes and find the leakers, [Assange] reasons, its component elements will de-synchronize from and turn against each other, de-link from the central processing network, and come undone.”

Carson concludes, quoting Bady again, “This means that ‘the more opaque [an authoritarian institution] becomes to itself (as a defense against the outside gaze), the less able it will be to ‘think’ as a system, to communicate with itself.’”

Bady believes this process makes the security state “dumber and slower and smaller.” This balkanization, meant to keep leaks and disinformation from spreading, could backfire precisely when different teams in an institution need to share information to determine what is and isn’t real. The ability to trust your boss’s voice over the phone in an emergency, or to coordinate with other teams on what is real, especially in the middle of a disinformation attack, might be lost. Reporting internal leaks could be similarly dangerous. If enough rumors are spread, one is eventually going to hit the mark – what happens then? How can an institution determine whether claims are substantiated or not?

The European officials targeted by the deepfake video calls only learned much later how many attempts this impersonator had made, across several domains, against various leaders around the world. Volkov subsequently claimed that the stunt hurt Navalny’s ability to establish relationships with foreign contacts.

Overwhelming the bureaucracy

Kaplan said that all rumors discovered during the campaign were to be forwarded to a digital director “for further investigation” and to determine the campaign’s response.

This chart provided by Kaplan shows how institutions might go about judging and mitigating the potential damage of a rumor. While this model might be sufficient for occasional misinformation, a coordinated attack using several different pieces of disinformation, including deepfakes, could easily overwhelm the system’s capacity to function. A single actor could generate several seemingly genuine accounts or pages producing fake sex tapes, drunken speeches, leaked unsavory audio, or simply made-up rumors. The source would be impossible to isolate. Even keeping track of every attack could require extensive resources, and predicting which pieces of disinformation will be most impactful would be difficult.
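
A back-of-the-envelope model illustrates how quickly such a triage pipeline saturates. The sketch below uses invented numbers (not Kaplan’s) and assumes a team can investigate a fixed number of rumors per day; once the incoming rate exceeds that capacity, the backlog grows without bound:

```python
# Toy queueing model: rumors arrive faster than a triage team can clear them.
# All numbers are illustrative assumptions, not measurements.
REVIEW_CAPACITY_PER_DAY = 5   # rumors the team can investigate per day
ORGANIC_RUMORS_PER_DAY = 3    # background misinformation rate
ATTACK_RUMORS_PER_DAY = 20    # a coordinated flood of deepfakes and fakes

backlog = 0
for day in range(1, 15):
    incoming = ORGANIC_RUMORS_PER_DAY
    if day >= 5:              # the coordinated attack starts on day 5
        incoming += ATTACK_RUMORS_PER_DAY
    backlog = max(0, backlog + incoming - REVIEW_CAPACITY_PER_DAY)
    print(f"day {day:2d}: backlog of uninvestigated rumors = {backlog}")
```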

How campaigns can protect themselves from deepfakes, disinformation, and social media manipulation

With a “need-to-know” culture, an institution’s own arms may not be able to determine whether a leak or a piece of disinformation about the institution is true. Limiting the leakage of information while using institutional knowledge to judge whether a suspected deepfake is genuine is hard enough; relying on detection software and expert analysis could delay an institution’s response even further while the deepfake goes viral online.

As for taking on the surveillance state, Carson notes that as surveillance methods are attacked or subverted, the state expands its tactics and the range of data it collects. Unfortunately for hierarchical institutions, that means more “haystack relative to the needle.” For example, the proliferation of deepfakes and fake people might be used to subvert online and offline facial recognition technologies. The opaqueness and adaptability of networks of people make it easy for them to stay ahead of the slow-moving bureaucracies that undergird the state.

When media outlets try and fail to verify a leak or video while it spreads and goes viral online (“the mainstream is ignoring it!”), journalists are left reporting the incident with caution. But saying a deepfake cannot be confirmed, especially when it is something visual, isn’t going to stop most people from believing it. Again, deepfake detection technology could fall far behind as deepfakes become more realistic and diverse. A long delay in the response from an institution or the media allows disinformation to spread and cement itself as real before being disproven – and gauging the relative dangers of a disinformation attack means institutions must spend time deciding whether a particular rumor is worth responding to at all. The inability of some arms of a hierarchy to recognize that a leak or supposed deepfake is actually genuine might mean an ineffective response that comes back to bite the institution later, when a small, seemingly unthreatening rumor turns out to be very real and significant.

Most importantly, no single piece of disinformation has to be particularly influential. Kaplan notes that most insubstantial rumors are often better off ignored and forgotten. However, deepfakes uniquely give attackers the ability to “flood the zone with shit,” as Steve Bannon put it. No individual deepfake might be worth a campaign’s attention to debunk, yet trying to address every deepfake would bog an institution down and leave people with the feeling that… well… probably at least one of those rumors is true. Or, at the very least, people’s feelings on a matter might have shifted.

The mainstream media says my eyes are lying to me?

Unlike a simple rumor, a deepfake can be an incredibly realistic video. Even if experts or software can “prove” it is fake, believing a deepfake is real does not require the sort of mental gymnastics that other false conspiracies do. And when institutions try to build defenses against deepfakes, the actors inside those institutions might lose the ability to determine the truth themselves.

Much of the current strategy for countering deepfakes relies on building trust in existing institutions and media – but the very process of those institutions combating deepfakes might undermine that trust. Facebook is already seen as a partisan actor for flagging or removing misinformation and hate groups from its site – now imagine if it were “suppressing the truth” by removing seemingly real videos! Similarly, there are questions about First Amendment rights in any government regulation of deepfakes, and about whether such regulation is even feasible in the first place.

There is also the question of what a social media platform would need to prove or disprove about a video to label it as misinformation, or to consider it undeterminable and thus not worth removing. A 2020 report by the Australian Strategic Policy Institute suggests measures such as media blackouts prior to an important event like an election to hamper the spread of disinformation and deepfakes. But controlling, suppressing, and censoring information spreads distrust, and people might shift to alternative platforms that are even more susceptible to disinformation. On the other hand, trying to surface and debunk every deepfake (or even internally investigate them all) bogs an institution down, lets the deepfakes define it, and ends up spreading distrust anyway. The report also suggests more verification processes, more social media blue checks, and more stringent certification of content and outlets – but these leave news outlets less able to respond quickly, perpetually running behind the real or fake information spreading online so as not to risk sullying their credibility. If institutions have to invest significant time and resources into defending against deepfakes, that is time and resources they cannot spend elsewhere.

With weakened trust in institutions, it is also easy to use the existence of deepfakes as a scapegoat to explain away anything that challenges one’s narrative. Anything real can be dismissed as a coordinated conspiracy to generate fake news*, as when people suspected an interview with Biden was a green-screen illusion.

The mere existence of deepfakes lets anyone dismiss filmed media with ease

Claiming that an institution produced deepfakes itself could also undercut public trust**. One study on the effects of deepfakes found that, even when a deepfake fails to convince a person of an untruth, it sows the seeds of uncertainty and distrust in news on social media. The same effect likely extends to trust in government officials, statements, and actions, especially if an official unintentionally helps spread a deepfake or lends it a shred of legitimacy before it is debunked (which Trump has actually done).

The danger of deepfakes is not necessarily that they are impossible to debunk, or that they will mostly trick big institutions and those in power. The danger is that, even if one knows something to be fake or edited, seeing and hearing the same messages again and again makes them feel true. Individuals with little to no experience, power, or resources can easily create masses of disinformation that clog the veins of large hierarchical institutions, manufacture distrust in those institutions, and shift the narrative away from those in power. While deepfakes will likely first be used en masse by state actors, the increasing accessibility and power of the technology means it will become a weapon that empowers networks, whether for good or ill. Already, mere cheapfakes and worries over deepfakes have been enough to support ethnic cleansing and motivate a military coup. To truly overcome the problems posed by deepfakes, we must build alternative epistemic technologies that acknowledge the uncertainty inherent in the world, recognize the plurality of perspectives one can take on any given issue, and encourage accuracy-oriented reasoning over more immediate motivated reasoning.


*While deepfakes mean anything can be true… they also suggest anything can be false. Some other ways the existence of deepfakes could be used to deny reality: testimonies from survivors of a school shooting or genocide are deepfakes; a speech was faked because the speaker was incapacitated; George Floyd’s murder was a deepfake; a certain person is no longer alive but has been replaced by a digital copy; officials actually refused to take the COVID-19 vaccine and deepfaked footage to make it appear that they did.


**Similar to the backlash that would likely follow if Bezos were caught admitting to suppressing unions, Amazon being caught using deepfakes for its anti-union cause could undermine trust as well. Amazon has an ambassador program in which employees use Twitter to show off what it is like to work for the company. When more of these accounts started popping up, now with deepfake-generated profile pictures and odd tweets, many suspected a desperate attempt by Amazon to improve its image using bots. In fact, the new accounts were simply trolls making a joke, not genuine Amazon accounts. Still, the inability to determine the origin of certain deepfakes could allow people to produce hard-to-believe disinformation and then blame an institution for making it. Imagine a host of bot accounts on Twitter tweeting “I love the COVID-19 vaccine, it didn’t make me sick and now I am healthy and I love vaccines – everyone should take their vaccines.” This might feed anti-vaccine conspiracies by suggesting that secret, malicious interests are trying to manipulate people into taking vaccines. While deepfakes let individuals proliferate bots and synthetic media, much of the public might still assume such campaigns must have been coordinated by a large, well-resourced institution.