Deepfakes as a Security Issue: Why Gender Matters

By Agnes E. Venema

What Are Deepfakes?

In some ways, deepfakes are to video what photoshopping is to still images. Just as with photoshopped images, some are made better than others. Deepfaking involves manipulating video in such a way that, when done well, the result is virtually impossible to distinguish from an original recording. Deepfakes can be created in roughly two ways: image creation and morphing. Image creation is a process by which a neural network studies sample faces and generates a new image of its own based on those samples. An example of image creation is the website ThisPersonDoesNotExist.
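For readers curious about the mechanics, the sketch below shows, in simplified PyTorch, the “generator” half of a generative adversarial network (GAN), the family of models behind sites like ThisPersonDoesNotExist. The architecture and layer sizes here are illustrative only; an untrained generator like this produces noise, and a real system requires extensive training on face images before its output resembles a person.

```python
import torch
import torch.nn as nn

# Illustrative sketch only: a toy DCGAN-style generator. ThisPersonDoesNotExist
# uses a far larger model (StyleGAN), but the principle is the same: random
# noise in, a synthetic image out.
class Generator(nn.Module):
    def __init__(self, z_dim: int = 100):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(z_dim, 256, 4, 1, 0), nn.BatchNorm2d(256), nn.ReLU(True),  # 4x4
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(True),    # 8x8
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(True),      # 16x16
            nn.ConvTranspose2d(64, 3, 4, 2, 1), nn.Tanh(),                                # 32x32 RGB
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        # Reshape the flat noise vector into a 1x1 "image" with z_dim channels.
        return self.net(z.view(z.size(0), -1, 1, 1))

# Untrained, this produces colored noise; after training on thousands of face
# photos, the same architecture generates faces of people who do not exist.
generator = Generator()
with torch.no_grad():
    fake = generator(torch.randn(1, 100))  # shape: (1, 3, 32, 32)
```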

Deepfakes created through morphing merge one face with another, or superimpose the expressions of one face onto another, creating video. Combined with voice cloning or voice actors, morphing can produce incredibly realistic videos that are entirely fictitious, such as the deepfake of President Nixon delivering the speech that was prepared in case of a moon landing disaster. A distinction must be made between deepfakes and “cheapfakes.” The latter are manipulations of existing footage, such as slowing down or speeding up certain sections to exaggerate part of the video, or selectively editing content. Both U.S. Speaker of the House Nancy Pelosi and U.S. journalist Jim Acosta became targets of cheapfakes.
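To make the morphing idea concrete, below is a deliberately crude Python/OpenCV sketch that simply blends two aligned face photographs. Real deepfake pipelines instead use neural networks to map one face onto another frame by frame, and the file names here are placeholders, but the snippet illustrates the basic notion of merging faces.

```python
import cv2
import numpy as np

# Crude illustration only: real deepfakes use neural networks, not a plain
# blend. "face_a.jpg" and "face_b.jpg" are placeholder files of equal size,
# with the faces already aligned to one another.
face_a = cv2.imread("face_a.jpg").astype(np.float32)
face_b = cv2.imread("face_b.jpg").astype(np.float32)

alpha = 0.5  # 0.0 keeps face A untouched; 1.0 replaces it entirely with face B
morphed = cv2.addWeighted(face_a, 1.0 - alpha, face_b, alpha, 0.0)

cv2.imwrite("morphed.jpg", morphed.astype(np.uint8))
```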

Deepfakes are slowly garnering interest from a broader range of people because they are becoming more accessible and prevalent. This shift has arguably come about because of two interrelated phenomena: better quality and proliferation. First, the deepfakes surfacing today are better than those of a year ago. The pace of this evolution is staggering, and we are already at a point where well-made deepfakes are nearly impossible to detect. This is potentially very problematic for the reliability of audio-visual evidence in court, as further elaborated upon in this article.

Second, the creation of deepfakes used to require quite specific technical knowledge and powerful hardware. However, we are fast moving into a territory where the code for creating deepfakes is readily available online, and fewer pictures are necessary to create a realistic deepfake. While the most realistic deepfakes still require quite a bit of data to train the system, this is changing fast. For instance, we are seeing apps launched that use a single picture to superimpose that face onto famous movie scenes using the same techniques. Dr. Donovan rightly points out that the commercialization of deepfakes is the next step. It is only a matter of time before we can order personalized deepfake e-cards. But what can be used for birthday-card fun can also be used for nefarious purposes.

Why Are Deepfakes a Gendered Security Issue?

Given that deepfakes can easily be mistaken for real video footage, the potential damage they cause can be immense. This is where the connection to image-based sexual violence, colloquially known as revenge porn, requires further exploration. A 2019 UK report on Adult Online Hate, Harassment and Abuse lists six types of image-based sexual violence, among them the type of abuse it terms “sexual photoshopping.” Although the report does not mention the word deepfake, it highlights that the harm suffered is the same as with more “traditional” forms of image-based sexual violence. With this in mind, it is worth noting that most research has shown that the vast majority of victims of revenge porn are women (the report cites different studies putting the percentage between 60 and 95 percent). Furthermore, research indicates that men are more likely than women to perpetrate image-based abuse. The gendered aspect of deepfakes, especially those sexual in nature, is therefore not to be underestimated.

How deepfakes, especially sexually explicit ones, affect women differs from country to country and depends on the prevailing views of women in each society. Considering that deepfakes are hard to distinguish from real videos, it is worthwhile to look at the consequences women have suffered from revenge porn. In the BBC’s The She Word, two Zimbabwean women told their stories of becoming victims of revenge porn; one was disowned and consequently unable to finish her education, while the other lost her job. In India, journalist Rana Ayyub experienced a deepfake pornographic slander campaign after she advocated for justice for an eight-year-old girl who had repeatedly been raped and then killed. These are not uncommon experiences; women have reported finding it difficult to maintain or find employment after becoming targets of image-based sexual violence. To add insult to injury, some of the online service providers on whose platforms this abuse takes place have been slow to acknowledge the problem, and getting them to react remains challenging.

In addition to highlighting the professional repercussions, the 2019 report also stressed the intangible effects of image-based sexual violence, such as loss of autonomy, violation of privacy, trust issues, and a silencing effect whereby women choose to withdraw from (online) life as a coping strategy. Victims of revenge porn have indicated that they suffer from anxiety, depression, PTSD, or substance abuse. Importantly, one study showed that male victims of image-based sexual violence feel less shame and blame themselves less for what happened than female victims in the same situation. Without inferring causality, it is notable that male victims reported a higher percentage of positive police responses than women did. What must be considered, however, is whether the more negative police response to women reporting the abuse mirrors a wider-held societal belief that women should have done more to avoid abuse, better known as victim-blaming.

Advocates who try to shift the narrative from one blaming women for having created or shared explicit imagery to one that holds those publishing the images accountable can also expect online abuse and vitriol, as this Australian example illustrates. Especially in combination with doxing—the act of making a person’s contact details publicly available online—revenge porn, whether deepfake or real, can lead to threats to women’s lives. Rana Ayyub had her phone number published after the deepfake video of her surfaced, leading to a barrage of men contacting her to ask how much she would charge for sex. In some cases, the combination of doxing and revenge porn has led to women receiving rape threats, being stalked, or being subjected to violence. Research has shown that, overall, this type of online abuse and harassment disproportionately affects women.

The link with deepfakes is that people can become targets of “revenge porn” without the publisher ever possessing sexually explicit images or footage of them. Such material can be created from any number of casual photographs scraped from the internet, to the same effect. This means that practically everyone who has taken a selfie or posted a picture of themselves online runs the hypothetical risk of having a deepfake created in their image.

Security Implications of Deepfakes

Until recently, deepfakes were the problem of high-profile women who had many images of themselves posted online. Now, however, deepfakes are starting to gain wider attention, likely for the reasons outlined above: better quality deepfakes, less need for highly technical skills or hardware, and thus a rapid proliferation of deepfakes. Because of this proliferation, more people are starting to understand that deepfakes are not only tools for creating non-consensual porn, but can also become another instrument in disinformation and fake news campaigns that have the power to sway elections and ignite frozen or low-intensity conflicts.

In one of the first articles on the topic, Chesney and Citron hypothesize that a deepfake video of U.S. soldiers in Afghanistan could endanger the troops and the broader U.S. Afghanistan policy, or that a deepfake about an imaginary assassination plot could be detrimental to the Iran-Iraq relationship. While those were hypothetical scenarios, the mere existence of a video that was believed to be a deepfake has already led to an attempted coup in Gabon. Malaysia also saw its political landscape affected when a deepfake video was released of a male aide to a minister confessing to sodomy—which is illegal in Malaysia—and implicating the Minister of Economic Affairs. The Prime Minister of Malaysia dismissed the video as a deepfake, but the country was gripped by it, showing just how influential a deepfake can be in swaying popular opinion.

It should therefore come as no surprise that opposition movements, especially those in less democratic countries, are at risk of having deepfakes used against them as a tool to silence them by attacking their credibility, especially in more conservative societies. Consider, for example, the current crisis in Belarus, where the political opposition is headed by three women. One can easily imagine how a strategically released deepfake could damage the credibility of these political leaders. Deepfakes do not, however, have to be lewd or sexual in nature, although that is often perceived as the most scandalous use and affects women disproportionately, as explained above. Deepfakes alleging a wide range of issues, including corruption and fraud, can be equally damaging. Therefore, the international security implications of strategically released deepfakes, combined with organic and paid-for viral content on social media, are not to be underestimated.

The security implications of deepfakes are therefore threefold. At the international level, strategic deepfakes have the potential to destabilize a precarious peace. At the national level, deepfakes may be used to unduly influence elections and the political process or to discredit the opposition, which is a national security concern, especially if foreign powers are involved in the creation and distribution of such deepfakes. And at the personal level, the use of deepfakes to create sexually explicit video has the potential to disproportionately affect women, particularly those in the public sphere. Women suffer disproportionately from the exposure of sexually explicit material compared to men and are more often subject to threats to their physical safety, which in turn has ongoing effects on their (mental) health.

Policy Considerations

Certifications and Disclaimers

Policy makers need to be aware that deepfakes are used for a range of legitimate purposes, including artistic and satirical creations. Banning deepfakes outright is therefore not a path consistent with fundamental freedoms, such as the freedom of speech. One possible legislative approach would be to require a content warning or disclaimer. We already see such warnings in television and film in relation to product placement and the use of animals, where movies receive “No Animals Were Harmed” end-credit certifications. This could be a non-invasive solution that allows for the use of deepfakes in creative fields but requires producers to take responsibility and inform their audiences.
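As a rough illustration of what such a disclaimer obligation could look like technically, the hypothetical Python snippet below burns a visible label into a single video frame using the Pillow imaging library. An actual standard would more likely pair a visible notice with tamper-resistant, machine-readable metadata; all names here are illustrative.

```python
from PIL import Image, ImageDraw

# Hypothetical sketch: stamp a visible disclaimer onto one video frame.
# A real scheme would label every frame and add machine-readable metadata.
def add_disclaimer(frame: Image.Image, text: str = "SYNTHETIC MEDIA") -> Image.Image:
    labeled = frame.copy()
    draw = ImageDraw.Draw(labeled)
    # Draw a dark banner along the bottom edge, then the disclaimer text on top.
    draw.rectangle([(0, labeled.height - 30), (labeled.width, labeled.height)], fill="black")
    draw.text((10, labeled.height - 25), text, fill="white")
    return labeled

# Stand-in for a decoded video frame.
frame = Image.new("RGB", (640, 360), "gray")
add_disclaimer(frame).save("frame_labeled.png")
```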

Consent and “Know Your Customer”

A further area of concern is the commercialization of deepfakes that Dr. Donovan alluded to. Policy or legislation to manage this emerging industry needs to consider the privacy of the person whose face may be depicted in a deepfake, which is closely tied to the notion of consent to the use of one’s image. One policy solution would be to require service providers to obtain such consent before accepting an order to create a deepfake. Given the potential harm of deepfakes, legislators may also want to consider “know your customer” rules like those already in place in the banking and financial services industries. Of course, the fact that internet services can be provided from anywhere in the world means that offshore companies can easily circumvent such legislation, but customers within the jurisdiction could face legal repercussions if they falsely claim to provide images with the consent of the person depicted.
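The hypothetical sketch below illustrates, in Python, how such a combined consent and “know your customer” check could work at the point of ordering; the class and field names are invented for illustration and do not reflect any existing system or legal standard.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical sketch only: names and fields are illustrative, not an
# existing API or legal standard.
@dataclass
class DeepfakeOrder:
    customer_id: Optional[str]     # identity verified under "know your customer" rules
    subject_name: str              # the person whose likeness would be used
    consent_record: Optional[str]  # e.g., a signed release from that person

def may_accept(order: DeepfakeOrder) -> bool:
    """A provider refuses any order without a verified customer and subject consent."""
    if not order.customer_id:
        return False   # KYC failure: the customer's identity is unknown
    if not order.consent_record:
        return False   # no evidence the depicted person agreed to the deepfake
    return True

# Example: an anonymous order with no consent on file is rejected.
print(may_accept(DeepfakeOrder(None, "Jane Doe", None)))  # False
```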

Digital Literacy

Given the limitations of jurisdiction and the borderless nature of the cyber domain, it is unlikely that legislative or policy proposals such as those suggested above will eradicate the malicious use of deepfakes. This is why policy makers ought to consider how best to teach digital literacy. Deepfakes rely on the premise that “seeing is believing.” To combat this deeply held bias, media literacy projects are sprouting like mushrooms, and various companies, including Microsoft, are launching tools that should help the general public assess the authenticity of video footage. Global standards for digital literacy are being developed and included in school curricula worldwide. Particular attention must also be paid to cohorts that may be hard to reach, for instance because of limited broader technological literacy or the implications of the digital divide.

For instance, Danielle Citron acknowledges that there is a generational divide and cites research claiming that the over-55 generation is more likely to spread falsehoods and fake news. That research did not specifically address deepfakes, but deepfakes are likely to exacerbate the phenomenon. Any digital literacy policy therefore ought to consider all age groups and all levels of society in order to bridge digital and generational divides; targeted strategies for older populations, for example, may need to be considered.

Agnes E. Venema is a PhD researcher and Marie Skłodowska-Curie scholarship recipient pursuing a joint doctoral degree in Intelligence and National Security at the “Mihai Viteazul” National Intelligence Academy in Romania and the University of Malta, under the European Commission-funded project Evolving Security SciencE through Networked Technologies, Information policy And Law (ESSENTIAL). Agnes’ research focuses on the intersection of emerging technologies, security, and law. You can follow her on Twitter @gnesvenema.