Survivor Safety: Deepfakes and the Negative Impacts of AI Technology 

May 8, 2024

By Simone Obadia, SALI Legal Intern - Spring 2024

Preventionists, advocates, and service providers who work to address sexual violence must contend with new challenges presented by the rise of widely accessible artificial intelligence (AI) technology. Perpetrators of sexual abuse and exploitation have begun using AI technology to create “deepfakes”: hyper-realistic synthetic images and videos, created using AI software, that convincingly replace the individual in an original video or image with the likeness of another person (Harris, 2019). This allows creators to make it appear that the person whose face is portrayed in the image or video is engaging in an act that they never actually engaged in.

Although deepfake technology can be used to produce many different types of content, it is often employed to fabricate sexually explicit images and videos without the consent of their subjects. Creators of pornography were among the first to utilize deepfake technology, superimposing the faces of celebrities onto performers in pornographic videos without the celebrities’ consent beginning in 2017 (Chesney & Citron, 2019; Smith & Mansted, 2020). Today, approximately ninety-six percent of deepfake videos are pornographic, and many depict victims being raped or otherwise sexually abused (Thompson, 2024; Smith & Mansted, 2020). The majority of these victims are female-identifying individuals.

Although sexually explicit deepfakes do not cause physical harm, these artificial images and videos can have significant psychological impacts on victims and can be profoundly disempowering to individuals forced to see their likenesses offered up for the sexual gratification of others without their consent. Sexually explicit deepfakes are often indistinguishable from real images or videos, enabling their creators to exploit, humiliate, or blackmail the victims whose faces are superimposed onto the bodies of people engaged in sexual conduct. Victims whose likenesses are exploited in deepfakes may also face severe reputational harms, such as losing employment or having anyone who searches their name online find links to explicit content.

Abusers may find that deepfakes are an easy way to harm their victims. While creating a highly realistic image or video of a person doing something they were never captured doing on camera may seem difficult, very little technical knowledge is actually required. Anyone with a computer, the ability to download open-source software, and a collection of pictures, or “faceset,” of the person they wish to depict can create a deepfake (Harris, 2019). The casual use of “undressing apps” by students who are minors illustrates just how user-friendly, and dangerous, this technology is. These applications allow users to input pictures of individuals and receive realistic AI-generated nude images of the subjects. Although students in middle and high schools across the United States are using these applications to fabricate nude images of minor girls, most schools have yet to put regulations in place to protect students from this form of abuse (Haskins, 2024).

Federal and state legislatures have also been slow to regulate this modern form of sexual abuse. In Maryland’s 2024 legislative session, which concluded last month, MCASA supported several bills to help respond to the use of deepfakes to cause harm. None of these bills was enacted this session after concerns arose about the proposed approach to minimizing the harm caused by deepfakes. MCASA will continue to advocate for stronger protections against AI-generated deepfakes and nonconsensual sexually explicit images. It is vital that lawmakers understand that deepfakes harm survivors and must be taken seriously.

While it will likely take time for legislation to adequately address sexual abuse committed using deepfakes, there are steps that technology companies and social media platforms can take now to prevent this abuse. These companies can invest in detection technology that flags AI-generated content and integrate that technology into their platforms (U.S. Government Accountability Office, 2024). Currently, users can often identify deepfakes depicting humans because not all AI technology can realistically render complex human features such as eyes, ears, and hands. As deepfake technology improves, however, it will likely become more difficult for human beings to detect computer-generated content, increasing the need for content-sharing platforms to build automated detection into their systems (U.S. Government Accountability Office, 2024).

Additionally, social media platforms can require consumers to sign user agreements that can be enforced against individuals who create abusive deepfakes (Mulvihill, 2024). Because various social media platforms have facilitated the spread of unauthorized sexual deepfakes, and because it is very difficult for individual users to protect themselves from being victimized by deepfake creators, these companies should be expected to take responsibility for limiting this type of harm (Thompson, 2024).

Individuals and organizations working to address sexual violence must keep up with the new methods being used to cause sexual harm. With the rise in accessibility of AI technology, the use of deepfakes to perpetrate sexual abuse will likely become more prevalent. The time to advocate for legislatures, technology companies, and social media platforms to protect the public from these new threats is now.

References

Chesney, B., & Citron, D. (2019). Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security. California Law Review, 107(6), 1753-1820. https://www.jstor.org/stable/26891938

Harris, D. (2019). Deepfakes: False Pornography is Here and the Law Cannot Protect You. Duke Law & Technology Review, 17(1), 99-128. https://scholarship.law.duke.edu/cgi/viewcontent.cgi?article=1333&context=dltr

Haskins, C. (2024). Florida Middle Schoolers Arrested for Allegedly Creating Deepfake Nudes of Classmates. Wired. https://www.wired.com/story/florida-teens-arrested-deepfake-nudes-classmates/

Mulvihill, G. (2024). What can be done to stop deepfakes, like the ones that victimized Taylor Swift. WGRZ. https://www.wgrz.com/article/tech/ai-deepfakes/71-593a9781-8929-42d4-b59f-c03b5c7f7894

Smith, H., & Mansted, K. (2020). Weaponised deep fakes. In Weaponised deep fakes: National security and democracy (pp. 11-14). Australian Strategic Policy Institute. http://www.jstor.org/stable/resrep25129.7

Smith, H., & Mansted, K. (2020). What’s a deep fake? In Weaponised deep fakes: National security and democracy (pp. 5-10). Australian Strategic Policy Institute. http://www.jstor.org/stable/resrep25129.6

Thompson, P. (2024). Deepfake porn is a huge problem – here are some of the tools that could help stop it. Business Insider. https://www.businessinsider.com/deepfake-porn-huge-problem-ai-tools-help-stop-2024-2

U.S. Government Accountability Office. (2024). Science & Tech Spotlight: Combating Deepfakes (GAO-24-107292). GAO Science, Technology Assessment, and Analytics. https://www.gao.gov/products/gao-24-107292
