Deepfakes, Privacy, and Freedom of Speech

Inauthentic media depictions can harm a person’s privacy and reputation and can pose risks to broader society as well. “Deepfake” technology enables the creation of one type of inauthentic media using “deep machine learning” techniques, allowing a computer to quickly swap or simulate faces, voices, and movements.

Professor Christa Laser argues that the notice and takedown procedures available in copyright law can be expanded to protect persons from deepfakes. Professor Eric Goldman thinks that such a reform would inhibit the dissemination of truthful information.

Christa Laser is an Assistant Professor at Cleveland-Marshall College of Law, where her work and teaching focus on intellectual property. She comes to Cleveland-Marshall after nearly a decade of practice experience as an intellectual property litigator at the law firms WilmerHale and Kirkland & Ellis LLP.

Eric Goldman is Associate Dean for Research, Professor of Law, Co-Director of the High Tech Law Institute, and Supervisor of the Privacy Law Certificate at Santa Clara University School of Law. His research and teaching focus on Internet law, and he blogs on that topic at the Technology & Marketing Law Blog.


Professor Laser… your witness:

1. The Problem with Deepfakes

“Deepfake” technology can be used to depict real people saying and doing things that never actually occurred. Deepfakes are often used for entertainment, with examples such as DeepTomCruise (humorous videos of what appear to be Tom Cruise) or the ReFace app, which allows a user to place his or her face into a celebrity music video or movie clip in seconds.[1] However, deepfake technology is also used in harmful ways. For example, approximately 90% to 95% of deepfakes online are nonconsensual pornography, in which a victim’s face is placed onto pornographic content, causing potential psychological and reputational harm.[2]

Deepfakes also pose a risk to corporate and national security. According to an FBI Private Industry Notification issued in March 2021, deepfakes are already being used by foreign actors in social influence campaigns and could be used for sophisticated impersonation of corporate employees in financial fraud.[3] For example, deepfakes could be used to create videos of company leadership being injured or engaging in offensive conduct in order to force down a stock’s price, or as part of spearphishing attacks (attempts to induce a targeted recipient to share secret information or transfer money to a malicious actor). Similarly, deepfakes could be used to incite violence by falsely showing government or military officials engaged in offensive conduct.[4] Furthermore, as deepfakes become more widely known, they also enable bad actors to claim that true depictions are fictional, what Professors Chesney and Citron term the “liar’s dividend.”[5]

2. Existing Laws Are Helpful but Inadequate

Recent federal legislation has called for research into the potential harms from deepfakes and their risk to national security; for example, the National Defense Authorization Act of 2021 directs the Pentagon to research potential harms caused by deepfakes depicting the military.[6] And some states have passed laws to begin to address the problem, such as California’s AB 602, which provides a private right of action against creators of nonconsensual deepfake pornography, and AB 730, which outlaws manipulated video of politicians within 60 days of an election.[7] But it remains difficult to stop the spread of harmful deepfakes online.

Victims of deepfake technology—the individuals whose likenesses appear in the videos without their consent—often have limited practical recourse under existing federal or state law to have the videos quickly removed from online platforms before harm occurs. Victims of nonconsensual deepfake pornography, for example, could file a state lawsuit against the user who posted the content for violations of the right of publicity or right of privacy, or for intentional infliction of emotional distress, extortion, harassment, or other state claims.[8] But videos are frequently posted anonymously and quickly replicate across a platform as they are shared by other users. Even if the perpetrator could be identified, establishing jurisdiction in the correct forum could be difficult, particularly if the user who posted the content is located abroad. Moreover, many enforcement actions against nonconsensual pornography are never brought under existing laws because victims might be unable to afford litigation, fear further victimization or publicity, or not want to relive traumatic memories or face attacks on their reputation in litigation.[9] Simply put, the burden on victims under existing law is too great to allow for timely and effective removal of the vast majority of deepfake content, and legal process may move too slowly to prevent harm before a false depiction spreads. Some tech companies attempt to self-regulate by removing misinformation and nonconsensual pornography, but without the incentives of law, small and niche platforms, such as user-posted pornography sites, continue to host harmful content.

3. An Alternative Model to Address the Problem

Through notice and takedown procedures, the federal Digital Millennium Copyright Act (DMCA) gives online service providers and platforms incentives to remove public access to content when they receive notice that the content infringes a copyright. Once an internet provider receives written notice of a copyright infringement, it avoids further liability by expeditiously removing the content. Posters can contest the removal by filing a counternotice stating that their content does not violate the law and consenting to service of process. The complainant must then file a lawsuit for copyright infringement to keep the content down pending litigation. Many posters do not file a timely counternotice, in which case the content remains down. One core benefit of the DMCA’s notice and takedown procedures for copyright holders has been an expanded ability to protect copyrighted works against online infringement that once evaded enforcement due to issues such as anonymous posting, foreign posters, or repeated or widespread infringement that was too costly and difficult to resolve through federal lawsuits.
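To make that sequence concrete, the decision flow can be sketched in a few lines of code. This is a minimal, illustrative sketch only; the function names and boolean inputs are hypothetical simplifications, and it omits the statute’s actual formalities and deadlines.

```python
from enum import Enum, auto


class ContentStatus(Enum):
    """Whether the challenged content is publicly accessible on the platform."""
    UP = auto()
    DOWN = auto()


def respond_to_takedown_notice(received_written_notice: bool) -> ContentStatus:
    # A provider that receives a written infringement notice preserves its
    # safe harbor by expeditiously removing the identified content.
    return ContentStatus.DOWN if received_written_notice else ContentStatus.UP


def respond_to_counternotice(counternotice_filed: bool,
                             complainant_filed_suit: bool) -> ContentStatus:
    # No timely counternotice: the content simply stays down.
    if not counternotice_filed:
        return ContentStatus.DOWN
    # A counternotice (asserting lawfulness and consenting to service of
    # process) shifts the burden: the complainant must sue to keep the
    # content down pending litigation; otherwise it is restored.
    return ContentStatus.DOWN if complainant_filed_suit else ContentStatus.UP


# Example: notice received, no counternotice filed -> content remains down.
status = respond_to_takedown_notice(received_written_notice=True)
if status is ContentStatus.DOWN:
    status = respond_to_counternotice(counternotice_filed=False,
                                      complainant_filed_suit=False)
print(status)  # ContentStatus.DOWN
```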

I suggest that the DMCA’s notice and takedown provisions could be expanded or used as a model to provide similar remedies to victims whose rights of privacy and publicity are violated on the Internet. Victims of deepfakes face enforcement obstacles under current law similar to those faced by copyright holders, such as the anonymity of defendants online and the easy replication of unlawful content. The DMCA’s notice and takedown process could be expanded to incentivize service providers to remove, upon request of the person depicted, nonconsensual digitally altered content that violates that person’s right of publicity or privacy. Similar counternotice procedures could be provided to enable posters to contest abusive or speech-restrictive takedown requests.

The DMCA’s notice and takedown process has been criticized for abuse, such as copyright holders filing false or harassing takedown notices or ignoring fair use defenses.[10] But even if the DMCA process could be improved, it provides a more efficient way to address the propagation of deepfakes on online platforms than the current model of reliance on lawsuits, and it is more enforceable than platform self-regulation. To limit abuses, such as politicians using takedown procedures to stifle opposing political speech, the procedures could be limited to altered depictions of private persons or to sexually explicit depictions, where the speech and creativity risks to the public from takedown are less severe. Presumably, politicians and others with a public platform would be able to contest purported non-pornographic deepfakes through their own platforms. Alternatively, non-pornographic depictions, especially depictions of public figures, that are challenged as deepfakes could be labeled with a “challenged deepfake” or similar notice rather than removed.

A legitimate question arises whether modifications to Section 230 of the Communications Decency Act would be required. Section 230 generally protects internet platforms against liability for user-posted content. Nonetheless, I think that a deepfake removal statute could be drafted to limit conflict with Section 230. For example, instead of imposing liability on platforms for the content that users post, the statute could impose flat statutory penalties on platforms that fail to implement removal or labeling procedures.


Professor Goldman… your witness:

I think that Professor Laser’s proposals raise significant Constitutional concerns and may conflict with Section 230 and other legal doctrines. Those topics deserve greater exploration, but I shall limit my comments to the policy implications of her proposals. At bottom, I do not believe that notice and takedown procedures provide a helpful solution to inauthentic media depictions.

1. Inauthentic Media Does Create Significant Society-Wide Challenges

I share Professor Laser’s concerns about the problems that inauthentic media depictions[11] will create, whether they are pornographic or not. We historically have relied upon photos, recorded audio, and recorded videos as highly credible, and often conclusive, evidence of the truth. The widespread proliferation of convincingly faked media would upend that assumption, potentially making it impossible to trust any media depictions. The law cannot solve that problem alone. Corrective responses will need to come from many institutions. For example, I remain hopeful that technologists eventually will develop reliable ways to authenticate media.[12]

2. Internet Services Cannot Determine Media Authenticity

Internet services are often in a poor position to “adjudicate” the authenticity of media, especially when they lack critical context about the content. When technology makes it hard to authenticate depictions, Internet services will struggle—like everyone else—to sort between legitimate and fake media.

3. A Notice-and-Takedown Scheme Can Be Misused to Suppress Truthful Depictions

Any liability scheme that penalizes leave-up decisions and “rewards” removals will inevitably cause Internet services to overremove.[13] Complainers can and will weaponize this tendency by falsely asserting that unwanted media depictions are inauthentic.[14] This is the “liar’s dividend” mentioned by Professor Laser. When Internet services can’t tell what’s authentic or not, a takedown scheme forces the services to treat every complaint as if it’s true, which essentially gives carte blanche to anyone who wants to veto truthful media depictions of them.

Professor Jessica Silbey and I have documented the insatiable demand for using legal tools to obtain such veto rights.[15] People already misuse copyright law to scrub truthful negative information about them based on their privacy and reputational concerns. Professor Laser’s proposed notice-and-takedown scheme would turbocharge this phenomenon, leading to widespread “memory holing” of true but embarrassing media depictions so that people can escape accountability.

To mitigate this risk, Professor Laser would limit the takedown scheme only to “depictions of private persons or sexually explicit depictions.” I don’t think that fixes the problem. Even with respect to sexually explicit depictions, accountability sometimes requires “seeing” the evidence. This is especially true for celebrities and politicians, who can undermine allegations against them using their privilege and status—unless they are publicly confronted with irrefutable evidence. Former Representative Anthony Weiner’s recidivist sexting is one example where a privileged individual could have escaped accountability without public availability of the evidence. Knowing that, he would have weaponized a notice-and-takedown scheme (if it had been available) to hide his tracks. Furthermore, there are many other ways to redress nonconsensual pornography dissemination without a new takedown right, including sui generis laws adopted by virtually every state,[16] many other laws,[17] and voluntary removals by Internet services.[18]

4. The Role of Non-Removal Remedies

Professor Laser raises the possibility that Internet services could use alternative remedies, such as warning labels, rather than outright removal. I applaud this thinking. Greater use of non-removal remedies could help Internet services strike a better balance between conflicting interests like free expression and privacy/reputational concerns.[19] Non-removal remedies are especially useful in circumstances where the Internet service is not certain whether a rule violation occurred,[20] as will be the case for many inauthentic media depictions. I expect many Internet services will voluntarily adopt such responses to inauthentic media over time.

However, mandating non-removal remedies does not really eliminate the demand for falsified complaints. Sowing doubt about media authenticity often will be enough to help people escape accountability for truthful depictions.


[1] See, e.g., Professor Laser as Marilyn Monroe.

[2] Estimates by data firm Sensity AI may be found here.

[3] Federal Bureau of Investigation, Cyber Division, Private Industry Notification, PIN 210310-001 (Mar. 10, 2021) (available here).

[4] Hannah Smith & Katherine Mansted, Weaponised deep fakes: National security and democracy (Apr. 1, 2020) (available here).

[5] Robert Chesney & Danielle Keats Citron, Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security, 107 Calif. L. Rev. 1753 (2019) (available here); see also Janosch Delcker, Welcome to the Age of Uncertainty, POLITICO (December 17, 2019) (available here).

[6] Shannon Vavra, Deepfake Laws Emerge as Harassment, Security Threats Come into Focus, CyberScoop (January 11, 2021) (available here).

[7] Kari Paul, California Makes ‘Deepfake’ Videos Illegal, but Law May Be Hard to Enforce, The Guardian (October 7, 2019) (available here).

[8] Eric Goldman & Angie Jin, Judicial Resolution of Nonconsensual Pornography Dissemination Cases, 14 I/S 283, 297 (2018) (available here).

[9] Id. at 290 (“[T]he vast majority of disseminations do not result in enforcement actions.”).

[10] See generally Jennifer Urban, Joe Karaganis & Brianna Schofield, Notice and Takedown: Online Service Provider and Rightsholder Accounts of Everyday Practice, 64 J. Copyright Soc’y 371 (2017) (available here).

[11] I use the term “inauthentic media” instead of “deepfakes” because the latter term does not fully capture the universe of concerns. I assume that it will be difficult or impossible to tell that a media depiction is inauthentic. If consumers can easily discern the inauthenticity, then the depiction is more likely to constitute parody, satire, commentary, or other constitutionally protected speech.

[12] Many technologists are working hard to address inauthentic media. See, e.g., Deepfake Detection Challenge Results: An Open Initiative to Advance AI, Facebook AI (June 12, 2020) (available here); The Content Authenticity Initiative; Project Origin; and The Coalition for Content Provenance and Authenticity.

[13] Daphne Keller, Empirical Evidence of “Over-Removal” by Internet Companies Under Intermediary Liability Laws, Stanford CIS Blog, Oct. 12, 2015, (available here).

[14] Because complainers have incentives to lie, Congress created a new cause of action for the submission of bogus copyright takedown notices. 17 U.S.C. § 512(f). This provision failed. E.g., Eric Goldman, How Have Section 512(f) Cases Fared Since 2017? (Spoiler: Not Well), Tech. & Mktg. L. Blog (available here).

[15] Eric Goldman & Jessica Silbey, Copyright’s Memory Hole, 2019 BYU L. Rev. 929.

[16] See the Cyber Civil Rights Initiative.

[17] Eric Goldman & Angie Jin, Judicial Resolution of Nonconsensual Pornography Dissemination Cases, 14 I/S 283 (2018).

[18] E.g., Google.

[19] Eric Goldman, Content Moderation Remedies, Mich. Tech. L. Rev. (forthcoming) (available here).

[20] Id.