Deepfakes Legal

Deepfake Laws in the United States

By: Lawrence G. Walters, Esq. & Bobby Desmond, Esq.
February 20, 2023

Deepfakes and other online content generated by artificial intelligence have recently moved from a theoretical interest among specialists in the tech world to a trending topic of discussion among almost all Americans. With websites like ChatGPT, mobile applications like Lensa, and social media sites like Twitter, the average American is encountering deepfakes and other A.I.-generated content firsthand for the first time. Unfortunately, some individuals have experienced the harm that malicious deepfakes can cause. Traditional laws governing intellectual property and revenge porn often do not specifically address artificially generated images, including deepfakes. As a result, calls for legislation in this area have produced numerous laws at the state and federal level.

In this article, we look at existing federal and state laws which authorize research into deepfake detection, criminalize certain deepfakes, and create private rights of action for deepfakes that cause harm. We also highlight the potential legal developments that deepfake creators and distributors should anticipate in the future. Finally, we discuss how existing laws and policies are already being enforced, and may in the future be enforced, against harmful deepfakes.

Deepfakes Legal: Federal Laws

Deepfakes are not regulated at the federal level, but Congress has passed three laws requiring the federal government to study and report on the weaponization of deepfakes by foreign entities that seek to spread disinformation and engage in other malign activities. Most notably, Congress included provisions related to deepfakes in the annual defense spending bills in 2020 and 2021.

Deepfakes Legal: National Defense Authorization Act

First, the 2020 NDAA required the publication of an unclassified report on the potential national security impacts of deepfakes. Among other things, the report must include (1) an assessment of the deepfake capabilities of foreign governments, (2) an annex of foreign entities that support or facilitate research, development, or dissemination of deepfakes, (3) an assessment of actual or potential technologies to counter, deter, detect, and attribute the use of deepfakes by foreign adversaries, and (4) a list of additional laws, resources, and personnel necessary to address the threat of deepfakes. The 2021 NDAA extended this reporting requirement for five years. In 2021, the Department of Homeland Security released the first of these reports, titled Increasing Threats of Deepfake Identities.

Second, the 2020 NDAA requires the Director of National Intelligence (“DNI”) to notify Congress each time the DNI determines that a foreign entity is deploying deepfakes for the purpose of election interference.

Third, the 2020 NDAA authorized the DNI to award up to $5 million to one or more winners in a competition to stimulate the research, development, and commercialization of deepfake detection technologies.

Identifying Outputs of Generative Adversarial Networks Act

Separately, the Identifying Outputs of Generative Adversarial Networks Act was enacted in late 2020. Under this federal law, the National Science Foundation is required to support research on deepfakes, including the development of content authentication and deepfake detection tools, social and behavioral research on human engagement with deepfakes, and research on the best practices for educating the public on deepfakes. The National Institute of Standards and Technology (“NIST”) is required to support research on the development of deepfake technologies, and to conduct outreach and issue a report on the feasibility of a public-private partnership to adopt voluntary standards for deepfake technologies. NIST hosts the annual Open Media Forensics Challenge, which facilitates the development of deepfake detection systems.

Deepfakes Legal: State Laws

At least seven states have passed laws aimed at curtailing the negative impacts of deepfakes. Deepfake laws at the state level regulate the use of deepfakes in four contexts: (1) election interference, (2) nonconsensual sexual deepfakes, (3) nonconsensual use of a celebrity’s likeness, and/or (4) sexual deepfakes depicting identifiable minors. Other states have considered but failed to pass similar laws, including Hawaii, Illinois, Maine, Massachusetts, and New Jersey.

Use of state laws related to deepfakes against foreign individuals and entities may be ineffective due to jurisdictional issues, while use of these laws against American citizens may face First Amendment challenges.


Virginia

Virginia criminalized the malicious dissemination or sale of nonconsensual sexual deepfakes with the intent to coerce, harass, or intimidate. Violation of the law is a Class 1 misdemeanor punishable by up to a year in jail and a fine of up to $2,500.


Texas

Texas criminalized the creation, publication, and distribution of deepfakes within thirty days of an election for the purpose of injuring a candidate or influencing an election. Violation of the law is a Class A misdemeanor punishable by up to a year in the county jail and a fine of up to $4,000.


Georgia

Georgia criminalized nonconsensual sexual deepfakes. The first electronic transmission or posting of a nonconsensual sexual deepfake, when done to harass or cause financial loss to the depicted person, is a misdemeanor of a high and aggravated nature. Subsequent violations and violations made to a website that advertises or promotes sexually explicit conduct are felonies punishable by imprisonment of up to five years and a fine of up to $100,000.


California

California has criminal and civil laws addressing the use of deepfakes for election interference, as well as civil laws addressing nonconsensual sexual deepfakes.

California election law prohibits the malicious production, distribution, publication, or broadcast of campaign materials that contain deepfakes, unless the deepfake includes a sufficient disclaimer that “This picture is not an accurate representation of fact.” Any registered voter may seek a temporary restraining order or injunction against such deepfakes. Candidates depicted in such deepfakes may bring a civil action seeking damages equal to (1) the cost of producing, distributing, publishing, or broadcasting the campaign materials, and (2) reasonable attorney’s fees. However, this provision does not apply to FCC-licensed stations, nor to the publisher or employee of a newspaper, magazine, or other periodical published on a regular basis.

Further, California criminalized the malicious distribution of deepfakes of a candidate within 60 days of an election with the intent to injure the candidate or deceive voters, unless the material includes a sufficient disclaimer that “This image, video, and/or audio has been manipulated.”

A candidate depicted in a deepfake within 60 days of an election may seek an injunction or other equitable relief, general or special damages, and reasonable attorney’s fees. However, this provision does not apply to (1) radio and television stations that broadcast the deepfake as part of a bona fide newscast where the broadcast clearly acknowledges the questionable nature of the deepfake’s authenticity, (2) radio and television stations that are paid to broadcast the deepfake, (3) a website or regularly published newspaper, magazine, or periodical that routinely carries news of general interest if the publication clearly states the deepfake does not accurately represent the candidate, and (4) cases of satire or parody.

Separately, California created a private right of action for the creation or intentional disclosure of nonconsensual sexual deepfakes. The law explicitly states that including a disclaimer that the material is a deepfake is not a defense to a violation of the law.

Under the law, consent to be depicted in sexual deepfakes requires a written agreement that is rescindable for at least three business days, unless (1) the depicted individual is given at least 72 hours to review the agreement before signing it, or (2) the depicted individual’s attorney, talent agent, or personal manager provides written approval of the agreement.

Plaintiffs may recover (1) economic and noneconomic damages proximately caused by the disclosure, including damages for emotional distress, or (2) statutory damages of (a) between $1,500 and $30,000, or (b) $150,000 in cases of willfulness. Plaintiffs may also recover (1) the defendant’s monetary gain from the creation, development, or disclosure of the deepfake, (2) punitive damages, (3) reasonable attorney’s fees and costs, and (4) any other available relief, including injunctive relief. Claims must be brought within three years of the date the creation, development, or disclosure was discovered or should have been discovered with reasonable diligence.

New York

New York created a private right of action for disclosing, disseminating, or publishing nonconsensual sexual deepfakes. Like California, inclusion of a disclaimer is not a sufficient defense, and consent requires a written agreement that is rescindable for at least three business days, unless (1) the depicted individual is given at least 72 hours to review the agreement before signing it, or (2) the depicted individual’s attorney, talent agent, or personal manager provides written approval of the agreement. Unlike the California law, plaintiffs in New York may seek injunctive relief, punitive damages, compensatory damages, and reasonable court costs and attorney’s fees (1) within three years of dissemination or publication, or (2) within one year of the date the dissemination or publication was discovered or reasonably should have been discovered.

New York also extended its right of publicity to prohibit the nonconsensual use of a deepfake depicting a celebrity who was domiciled in the state at the time of death in (1) an advertisement, or (2) a scripted audiovisual work as a fictional character or a live performance of a musical work without a conspicuous disclaimer in the credits and in any advertisements for the work. Plaintiffs may seek statutory damages of $2,000 or compensatory damages, including any profits attributable to the use, plus punitive damages. This right is a property right freely transferable or descendible by contract, license, gift, trust, or other testamentary instrument.


Florida

In Florida, it is a third-degree felony to willfully and maliciously sell, give, provide, lend, mail, deliver, transfer, transmit, publish, distribute, circulate, disseminate, present, exhibit, send, post, share, or advertise nonconsensual sexual deepfakes, even if a disclaimer is included. A depicted individual may also bring a private right of action for injunctive relief, monetary damages (calculated as actual damages or statutory damages of $10,000), and reasonable attorney’s fees.

Child Sexual Abuse Materials

Federal law has long prohibited digitized depictions of actual minors in a sexual context. State laws may also be applicable to sexual deepfakes of minors. For example, Florida and Maryland amended their laws to explicitly include sexual deepfakes of identifiable children.

Deepfakes Legal: Future Legal Developments

More states are likely to follow the lead of the states above and pass similar laws regulating deepfakes of political candidates, celebrities, and individuals depicted in nonconsensual sexual imagery. However, digital rights groups such as the Electronic Frontier Foundation have warned that such governmental interference is likely to capture material that is not harmful and thereby chill speech protected by the First Amendment.

At the federal level, similar laws could also be passed to set a national standard for deepfakes that violate election integrity or privacy rights. Some federal bills have gone even further, attempting to tackle the negative effects of deepfakes with measures not yet imposed by any state. For example, Rep. Yvette Clarke (D-NY) has twice introduced the Defending Each and Every Person from False Appearances by Keeping Exploitation Subject to Accountability Act, which would criminalize deepfakes that lack an adequate watermark or disclosure. It is likely the bill will be reintroduced this year.

The United States trails other countries on deepfake legal issues. For example, China adopted rules requiring deepfakes to have the depicted individual’s consent and to bear a digital signature. Further, deepfake creators in China must offer ways to “refute rumors” related to their creations.

Deepfakes Legal: Existing Enforcement Mechanisms

Perhaps the most telling future development to watch is whether the recently passed state laws prove an effective enforcement mechanism. So far, there have been no notable cases of enforcement of these new state laws against deepfake creators and distributors. For example, the mayor of Houston unsuccessfully called for a criminal investigation of his opponent for allegedly violating Texas’s deepfake election interference law.

Instead, creation and enforcement of anti-deepfake policy has largely been left to Big Tech. For example, many artificial intelligence companies (including OpenAI, maker of the immensely popular ChatGPT) prohibit use of their content-generation software for sexual content, and many social media sites (including Facebook and Twitter) have policies against misleading deepfakes.

Of course, new laws are not necessarily required to combat the negative effects of deepfakes. The applicability of existing copyright laws to deepfake technology is a rapidly developing area of case law. Because artificial intelligence systems rely on access to millions of publicly available works to formulate their answers and artworks, the authors and artists of those underlying works have begun to file claims that the A.I. systems violate their copyrights by creating derivative works based upon their protected creations. The results of these infringement claims are likely to have a major impact on the A.I. industry broadly and the deepfake industry more specifically.

On the other hand, federal trademark law may be useful against distributors of unlicensed deepfake concerts and fictional works depicting celebrities with registered or common law rights in their name or image, or corporations that own such rights in famous characters.

Similarly, victims of deepfakes may attempt to use state defamation laws to their advantage.

Admittedly, existing law has proven inapplicable in some instances. For example, the Copyright Office currently holds the position that artworks created by artificial intelligence are not registrable, because such works are not works of human authorship. As such, content creators that rely on A.I. to create their works cannot protect those works with the full benefit of copyright law. Moreover, deepfake creators are unable to utilize the DMCA notice and takedown procedure to seek removal of their computer-generated content from pirate sites and third party social media accounts. It is possible that federal copyright law could be updated to take into consideration the advancement of new deepfake technologies.

Walters Law Group has advocated for the rights of adult industry performers, producers, and publishers for over 30 years. Nothing in this article is intended as legal advice. Walters Law Group can be reached through its website or on social media @walterslawgroup.

Are deepfakes illegal?

Some states have passed laws prohibiting certain types of deepfakes. There is no current federal law prohibiting deepfakes. Some deepfake material may violate copyrights, privacy rights, and publicity rights.

Do deepfakes implicate national security?

Yes. Deepfakes can be used to embarrass or blackmail politicians or government officials with access to classified information. Federal law requires the Department of Homeland Security to submit an annual report on deepfakes and similar materials.

What are the laws around deepfakes?

While federal law does not prohibit deepfakes, the Department of Homeland Security must submit an annual report on deepfakes and similar materials. At least seven states prohibit some types of deepfakes. Other existing laws may allow victims of deepfakes to sue. The First Amendment imposes some limits on the government’s authority to prohibit deepfake material.