William & Mary Bill of Rights Journal


Anna Pesetski


Technology has advanced rapidly in recent years, greatly benefiting society. One such benefit is quick and easy access to information through news and social media. A growing concern, however, is that manipulated media, known as “deepfakes,” are being released and passed off as truth. These videos are crafted with technology that allows the creator to carefully alter details of the video’s subject so that the subject appears to do or say things he or she never did. Deepfakes often depict political candidates or leaders and have the potential to influence voter choice, thereby altering the outcome of elections. Deepfakes have already influenced the politics of other countries, and lawmakers expressed legitimate fears about how deepfakes would affect the 2020 United States presidential election.

The current unprotected categories of speech developed during a more primitive technological age. Efforts have been made to combat deepfakes, but they have fallen short of effectively addressing the problem. It may be time for the Supreme Court to reevaluate First Amendment protections in light of the current digital age and consider the benefits of recognizing a new unprotected content category of speech for deepfakes. The dangers deepfakes present far outweigh concerns about the potential chilling effects of restrictions on speech. Even though the Court has rejected arguments for new categories of unprotected speech in recent years, deepfakes should ultimately constitute a new content category because of the dangers they pose to the election process and political systems, and because the “marketplace of ideas” fails to combat their falsity.