William & Mary Bill of Rights Journal
Abstract
The rapid convergence of speech and technology on social media platforms means that, either now or soon, more expressive activity will likely be regulated by Artificial Intelligence (AI) than by any legislature, regulator, or other government entity. Mark Zuckerberg has repeatedly told Congress and other audiences that AI is the key to resolving Facebook's content moderation challenges, envisioning a moderation regime in which algorithms detect and take down speech infringing Facebook's Community Standards ex ante, that is, prior to its public posting and before it reaches other users. According to Zuckerberg, this would eventually replace Facebook's initial content moderation practices, which relied more on human moderators and user complaints than on automated detection and removal -- a system that can be slow and inconsistently applied, and that often subjects front-line moderators to harrowing emotional harm by exposing them to the worst of the Internet.
This Article argues that private parties' speech-regulation-by-algorithm is itself protected speech. Government efforts to regulate platforms' content moderation will thus necessarily be barred by the First Amendment, even if that moderation is automated via AI. Nor, for the same reason, would users whose speech has been regulated by AI have any better speech-related claims against AI-informed platform moderation decisions than they have had against nonautomated moderation. However, automated front-end filtering of user speech via AI is in serious tension with several core tenets of First Amendment doctrine. Ex ante AI-based content moderation operates in much the same way as a prior restraint; like government prepublication censorship, it gives users no notice of takedowns prior to publication, nor reasons for the takedown decision (at least none that a lay user could understand). Additionally, it is already clear, based on AI's current use and the nature of machine learning processes, that content-based speech regulation via AI is necessarily overinclusive -- a defect that is normally sufficient to render a law or regulation unconstitutional under the First Amendment, and one that will only grow worse as AI moderates more speech.
Given these concerns, this Article advocates extending to platform users whose speech is regulated via AI many of the same robust notice and procedural rights that the First Amendment compels of governments seeking to regulate private speakers.