Abstract
Section 230 of the Communications Decency Act generally immunizes online platforms such as Facebook, YouTube, Amazon, and Uber from liability for third-party user content (for example, posts, comments, and videos) and for moderation of that content. This Article addresses an important issue overlooked by both defenders and critics of Section 230: the implications of the law and of proposed reforms for Black communities in particular. By relieving tech platforms of most legal liability for third-party content, Section 230 facilitates Black social activism, entrepreneurship, and artistic creativity. Section 230 also relieves platforms of most legal liability for content moderation, which broadens platforms’ freedom to remove or downrank unlawful activity, as well as an array of “lawful but awful” content that government cannot constitutionally restrict, such as hate speech, white supremacist organizing, medical disinformation, and political disinformation. However, overly broad interpretations of Section 230 also incentivize platforms to allow unlawful activity directed at Black communities (such as harassment, white supremacist violence, voter intimidation, and housing and employment discrimination) and foreclose legal recourse when platforms erroneously downrank Black content. These insights supply factors that can help policymakers assess whether proposed Section 230 reforms, such as notice-and-takedown requirements, content-neutrality mandates, and carve-outs from immunity for civil rights laws, algorithmic recommendations, or advertisements, will benefit or harm Black communities.