William & Mary Business Law Review

Authors

Jackson Smith

Abstract

As “interactive computer services” (social media sites) have expanded over the past decade, so too has the prevalence of “social bots,” software programs that mimic human behavior online. Social bots’ capacity to exponentially amplify often-harmful content has led to calls for greater accountability from social media companies in how they manage bot presence on their sites. In response, many social media companies and private researchers have developed bot-detection methodologies to better govern social bot activity. At the same time, the prevalence of harmful content on social media sites has prompted calls to reform Section 230 of the Communications Decency Act of 1996, the law that largely immunizes social media sites from liability for third-party content on their platforms. Such reform proposals largely entail making Section 230 immunity contingent on social media companies following new requirements when moderating content. Social bots, however, have been left out of these reform conversations. This Note suggests that including specific provisions regulating social bots within broader Section 230 reform will help remedy both outdated Section 230 provisions and the effects of malicious social bots. Fusing characteristics from several Section 230 reform proposals with existing bot-governance technology will help establish a legal foundation for new social bot management requirements imposed on social media companies. Two suggested requirements are: (1) interactive computer services must maintain a monitoring and classification system that helps users determine the “bot-ness” of social media accounts; and (2) interactive computer services must provide an accessible medium for users to view the data that such monitoring and classification systems produce. These requirements will help protect the validity of organic online exchanges and reduce the potential power of deceitful influence campaigns.
