William & Mary Bill of Rights Journal

Abstract

We consider this issue here and suggest that the current calls for a negative right to be free from AI could very well transform over time into positive claims that demand the use of algorithmic tools by government officials. In Part I, we begin by sketching the current landscape surrounding the adoption of AI by government. That landscape is characterized by strong activist and scholarly voices expressing a pronounced aversion to the use of digital algorithms—and taking a decidedly negative rights tone. In Part II, we show that, although aversion to complex technology might be understandable, that aversion is neither inevitable nor impossible to overcome. We offer several examples of advanced technologies and analytic techniques that in the past have emerged in the face of significant criticism, but which have come to be widely accepted. In fact, there now exists an affirmative expectation—even at times a legal one—that government should use these technologies when making consequential decisions affecting people’s interests.

Given the possibility of legal and, more broadly, public insistence on the use of at least certain kinds of advanced technologies, we put forward in Part III a set of factors that may help lead eventually to widespread acceptance of algorithmic technologies similar to the acceptance of the technologies discussed in Part II. We suggest that a path forward exists that might build a general acceptance of the use of algorithmic tools by governmental entities, a path that would represent a shift from present-day calls for negative-rights protections against AI to eventual positive-rights expectations that good government practices routinely involve the use of AI.

This abstract has been taken from the authors' introduction.

Comments

Written for the symposium Algorithms and the Bill of Rights (2022) at William & Mary Law School.
