Recent years have seen an explosion of scholarship on “personalized law.” Commentators foresee a world in which regulators armed with big data and machine learning techniques determine the optimal legal rule for every regulated party, then instantaneously disseminate their decisions via smartphones and other “smart” devices. They envision a legal utopia in which every fact pattern is assigned society’s preferred legal treatment in real time.
But regulation is a dynamic process; regulated parties react to law. They change their behavior to pursue their preferred outcomes—which often diverge from society's—and they will continue to do so under personalized law: They will provide regulators with incomplete or inaccurate information. They will attempt to manipulate the algorithms underlying personalized laws by taking actions intended to disguise their true characteristics. Personalized law can also (unintentionally) encourage regulated parties to act in socially undesirable ways, a phenomenon known as moral hazard.
Moreover, regulators seeking to combat these dynamics will face significant constraints. Regulators will have imperfect information, both because of privacy concerns and because regulated parties and intermediaries will muddle regulators’ data. They may lack the authority or the political will to respond to regulated parties’ behavior. The transparency requirements of a democratic society may hinder their ability to thwart gamesmanship. Concerns about unintended consequences may further lower regulators’ willingness to personalize law.
Taken together, these dynamics will limit personalized law’s ability to optimally match facts to legal outcomes. Personalized law may be a step forward, but it will not produce the utopian outcomes that some envision.