Abstract
The integration of algorithmic decisionmaking and artificial intelligence (“AI”) into facial recognition technology poses unprecedented risks to privacy and individual autonomy rights, particularly in urban settings. The murder of Brian Thompson, CEO of UnitedHealthcare, in New York City on December 4, 2024, provides a timely case study for examining the deployment of facial recognition systems by the New York Police Department and other law enforcement agencies to identify the suspect. New York City operates some of the most sophisticated surveillance architecture in the nation, put in place following the terrorist attacks of September 11, 2001. This Article explores the use of facial recognition systems and facial recognition AI in the investigation of Thompson’s murder. Ultimately, because of its limitations, facial recognition AI failed to assist law enforcement in identifying the suspect, Luigi Mangione, who was apprehended less than one week later through conventional, non-AI identification: a customer at a McDonald’s restaurant in Altoona, Pennsylvania, alerted a McDonald’s employee, who then reported the suspect to the local police. The benefits of facial recognition AI are uncertain, and its efficacy remains largely unproven and untested. The technology is also largely unregulated and poses significant constitutional concerns. Specifically, this Article contends that the compelled deanonymization of individuals in urban settings results in diminished constitutional protections. It concludes that the European Union’s approach to AI oversight offers an important comparative perspective on regulatory approaches to facial recognition AI.
Document Type
Article
Publication Date
6-2025
Publication Information
103 North Carolina Law Review 1535-1572 (2025)
Repository Citation
Hu, Margaret, "Facial Recognition AI" (2025). Faculty Publications. 2334.
https://scholarship.law.wm.edu/facpubs/2334