This Note argues that Title VII, as courts currently apply it, does not adequately protect employees from algorithmic discrimination when companies use machine learning to monitor their employees' computers. Part I introduces how employee monitoring tools work, how employers use machine learning in their monitoring programs, and how these programs can discriminate. Because scholars have already done significant work in this area, this Note does not attempt to replicate that research but instead provides an overview of how such discrimination can occur. Parts II and III then analyze how an employee might prove a Title VII claim. Part II analyzes an employee's claim under the disparate treatment theory of discrimination and concludes that an employee is unlikely to succeed under that theory. Part III analyzes a potential claim under the disparate impact theory, examining each of the three prongs of the disparate impact test. This Note ultimately concludes that, although disparate impact appears better suited to address algorithmic discrimination in employee monitoring, an employee is still unlikely to succeed under this theory. Part IV discusses potential ways to address algorithmic discrimination in employee monitoring and concludes that a negligent use of technology standard would best serve the interests of both employers and employees. This abstract has been adapted from the author's introduction.
"A Title VII Dead End? Machine Learning and Employee Monitoring,"
William & Mary Law Review Online: Vol. 63, Article 7.
Available at: https://scholarship.law.wm.edu/wmlronline/vol63/iss1/7