
Protecting Against Bias in Artificial Intelligence – AI Bias
The issue of bias in artificial intelligence is coming for your business — if it hasn’t already. AI bias has spawned lawsuits and enforcement actions across many areas of the law, including:
Insurance Coverage. As of November 2023, life insurers in Colorado using AI must establish a governance and risk management framework that helps determine whether AI “potentially result[s] in unfair discrimination with respect to race and remediate unfair discrimination, if detected.”[1] Following Colorado’s lead, insurance regulators in New York, California, and Washington, DC, have all issued notices and warnings directing carriers to show that their AI models do not impose any unfair bias.[2] This follows several class actions alleging that insurers used illegally biased AI to deny claims.[3]
Employment Law. In August 2023, three integrated English-language tutoring companies going by the name “iTutorGroup” paid $365,000 to settle an EEOC lawsuit alleging that they had used AI to discriminate against older job applicants.[4] Remarking on the lawsuit, EEOC Chair Charlotte A. Burrows wrote, “Age discrimination is unjust and unlawful. Even when technology automates the discrimination, the employer is still responsible.”[5]
Premises Liability. In December 2023, the Federal Trade Commission sued Rite Aid on the ground that its facial recognition program generated false positives that wrongly subjected customers to increased surveillance or outright bans from its stores. Specifically, the complaint alleged, “Rite Aid’s failures caused and were likely to cause substantial injury to consumers, and especially to Black, Asian, Latino, and women consumers.”[6] Rite Aid settled days after the complaint was filed.[7]
Civil Rights. In February 2023, Detroit police arrested Porcha Woodruff, an African American woman who was eight months pregnant, for robbery and carjacking after facial recognition technology identified her from surveillance video. A month later, prosecutors dropped the case for insufficient evidence – possibly because the woman in the surveillance video was not visibly pregnant. Ms. Woodruff became the sixth person to report being falsely accused of a crime as a result of facial recognition technology; all six were Black.[8]
Why does this happen? Basically, AI works by analyzing incoming data and identifying patterns in that data, then using that analysis to generate content, make predictions, and categorize information.[9] Anyone who has used ChatGPT knows how well this pattern recognition and application can work. And anyone who has used autocorrect knows how easily it can go wrong.
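To make the pattern-recognition point concrete, here is a minimal, purely illustrative Python sketch. Everything in it is invented for this example, and real systems are vastly more sophisticated, but the core dynamic is the same: the model “learns” only the associations present in its training data, and its output replays them.

```python
from collections import Counter

# Invented training sentences, for illustration only.
sentences = [
    "the nurse said she was tired",
    "the nurse said she would help",
    "the engineer said he fixed the bug",
    "the engineer said he wrote the code",
]

# "Learn" by counting which pronoun appears alongside each occupation.
cooccurrence = {"nurse": Counter(), "engineer": Counter()}
for sentence in sentences:
    words = sentence.split()
    for occupation in cooccurrence:
        if occupation in words:
            for pronoun in ("she", "he"):
                if pronoun in words:
                    cooccurrence[occupation][pronoun] += 1

def likely_pronoun(occupation):
    """Return the pronoun most often paired with this occupation in the training data."""
    return cooccurrence[occupation].most_common(1)[0][0]

print(likely_pronoun("nurse"))     # -> "she": the association comes entirely from the data
print(likely_pronoun("engineer"))  # -> "he"
```

Swap in millions of web pages for the four toy sentences and you have, in caricature, how a system trained on human-generated text can absorb and repeat the stereotypes embedded in its inputs.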
One way it can go wrong is through bias. As the U.S. National Institute of Standards and Technology (“NIST”) has explained:
Bias is prevalent in the assumptions about which data should be used, what AI models should be developed, where the AI system should be placed — or if AI is required at all. There are systemic biases at the institutional level that affect how organizations and teams are structured and who controls the decision making processes, and individual and group heuristics and cognitive/perceptual biases throughout the AI lifecycle . . . . Decisions made by end users, downstream decision makers, and policy makers are also impacted by these biases, can reflect limited points of view and lead to biased outcomes. Biases impacting human decision making are usually implicit and unconscious, and therefore unable to be easily controlled or mitigated.[10]
In other words, like every computer system, AI follows the maxim “garbage in, garbage out.” NIST identifies three categories of AI bias inputs:
- Systemic biases, which “result from procedures and practices of particular institutions that operate in ways which result in certain social groups being advantaged or favored and others being disadvantaged or devalued.” These do not necessarily result from any conscious discriminatory intent, “but rather of the majority following existing rules or norms.” This is also called “institutional” or “historical” bias.[11]
- Statistical and computational biases, which arise when the sample provided to the AI does not represent the population as a whole. The AI analyzing the data may not be able to extrapolate beyond the data it is given, leading to errors. As NIST explains, “These biases arise from systematic as opposed to random error and can occur in the absence of prejudice, partiality, or discriminatory intent.”[12] (The short sketch after this list shows how an unrepresentative sample can skew what a model learns.)
- Human biases, which “are often implicit and tend to relate to how an individual or group perceives information . . . to make a decision or fill in missing or unknown information. These biases are omnipresent in the institutional, group, and individual decision making processes across the AI lifecycle, and in the use of AI applications once deployed.”[13]
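As a concrete illustration of the second category, consider this short, hypothetical Python sketch. All of the numbers are invented: two groups repay loans at identical rates, but the recorded history for one group happens to come from an unrepresentative slice of that group, so what a model would learn about that group is systematically wrong without anyone’s prejudice or intent.

```python
import random
random.seed(0)

# Hypothetical numbers, invented for illustration: both groups truly repay 80% of loans.
TRUE_RATE = 0.80

def draw(n, rate):
    """Simulate n loan outcomes, each repaid with probability `rate`."""
    return [random.random() < rate for _ in range(n)]

# Group A's recorded history reflects the group as a whole. Group B's recorded history
# happens to come almost entirely from one lending channel whose customers repay less
# often, so the sample is systematically unrepresentative of group B overall.
training_data = {
    "A": draw(5_000, TRUE_RATE),  # representative sample
    "B": draw(5_000, 0.55),       # systematically skewed sample
}

for group, outcomes in training_data.items():
    learned_rate = sum(outcomes) / len(outcomes)
    print(f"group {group}: true repayment rate {TRUE_RATE:.2f}, "
          f"rate a model would learn {learned_rate:.2f}")
```

The error here is systematic, not random: collecting more data from the same skewed channel would not fix it.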
Essentially, AI can recognize discriminatory patterns, even when humans may not. Having recognized a discriminatory pattern, AI will apply that pattern in its output, thus embedding and perpetuating the discrimination it noticed. In other words, just as children are much better at imitating their parents than following their parents’ instructions, AI is much better at noticing and applying discriminatory patterns than complying with efforts to eliminate discriminatory outcomes.
For example, ChatGPT is “developed using (1) information that is publicly available on the internet, (2) information that we license from third parties, and (3) information that our users or human trainers provide.”[14] Further, ChatGPT’s creator explains, “We apply filters and remove information that we do not want our models to learn from or output, such as hate speech, adult content, sites that primarily aggregate personal information, and spam.”[15]
These filters and information removal processes have not kept ChatGPT free from bias. When asked to write a program to check whether a child’s life should be saved based on race and gender, ChatGPT produced code stating that African American boys should be allowed to die.[16] Prompted to write song lyrics about the suitability of male versus female scientists, ChatGPT wrote: “If you see a woman in a lab coat, She’s probably just there to clean the floor / But if you see a man in a lab coat, Then he’s probably got the knowledge and skills you’re looking for.”[17] And when Bloomberg asked ChatGPT to rate otherwise equal resumes for various job positions, it ranked resumes with names statistically associated with Asian women as the top candidate 17.2% of the time, but resumes with names associated with Black men only 7.6% of the time.[18]
These examples demonstrate that AI can use its pattern-detecting powers to intuit legally prohibited biases from seemingly unrelated data.[19] Or, as a recent AI employment discrimination complaint alleged, AI “can learn to use [prohibited] demographic features by combining other inputs that are correlated with race (or another protected classification), like zip code, college attended, and membership in certain groups.”[20] Moreover, plaintiffs alleged, the AI could learn from the intentional prejudices within its data inputs “or a lack of diversity in the data set.”[21] Once AI takes this step, it can feed those biases into your business’s decisions without your knowledge or intent.[22]
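The mechanism the complaint describes, sometimes called proxy discrimination, is easy to reproduce in miniature. In this hypothetical Python sketch (the data, zip codes, and rates are all invented), the protected attribute is deliberately withheld from the model, yet a correlated input, the applicant’s zip code, lets the model reconstruct the historical disparity anyway.

```python
import random
random.seed(1)

# Hypothetical history, invented for illustration: zip code correlates strongly with
# race, and past human hiring decisions favored white applicants.
def make_history(n=10_000):
    rows = []
    for _ in range(n):
        race = random.choice(["white", "Black"])
        if race == "white":
            zip_code = "20810" if random.random() < 0.9 else "20710"
        else:
            zip_code = "20710" if random.random() < 0.9 else "20810"
        hired = random.random() < (0.60 if race == "white" else 0.20)  # the biased past
        rows.append({"race": race, "zip": zip_code, "hired": hired})
    return rows

history = make_history()

# "Train" on zip code only -- race is never shown to the model.
def hire_rate(rows, zip_code):
    outcomes = [r["hired"] for r in rows if r["zip"] == zip_code]
    return sum(outcomes) / len(outcomes)

zip_scores = {z: hire_rate(history, z) for z in ("20810", "20710")}

def model_recommends(applicant):
    return zip_scores[applicant["zip"]] > 0.5

# The recommendations still split along racial lines: zip code acted as a proxy for race.
for race in ("white", "Black"):
    group = [r for r in history if r["race"] == race]
    rate = sum(model_recommends(r) for r in group) / len(group)
    print(f"{race}: recommended {rate:.0%} of the time")
```

Dropping race from the inputs does not drop it from the outcome; as long as some combination of permitted inputs correlates with the protected trait, the historical pattern can survive.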
The idea that your business could be liable for discrimination it never intended may trouble you. Most anti-discrimination laws require evidence of intent, and if no human working for your business has the requisite discriminatory intent, the path to liability is not necessarily straightforward. Practically speaking, however, few government regulators or juries are likely to accept you waving toward your IT department while saying, “The computer made me do it.” After all, if you know your dog bites people and you let it off leash in a crowded playground, it is hard to say your act is “unintentional” in any meaningful sense of the word. And if your AI consistently disfavors a protected class, at some point your continued use of that AI probably transitions from “accidental” to “reckless” to “intentional.”
Several federal agencies have taken this view. In April 2023, the Consumer Financial Protection Bureau, the Department of Justice Civil Rights Division, the EEOC, and the FTC issued a “Joint Statement on Enforcement Efforts Against Discrimination and Bias in Automated Systems.” The Joint Statement explained, in bold type: “Automated Systems May Contribute to Unlawful Discrimination and Otherwise Violate Federal Law.” In particular, the Statement targeted unrepresentative or imbalanced datasets, opaque automated systems, and flawed design. The agencies “pledge[d] to vigorously use [their] collective authorities to protect individuals’ rights regardless of whether legal violations occur through traditional means or advanced technologies.”
To protect against lawsuits or enforcement actions based on AI bias, your business should practice AI governance – that is, the ability to direct, manage, and monitor an organization’s AI activities.[23] AI governance is too complex to explain completely here, but put simply, your business should no more uncritically accept AI output than you would sign a contract without reading it. Your business should adopt certain preventative AI governance measures, including but not limited to:
- Monitoring. AI is not “set it and forget it.” Businesses should attempt to control their risk from AI bias by monitoring for potential bias issues and alerting the proper personnel when the monitoring reveals a potential problem. Through appropriate monitoring, you can learn about a potential liability before a lawsuit or a government enforcement action tells you about it; a simple example of one such check appears after this list.[24]
- Written Policies and Procedures. Businesses should have robust written policies and procedures for all important aspects of their business, and AI is no exception. Absent effective written policies, managing AI bias can easily become subjective and inconsistent across business subunits, which can exacerbate risks over time rather than minimize them. Among other things, such policies should include an audit and review process, outline requirements for change management, and detail any plans related to incident response for AI systems.[25]
- Accountability. Your business should have a person or team in place who is responsible for protecting against AI bias, or else your AI governance efforts will probably go to waste. Ideally, the accountable person or team should have enough authority to command compliance with proper AI protocols implicitly – or explicitly if need be. And accountability mandates should also be embedded within and across the teams involved in the use of AI systems.
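To give a flavor of what the monitoring item above can look like in practice, the sketch below applies the familiar “four-fifths” rule of thumb to a hypothetical AI screening tool’s selection rates. The counts, group labels, and threshold are illustrative assumptions, not legal advice; your counsel and data team would define the right metrics, thresholds, and cadence for your business.

```python
# Illustrative selection counts from a hypothetical AI screening tool (invented numbers).
results = {
    # group label: (number selected, total screened)
    "group_1": (120, 400),
    "group_2": (45, 300),
}

FOUR_FIFTHS = 0.80  # common rule-of-thumb threshold; counsel may set a different one

def selection_rates(counts):
    """Selection rate per group: selected / total screened."""
    return {group: selected / total for group, (selected, total) in counts.items()}

def flag_disparate_impact(counts, threshold=FOUR_FIFTHS):
    """Flag any group whose selection rate falls below `threshold` times the highest rate."""
    rates = selection_rates(counts)
    highest = max(rates.values())
    return {group: rate / highest for group, rate in rates.items() if rate / highest < threshold}

print("selection rates:", {g: f"{r:.0%}" for g, r in selection_rates(results).items()})
print("flagged for review:", flag_disparate_impact(results))
# group_1 selects 30%, group_2 selects 15%; 15/30 = 0.50 < 0.80, so group_2 is flagged.
```

A check like this would typically run on real decision logs on a regular schedule, with any flags routed to the accountable person or team described above.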
Expecting our flawed society to build a perfect machine was never realistic. And minimizing AI biases will be a painstaking and complicated process. However, if you wish to protect your business from AI bias lawsuits and enforcement actions, start that process now.
Sean Griffin is a partner at Longman & Van Grack. He is one of the world’s first experts certified as an Artificial Intelligence Governance Professional by the International Association of Privacy Professionals, and he is a Certified Information Privacy Professional within the United States. Additionally, Sean is a member of the International Association of Defense Counsel (“IADC”), where he Co-Chairs the AI Committee and serves as a Vice Chair of the Cyber Security, Data Privacy and Technology Committee. Sean is also a member of the Association of Defense Trial Attorneys (“ADTA”), where he Chairs the Artificial Intelligence Steering Committee. Sean litigates business disputes in Maryland, Virginia, and Washington, DC. You can reach Sean directly via email at sean@lvglawfirm.com or via phone at 202-836-7828.
[1] Colorado Department of Regulatory Agencies, Division of Insurance, 3 CCR 702-10, Regulation 10-1-1, § 5(A).
[2] “Insurers’ AI Use for Coverage Decisions Targeted by Blue States,” Bloomberg Law, November 30, 2023, https://news.bloomberglaw.com/insurance/insurers-ai-use-for-coverage-decisions-targeted-by-blue-states
[3] Id.
[4] EEOC v. iTutorGroup, Inc., Case No. 1:22-cv-2565-PKC-PK (E.D.N.Y.), Joint Notice of Settlement and Request for Approval and Execution of Consent Decree, filed August 9, 2023 (Dkt. #24).
[5] “EEOC Sues iTutorGroup for Age Discrimination,” Press Release, May 5, 2022, https://www.eeoc.gov/newsroom/eeoc-sues-itutorgroup-age-discrimination
[6] FTC v. Rite Aid Corp., Case No. 2:23-cv-5023, Complaint (Dkt #1), filed December 19, 2023 in the United States District Court for the Eastern District of Pennsylvania.
[7] “Rite Aid Facial Recognition Lawsuit Shows AI Risks Of Shopping While Black,” Forbes, December 21, 2023, https://www.forbes.com/sites/shaunharper/2023/12/21/shopping-while-black-in-the-era-of-ai-lessons-from-a-federal-case-against-rite-aid/?sh=8c437e79ddc5
[8] “Eight Months Pregnant and Arrested After False Facial Recognition Match,” New York Times, August 6, 2023, accessed at https://www.nytimes.com/2023/08/06/business/facial-recognition-false-arrest.html
[9] “What is Pattern Recognition? A Gentle Introduction (2024),” by Gaudenz Boesch, https://viso.ai/deep-learning/pattern-recognition/; “Towards a Standard for Identifying and Managing Bias in Artificial Intelligence,” National Institute of Standards and Technology Special Publication 1270 (“NIST SP 1270”), p. 5, available at https://doi.org/10.6028/NIST.SP.1270.
[10] NIST SP 1270 p.5 (citations omitted).
[11] Id.
[12] Id.
[13] Id.
[14] “How ChatGPT and our language models are developed,” https://help.openai.com/en/articles/7842364-how-chatgpt-and-our-language-models-are-developed
[15] Id.
[16] https://twitter.com/spiantado/status/1599462375887114240
[17] “OpenAI Chatbot Spits Out Biased Musing, Despite Guardrails,” Bloomberg Equality Newsletter, December 8, 2022, https://www.bloomberg.com/news/newsletters/2022-12-08/chatgpt-open-ai-s-chatbot-is-spitting-out-biased-sexist-results?embedded-checkout=true
[18] “OpenAI’s GPT is a Recruiter’s Dream Tool. Tests Show There’s Racial Bias,” Bloomberg, March 7, 2024, https://www.bloomberg.com/graphics/2024-openai-gpt-hiring-racial-discrimination/?utm_source=website&utm_medium=share&utm_campaign=copy
[19] “Proxy Discrimination in the Age of Artificial Intelligence and Big Data,” Anya E.R. Prince & Daniel Schwarcz, 105 Iowa L. Rev. 1257 (2020) (“Prince & Schwarcz”), accessible at https://ilr.law.uiowa.edu/print/volume-105-issue-3/proxy-discrimination-in-the-age-of-artificial-intelligence-and-big-data
[20] Mobley v. Workday, Inc., Case No. 3:23-cv-00770-RFL (N.D. Cal.), Complaint (Dkt. #47), filed February 20, 2024, ¶ 34.
[21] Id. ¶ 35.
[22] Prince & Schwarcz
[23] “Shedding light on AI bias with real world examples,” https://www.ibm.com/blog/shedding-light-on-ai-bias-with-real-world-examples/
[24] Lobel, Orly, The Equality Machine (2022); NIST SP 1270 at 42.
[25] NIST SP 1270 at 43.