The issue of bias in artificial intelligence is coming for your business, if it has not arrived already. AI bias has spawned lawsuits across a wide range of legal fields.
One way AI can go wrong is through bias. As the U.S. National Institute of Standards and Technology (“NIST”) has explained:
In other words, like every computer system, AI follows the maxim “garbage in, garbage out.” NIST identifies three categories of AI bias inputs:
- Systemic biases, which “result from procedures and practices of particular institutions that operate in ways which result in certain social groups being advantaged or favored and others being disadvantaged or devalued.” These do not necessarily arise from any conscious discriminatory intent, but rather from “the majority following existing rules or norms.” This is also called “institutional” or “historical” bias.[11]
- Statistical and computational biases, which arise from errors that result when the sample provided to the AI does not represent the population as a whole. The AI analyzing the data may not be able to extrapolate beyond the data it is given, leading to errors. As NIST explains, “These biases arise from systematic as opposed to random error and can occur in the absence of prejudice, partiality, or discriminatory intent.”[12]
- Human biases, which “are often implicit and tend to relate to how an individual or group perceives information . . . to make a decision or fill in missing or unknown information. These biases are omnipresent in the institutional, group, and individual decision making processes across the AI lifecycle, and in the use of AI applications once deployed.”[13]
Essentially, AI can recognize discriminatory patterns, even when humans may not. Having recognized a discriminatory pattern, AI will apply that pattern in its output, thus embedding and perpetuating the discrimination it noticed. In other words, just as children are much better at imitating their parents than at following their parents’ instructions, AI is much better at noticing and applying discriminatory patterns than at complying with efforts to eliminate discriminatory outcomes.
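NIST’s “statistical and computational” category can be made concrete with a toy sketch. The records, group names, and hire rates below are hypothetical, and the “model” is deliberately naive: it learns nothing but historical base rates per group, yet that alone is enough to reproduce whatever disparity its training data contains.

```python
# Hypothetical sketch: a naive "model" that learns only the historical
# selection rate for each group will perpetuate the disparity it was
# trained on -- garbage in, garbage out.
from collections import defaultdict

# Hypothetical historical hiring records: (group, was_hired)
history = ([("A", True)] * 80 + [("A", False)] * 20
           + [("B", True)] * 40 + [("B", False)] * 60)

def learn_rates(records):
    """Learn the per-group hire rate from historical records."""
    counts = defaultdict(lambda: [0, 0])  # group -> [hired, total]
    for group, hired in records:
        counts[group][0] += int(hired)
        counts[group][1] += 1
    return {g: hired / total for g, (hired, total) in counts.items()}

rates = learn_rates(history)
print(rates)  # {'A': 0.8, 'B': 0.4} -- the historical disparity, carried forward
```

No one wrote a discriminatory rule here; the disparity lives entirely in the inputs, which is why NIST stresses that these biases “can occur in the absence of prejudice, partiality, or discriminatory intent.”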
The prospect that your business could be liable for discrimination it never intended may trouble you. Most anti-discrimination laws require evidence of intent, and if no human working for your business has the requisite discriminatory intent, the path to liability is not necessarily straightforward. Practically speaking, however, few government regulators or juries will likely accept you gesturing toward your IT department while saying, “The computer made me do it.” After all, if you know your dog bites people, and you let it off leash in a crowded playground, it is hard to say your act is “unintentional” in any meaningful sense of the word. And if your AI consistently disfavors a protected class, at some point your continued use of that AI probably transitions from “accidental” to “reckless” to “intentional.”
Several federal agencies have taken this view. In April 2023, the Consumer Financial Protection Bureau, the Department of Justice Civil Rights Division, the EEOC, and the FTC issued a “Joint Statement on Enforcement Efforts Against Discrimination and Bias in Automated Systems.” The Joint Statement explained, in bold letters: “Automated Systems May Contribute to Unlawful Discrimination and Otherwise Violate Federal Law.” In particular, the Statement targeted unrepresentative or imbalanced datasets, opaque automated systems, and flawed design. The agencies “pledge[d] to vigorously use [their] collective authorities to protect individuals’ rights regardless of whether legal violations occur through traditional means or advanced technologies.”
- Monitoring. AI is not “set it and forget it.” Businesses should attempt to control their risk from AI bias by monitoring for potential bias issues and alerting the proper personnel when the monitoring reveals a potential problem. Through appropriate monitoring, you can know about a potential liability before a lawsuit or a government enforcement action tells you about it.[24]
- Written Policies and Procedures. Businesses should have robust written policies and procedures for all important aspects of their business, and AI is no exception. Absent effective written policies, managing AI bias can easily become subjective and inconsistent across business subunits, which can exacerbate risks over time rather than minimize them. Among other things, such policies should include an audit and review process, outline requirements for change management, and detail any plans related to incident response for AI systems.[25]
- Accountability. Your business should have a person or team in place who is responsible for protecting against AI bias, or else your AI governance efforts will probably go to waste. Ideally, the accountable person or team should have enough authority to command compliance with proper AI protocols implicitly – or explicitly if need be. And accountability mandates should also be embedded within and across the teams involved in the use of AI systems.
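As one illustration of the monitoring practice above, here is a minimal sketch of an automated bias check. The 80% threshold borrows the EEOC’s “four-fifths” rule of thumb for disparate impact; the group names and rates are hypothetical, and a real program would feed such a check with production outcome data and route any alert to the accountable person or team.

```python
# Minimal monitoring sketch (assumed metric: the EEOC "four-fifths"
# rule of thumb): flag any group whose selection rate falls below
# 80% of the most-favored group's rate.

def disparate_impact_check(selection_rates, threshold=0.8):
    """Return groups whose rate ratio against the top group is below threshold."""
    top = max(selection_rates.values())
    return {g: rate / top for g, rate in selection_rates.items()
            if rate / top < threshold}

# Hypothetical monthly selection rates from an AI screening tool.
flagged = disparate_impact_check({"A": 0.50, "B": 0.45, "C": 0.30})
print(flagged)  # {'C': 0.6} -- below the 0.8 benchmark; escalate for review
```

A check like this is a tripwire, not a legal conclusion: it tells you when outcomes warrant human review under your written policies, before a regulator or plaintiff tells you instead.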
Expecting our flawed society to build a perfect machine was never realistic. And minimizing AI biases will be a painstaking and complicated process. However, if you wish to protect your business from AI bias lawsuits and enforcement actions, start that process now.
Sean Griffin is a partner at Longman & Van Grack. He is one of the world’s first experts certified as an Artificial Intelligence Governance Professional by the International Association of Privacy Professionals, and he is a Certified Information Privacy Professional within the United States. Additionally, Sean is a member of the International Association of Defense Counsel (“IADC”), where he Co-Chairs the AI Committee and serves as a Vice Chair of the Cyber Security, Data Privacy and Technology Committee. Sean is also a member of the Association of Defense Trial Attorneys (“ADTA”), where he Chairs the Artificial Intelligence Steering Committee. Sean litigates business disputes in Maryland, Virginia, and Washington, DC. You can reach Sean directly via email at [email protected] or via phone at 202-836-7828.