AI in Hiring: Legal Risks Employers Must Understand
A practical overview of the legal landscape surrounding AI-powered hiring tools and what employers need to do to stay compliant.
AEA Editorial Team
The Rise of AI in Recruitment
Employers across every industry are adopting artificial intelligence tools to screen resumes, conduct video interview assessments, administer skills tests, and rank candidates. These tools promise efficiency gains, but they also carry significant legal risk. Federal, state, and local regulators are increasingly scrutinizing how automated decision-making tools affect protected classes of workers.
If your organization uses any automated tool in the hiring process, you need to understand the current legal framework and take concrete steps to reduce your exposure.
Federal Law Still Applies
The Equal Employment Opportunity Commission (EEOC) has made clear that Title VII of the Civil Rights Act and the Americans with Disabilities Act apply to AI-driven hiring decisions just as they do to human ones. In 2023, the EEOC issued updated technical assistance emphasizing two key points:
- Disparate impact liability: If an AI screening tool disproportionately filters out candidates of a particular race, sex, age, or other protected characteristic, the employer can be liable even if the tool was designed without discriminatory intent. The employer bears the burden of demonstrating that the tool is job-related and consistent with business necessity.
- Disability discrimination: AI tools that assess facial expressions, speech patterns, or other behavioral cues during video interviews may screen out individuals with disabilities. Under the ADA, employers must provide reasonable accommodations, which may mean offering alternative assessment methods.
The key takeaway is that the employer, not the vendor, is ultimately responsible for the lawfulness of hiring decisions made with AI assistance.
State and Local Laws Are Moving Fast
Several jurisdictions have enacted or proposed laws specifically targeting AI in employment:
- New York City Local Law 144 requires employers using automated employment decision tools to obtain annual bias audits from independent auditors, publish a summary of the audit results, and notify candidates that such tools are being used.
- Illinois's Artificial Intelligence Video Interview Act requires employers to disclose the use of AI, explain how it works, and obtain the candidate's consent before using AI to analyze video interviews.
- Colorado passed legislation requiring developers and deployers of high-risk AI systems, including employment tools, to conduct impact assessments and mitigate algorithmic discrimination.
Employers operating in multiple states should expect this patchwork to expand and should track legislative developments in every jurisdiction where they hire.
Practical Steps for Employers
1. Audit your current tools. Catalog every technology used in your hiring pipeline, from resume parsers to chatbot screeners to assessment platforms. Determine whether each tool uses AI or automated decision-making.
2. Request vendor documentation. Ask your vendors for validation studies, adverse impact analyses, and information about the data sets used to train their models. A reputable vendor should be able to provide this. If they cannot, that is a red flag.
3. Conduct adverse impact testing. Run your own analysis of selection rates by race, sex, age, and other protected categories. The four-fifths rule remains a common benchmark: if the selection rate for a protected group is less than 80% of the rate for the most-selected group, further investigation is warranted.
4. Provide candidate notice. Even where not legally required, disclosing the use of AI tools builds trust and reduces litigation risk. Include information about what data is collected and how it is used.
5. Offer alternatives. Allow candidates to request a human review of their application or an alternative assessment method, particularly where disability accommodations may be needed.
6. Document everything. Maintain records of your validation efforts, vendor communications, adverse impact analyses, and accommodation requests. In the event of a challenge, documentation of good-faith compliance efforts matters.
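The four-fifths calculation in step 3 is simple enough to sketch directly. The snippet below is a minimal illustration, not a compliance tool: the group names and applicant counts are made-up sample data, and a real analysis should also apply statistical significance testing rather than the 80% ratio alone.

```python
def adverse_impact(counts):
    """Apply the four-fifths rule to hiring data.

    counts: dict mapping group name -> (selected, total applicants).
    Returns dict mapping group name -> (selection_rate, impact_ratio, flagged),
    where impact_ratio compares each group to the most-selected group and
    flagged is True when that ratio falls below 0.8.
    """
    rates = {g: sel / tot for g, (sel, tot) in counts.items()}
    top_rate = max(rates.values())  # rate of the most-selected group
    return {
        g: (rate, rate / top_rate, rate / top_rate < 0.8)
        for g, rate in rates.items()
    }

# Hypothetical sample data: 48% vs. 30% selection rates.
sample = {"group_a": (48, 100), "group_b": (30, 100)}
for group, (rate, ratio, flagged) in adverse_impact(sample).items():
    print(f"{group}: rate={rate:.0%}, ratio={ratio:.2f}, flagged={flagged}")
```

Here group_b's impact ratio is 0.30 / 0.48 ≈ 0.63, below the 0.8 benchmark, so it would be flagged for further investigation.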
Looking Ahead
Federal legislation on AI in hiring remains in discussion but has not yet been enacted. However, the direction of regulatory activity is clear: greater transparency, mandatory testing, and employer accountability. Employers who get ahead of these requirements now will be better positioned when broader mandates arrive.
The bottom line is straightforward. AI tools can add genuine value to your hiring process, but they do not relieve you of your obligations under employment law. Treat every automated decision the same way you would treat a human one: with scrutiny, documentation, and a commitment to fairness.