In the ever-evolving landscape of finding the perfect employee candidate, Artificial Intelligence ("AI") has taken the world by storm. AI may feel like a superhero in the hiring world, swooping in to save the day by sifting through resumes faster than you can say "qualified candidate," but it also comes with legal challenges that employers must tackle. AI is rapidly transforming industries, and organizations are increasingly integrating AI-driven tools into their operations. According to McKinsey & Company, as of 2024, around 78% of organizations had adopted AI technologies,[1] and there is no shortage of options available: a quick Google search for "AI" turns up lists of the 150 top AI companies ready to assist with such a project.
As helpful as AI can be, it also presents considerable risks. Organizations implementing AI should understand the legal and ethical challenges it poses so they can avoid unwittingly stepping on legal landmines. Chief among those risks, when AI is used to assess prospective employees, is how the technology may affect the decision-making process in hiring.
Employers looking to streamline candidate vetting have deployed AI tools in an effort to find higher-quality candidates and reduce the overall time it takes to hire. Despite the perceived advantages of using AI in the hiring process, there are associated risks. Notable policy changes regarding AI have already occurred during President Trump's current Administration, including an Executive Order reducing government oversight of AI. As a result, the Equal Employment Opportunity Commission ("EEOC")[2] and the Department of Labor ("DOL")[3] have rescinded their prior guidance on AI in the workplace. Key highlights of the EEOC's prior guidance included that: (a) AI could be found to have a disparate impact in violation of Title VII; (b) employers are responsible for any adverse impact caused by AI tools purchased from or administered by third-party AI vendors; and (c) employers should assess, early and often, the impact of selection tools that use AI to make or inform employment decisions. The DOL's guidance was intended to serve as a roadmap for employers to "establish governance structures, be accountable to leadership, to produce guidance and provide coordination to ensure consistency across organizational components when adopting and implementing worker-impacting AI systems." In addition, the DOL advised employers to ensure that the AI systems they use comply with anti-discrimination laws, to conduct routine monitoring for discriminatory effects, and to give workers advance notice and appropriate disclosure if they intend to use AI.
Even with these changes and the rescinded guidance, the risks of using AI in the hiring space are not eliminated. The Trump Administration's Executive Order does not change or eliminate laws such as Title VII of the Civil Rights Act[4] and the Americans with Disabilities Act ("ADA")[5], which can be violated through the use of AI. State laws are also implicated, such as the Ohio Civil Rights Act[6], which prohibits employment discrimination based on race, gender, age, and other protected classes.
When using AI in the hiring process, employers should be on the lookout for a few sources of liability exposure, chief among them disparate impact discrimination and disability discrimination. Although AI tools are technology, they are still capable of producing discriminatory results, even unintentional ones. For instance, depending on the query a human user presents to the tool, the AI may disproportionately exclude or disadvantage protected groups in violation of Title VII, or rule out candidates based on disability-related characteristics in violation of the ADA. Even unintentionally, using AI in the hiring process can create a disparate impact for which the employer can be held liable. For example, companies have tried to implement machine-learning tools that evaluate candidates' resumes and surface the best ones to hire. In one widely reported case, an AI system trained on past hiring data inadvertently "learned" to favor male candidates, penalizing resumes containing words such as "women's" or references to women's sports.[7] This example, illustrated in the sketch below, highlights how AI can reinforce existing biases if not carefully monitored.
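To make the mechanism concrete, here is a deliberately simplified Python sketch, using entirely fabricated resumes and outcomes, of how a screening model trained on skewed historical hiring data can encode bias against a gendered term. It illustrates the general failure mode only; it is not a recreation of any vendor's actual system.

```python
# Toy illustration: a model trained on biased historical hiring outcomes
# learns to penalize a gendered token. All data below is fabricated.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical past resumes and whether the candidate was hired.
# The historical outcomes skew against resumes mentioning women's activities.
resumes = [
    "captain of chess club, python developer",          # hired
    "led robotics team, java developer",                # hired
    "captain of women's chess club, python developer",  # not hired
    "led women's robotics team, java developer",        # not hired
]
hired = [1, 1, 0, 0]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# The learned weight for the token "women" comes out negative: the model
# has absorbed the historical bias and turned it into a scoring rule.
idx = vectorizer.vocabulary_["women"]
print(f"weight for 'women': {model.coef_[0][idx]:.2f}")  # prints a negative value
```

In practice the bias is rarely this obvious; proxy features such as zip codes, club names, or employment gaps can carry the same signal, which is why the audits discussed below matter.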
Despite these executive orders and reduced federal oversight, state and local governments continue to implement AI regulations, including recent activity in Colorado, Illinois, and New York City. Key requirements in these regulations include providing advance notice that AI tools are being used, exercising reasonable care, and conducting bias audits.[8] These state and local efforts establish best practices that employers nationwide should consider adopting.
Employers that plan to implement AI in the screening and hiring process should monitor their systems regularly, including by conducting bias audits and fairness testing to ensure the AI tool does not disproportionately impact any protected group (a minimal sketch of one such audit check appears below). Employers should inform candidates that AI tools are used in the hiring process and give candidates the opportunity to request human review of AI-based decisions. Using AI does not mean eliminating human oversight of the hiring process; while AI can be a tremendous tool for sorting employee candidates, it should not be calling all the shots. Employers should ensure that anyone using these AI tools is trained on the product and stays up-to-date with the regulations. Employers should also review the vendor agreements for the AI tools they use to understand how the tools function and to ensure compliance with federal and state laws and regulations. Lastly, employers should stay informed about legal updates and developments to keep up with the ever-evolving landscape of AI.
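As one example of what a basic bias audit can look like, the following Python sketch applies the "four-fifths rule" from the EEOC's Uniform Guidelines on Employee Selection Procedures (29 C.F.R. § 1607.4(D)), under which a selection rate for any group that is less than 80% of the highest group's rate is commonly treated as evidence of adverse impact. The group labels and counts are hypothetical, and a real audit, such as those contemplated by the New York City ordinance cited in note 8, involves considerably more rigor.

```python
# Minimal sketch of a four-fifths-rule adverse impact check.
# Group names and counts below are hypothetical.
def adverse_impact_ratio(selected: dict[str, int], applicants: dict[str, int]) -> dict[str, float]:
    """Return each group's selection rate divided by the highest group's rate.

    A ratio below 0.80 for any group is commonly treated as evidence of
    adverse impact warranting closer review.
    """
    rates = {g: selected[g] / applicants[g] for g in applicants}
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

# Hypothetical screening results from an AI resume tool.
applicants = {"group_a": 200, "group_b": 180}
advanced = {"group_a": 100, "group_b": 63}

for group, ratio in adverse_impact_ratio(advanced, applicants).items():
    flag = "REVIEW" if ratio < 0.80 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} [{flag}]")
# group_a: 100/200 = 0.50 selection rate; group_b: 63/180 = 0.35
# group_b's ratio is 0.35/0.50 = 0.70, below 0.80, so it is flagged
```

A check like this should be run for each protected characteristic the tool could affect, and a flagged result should prompt human review of the tool's criteria rather than automatic acceptance of its output.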
While AI certainly can offer substantial advantages in hiring, it also introduces a new twist on well-worn labor and employment law complexities that employers must navigate carefully. By implementing best practices and maintaining compliance with existing laws, businesses can leverage AI effectively while minimizing legal risks. Employers should prioritize transparency, regular audits, and human oversight to ensure AI enhances, rather than undermines, fair hiring practices.
_________________
[1] Alex Singla et al., The state of AI: How organizations are rewiring to capture value, QuantumBlack, AI by McKinsey (last visited April 23, 2025).
[2] The guidance was titled "Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964" but has since been removed from the EEOC website.
[3] The guidance was titled "Department of Labor releases AI Best Practices roadmap for developers, employers, building on AI principles for worker well-being." At the top of the U.S. Department of Labor's website is a note stating that "As of 01/20/2025, information in some news releases may be out of date or not reflect current policies."
[4] Title VII of the Civil Rights Act prohibits employment discrimination based on race, color, religion, sex, and national origin.
[5] The Americans with Disabilities Act (ADA) protects individuals with disabilities from discrimination in employment, public services, and other areas of daily life, ensuring equal opportunities and accessibility.
[6] R.C. 4112. The Ohio Civil Rights Act is enforced by the Ohio Civil Rights Commission (OCRC) and provides legal remedies for individuals who experience unlawful discrimination.
[7] Roberto Iriondo, Amazon Scraps Secret AI Recruiting Engine that Showed Biases Against Women, Carnegie Mellon University (last visited April 23, 2025).
[8] S.B. 205, 74th Gen. Assemb., Reg. Sess. (Colo. 2024); H.B. 2557, Artificial Intelligence Video Interview Act, 101st Gen. Assemb. (Ill. 2019); N.Y.C. Admin. Code § 20-871.