Artificial Intelligence in the Preemployment Process

May 25, 2024 | By Joseph M. Milano, Esq.


NACE Journal / Spring 2024

With the rise in artificial intelligence (AI), predictive analytics, and machine learning in recent years, employers now have the ability to integrate these algorithmic decision-making products into their preexisting hiring and recruitment processes. This includes resume screenings, candidate matching, interview scheduling, probability analysis, performance evaluations, and layoff decisions.1

It has been estimated that 55% of companies are increasing investments in these types of AI recruiting measures.2 The general idea behind the use of AI in the preemployment process is to increase objectivity, improve the likelihood that the right candidate is selected, and decrease the recruitment timeline.3 Objectively, these are all good reasons to integrate AI into an employer’s preemployment selection process, but this integration also comes with inherent risks. The most prominent risk resulting from the integration of AI is unintentional discrimination.4

Inherent Legal Risks With AI

One might think that an emotionless and unbiased “robot” would be able to achieve the above-listed goals without issue. However, AI is only as good as the humans who program it and the data it is trained on. If not trained properly, AI can inadvertently create or even amplify bias in the recruitment process.5 For example, if the AI is trained on biased or limited data, it can exclude qualified candidates from certain demographics and lead to discrimination based on a candidate’s age, religion, sex, race, national origin, or disability—all protected classes.6

The U.S. Equal Employment Opportunity Commission (EEOC) is responsible for enforcing and issuing guidance on the federal equal employment opportunity (EEO) laws that prohibit discrimination based on these protected classes. As applied to AI, the EEOC has recently issued guidance concerning the risks mentioned above in the preemployment selection process and how those risks may implicate both Title VII of the Civil Rights Act of 1964 (Title VII) and the Americans With Disabilities Act (ADA) under federal law.7, 8

AI Applied to Title VII

Title VII applies to employment practices such as recruitment, monitoring, transfer, and the evaluation of employees. If an employer institutes neutral tests or selection procedures that are not “job related for the position in question and consistent with business necessity” and have the effect of disproportionately excluding persons based on their protected class, this gives rise to legal theories known as “disparate impact” or “adverse impact” under Title VII.

To defend such a procedure, an employer must demonstrate that the AI-integrated selection procedure it uses evaluates an individual’s skills and that those skills are directly correlated to the job in question.9 If the employer does this, it must also demonstrate that no less discriminatory alternative is available.10

In 1978, the EEOC adopted the “Uniform Guidelines on Employee Selection Procedures” under Title VII to help employers navigate this process. The guidelines explain how employers can determine whether their tests and selection procedures are lawful for purposes of Title VII and the disparate impact analysis. Under the guidelines, a selection procedure is any “measure, combination of measures, or procedure” that is used as a basis for an employment decision.

Based on this definition, the integration of AI into an employer’s preemployment decision-making process constitutes a “selection procedure” because the AI informs the employer’s decision on whether to interview or hire a job candidate. Accordingly, the integration of AI is subject to the disparate impact analysis, and employers are obligated to screen their use of AI for the likelihood that it will produce an adverse impact. To do this, employers may look to the guidelines, which explain that a selection procedure can be screened for an adverse impact by checking whether its use causes a selection rate for individuals in a protected class that is “substantially” less than the selection rate of those who are not.11

The guidelines use the “four-fifths rule” to determine whether the selection rate for one group is substantially different from the selection rate of another group. Under this rule, one selection rate is substantially different from another if the ratio of the two rates is less than 80%. Despite the easy applicability of the four-fifths rule, courts have agreed that its use is not always appropriate, particularly where a test of statistical significance is better suited to cases presenting small sample sizes.
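The four-fifths calculation itself is simple arithmetic. As an illustration only, using invented applicant counts (the numbers below are hypothetical and not drawn from the guidelines or any case), the comparison works as follows:

```python
# Hypothetical illustration of the four-fifths rule described above.
# All applicant counts are invented for the example; not legal advice.

def selection_rate(selected: int, applicants: int) -> float:
    """Proportion of a group's applicants who were selected."""
    return selected / applicants

# Suppose an AI screening tool advances 48 of 80 applicants from one group
# and 12 of 40 applicants from another group.
rate_a = selection_rate(48, 80)   # 0.60
rate_b = selection_rate(12, 40)   # 0.30

# Compare the lower selection rate to the higher one.
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)  # 0.50

# Under the four-fifths rule, a ratio below 80% suggests a substantially
# different selection rate and therefore potential adverse impact.
adverse_impact_flag = ratio < 0.80
print(f"ratio = {ratio:.0%}, potential adverse impact: {adverse_impact_flag}")
```

Here the lower rate (30%) is only half of the higher rate (60%), well below the 80% threshold, so the tool would warrant further scrutiny. As the article notes, a ratio above 80% is not a safe harbor: with small applicant pools, courts may instead expect a test of statistical significance.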

Accordingly, if there is a substantial difference or other statistical deviation as a result of the integration of AI, this would constitute a violation of Title VII unless the employer could establish that the use of the AI is job-related and consistent with business necessity pursuant to Title VII. It is also important to note that this responsibility applies even if the AI selection procedure used by the employer was created and developed by a third party. For this reason, employers should ask these third-party developers what steps they have taken to mitigate the risk of bias and unintentional discrimination by the AI tool.

AI Applied to the ADA

Title I of the ADA prohibits employers with 15 or more employees from discriminating on the basis of an individual’s disability. A disability is generally defined as a physical or mental impairment that substantially limits one or more major life activities. Major life activities include, but are not limited to, seeing, communicating, speaking, and concentrating, as well as the operation of major bodily functions.

The most common ways that an employer’s use of AI in the preemployment process could violate the ADA include 1) not providing a reasonable accommodation to an applicant who is otherwise entitled to one; 2) screening out an individual with a disability who would otherwise be qualified to perform the essential functions of the job with a reasonable accommodation; and/or 3) making disability-related inquiries and performing medical examinations.

1. Reasonable accommodation under the ADA: A reasonable accommodation under the ADA is a change to the employer’s usual practice that aids a job candidate with a disability in applying for a job. If a reasonable accommodation is requested by a candidate during the preemployment process, employers should act diligently to meet these requests.

However, employers are entitled to ask the candidate requesting a reasonable accommodation for supporting medical documentation if the need for the accommodation is not obvious. If the supporting documentation shows that a candidate’s disability would cause them to be disadvantaged by the AI integrated into an employer’s preemployment process, the employer would then be required to provide the candidate with an alternative testing or assessment tool as a reasonable accommodation. A reasonable accommodation is required unless providing it would involve significant difficulty or expense for the employer (also called “undue hardship”).

To gauge whether a reasonable accommodation will be needed in the AI-integrated preemployment process, employers should explain to applicants how they use AI in their selection process. Employers should then inform applicants that reasonable accommodations are available to them if needed and explain to applicants what the process is for requesting such accommodations. Knowledge of these processes will allow candidates to request a reasonable accommodation if they believe the AI will not accurately reflect their capabilities. For example, an employer may inform a candidate that their AI recognizes speech patterns as part of its applicant screening. The candidate may then tell the employer that they have a speech impediment and the AI would not be an accurate measurement of their qualifications or ability to perform the job. Thus, a reasonable accommodation would be required. If an accommodation is requested, the ADA requires employers to keep all medical information obtained in connection with a request for reasonable accommodation confidential and to store all such information separately from the applicant’s personnel file.

2. Screening out disabled individuals: A screen out occurs when candidates who are otherwise eligible for a reasonable accommodation do not receive one and are disadvantaged by the AI-integrated preemployment process. Under the ADA, a screen out is unlawful if the individual who is screened out would be able to perform the essential functions of the job with a reasonable accommodation, where one is legally required.

As addressed earlier, some AI vendors may claim that their software is bias-free, but this guarantee is often concerned with steps taken to prevent adverse impact or disparate impact under Title VII. Unlike protected characteristics under Title VII, each disability is unique. As a result, an individual with a specific disability may be screened out even though another individual with a different disability is not. In this way, AI-integrated preemployment assessment tools and tests are more susceptible to unintended bias because they are designed to screen applicants under “normal” or unaccommodated working conditions. To avoid this unintended bias, employers should ask their third-party vendor if the AI used is accessible to as many people with disabilities as possible, if the AI can be offered in alternative forms, and if the AI has been tested and the data show that it does not have an adverse effect on individuals with disabilities.

3. Improper inquiries and medical examinations: An employer’s use of AI in the preemployment process might violate the ADA if the AI tool or assessment poses disability-related inquiries or seeks information that qualifies as a medical examination before giving the candidate a conditional offer of employment.

Disability-related inquiries occur if the AI asks job applicants questions that are likely to elicit information about a disability or directly asks whether a candidate is an individual with a disability. A medical examination occurs if the AI seeks information about a candidate’s general physical or mental impairments, as well as their overall health. Notably, such inquiries and examinations may violate the ADA even if the candidate does not actually have a disability.

Care Required

Based upon the foregoing, employers must carefully establish how they are going to implement and integrate AI into the preemployment process. Employers cannot wantonly implement AI assessment tools without understanding how these tools were created, what data they were trained on, and the results they have produced. If an employer’s end goal is to use AI to screen candidates, the employer must ensure that the AI tool used complies with both Title VII and the ADA. Ultimately, if a preemployment test has the potential to discriminate against individuals in a protected class, it should be eliminated from the employer’s preemployment process.

Endnotes

1 Kelly, J. (2023, July 7). How Companies Are Hiring and Reportedly Firing With AI. Forbes Magazine. Retrieved from www.forbes.com/sites/jackkelly/2023/11/04/how-companies-are-hiring-and-firing-with-ai.

2 How Artificial Intelligence (AI) in HR Is Changing Hiring (2023, November 15). University of Southern California, USC Annenberg School for Communication and Journalism. Retrieved from https://communicationmgmt.usc.edu/blog/ai-in-hr-how-artificial-intelligence-is-changing-hiring.

3 Eliacik, E. (2022, August 12). AI’s Impact on Recruitment. Dataconomy. Retrieved from https://dataconomy.com/2022/08/12/how-is-artificial-intelligence-changing-the-recruiting-process/.

4 Using AI for Recruiting and Hiring—What Are the Risks and Rewards?, (2023, June 6). USI. Retrieved from www.usi.com/executive-insights/executive-series-articles/supplemental/property-casualty/q2-2023/using-ai-for-recruiting-and-hiring-what-are-the-risks-and-rewards/.

5 Ibid.

6 Ibid.

7 Select Issues: Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964, (2023, May 18). U.S. Equal Employment Opportunity Commission. Retrieved from www.eeoc.gov/select-issues-assessing-adverse-impact-software-algorithms-and-artificial-intelligence-used.

8 The Americans with Disabilities Act and the Use of Software, Algorithms, and Artificial Intelligence to Assess Job Applicants and Employees, (2022, May 12). U.S. Equal Employment Opportunity Commission. Retrieved from www.eeoc.gov/laws/guidance/americans-disabilities-act-and-use-software-algorithms-and-artificial-intelligence.

9 EEOC, 2023.

10 Ibid.

11 “Selection rate” refers to the proportion of applicants who are hired, promoted, or otherwise selected and is calculated by dividing the number selected from the group by the total number of candidates in that group.

Joseph M. Milano, Esq., is an associate attorney with Hoffman & Hlavac.