Preemployment Testing: The Use of Artificial Intelligence in the Hiring Process

October 1, 2020 | By George C. Hlavac, Esq., and Edward J. Easterly, Esq.

Legal Issues

NACE Journal, November 2020

In March 2020, thousands of companies became “remote employers” due to the COVID-19 pandemic. As a result, employers have turned more frequently to the use of technology to operate their businesses. This includes the use of artificial intelligence (AI) in the recruitment and screening of applicants for employment. While there are potential benefits to using AI in the hiring process, there are also legal ramifications that must be understood prior to implementing any AI hiring system.

Context: Laws Pertaining to Preemployment Testing

To grasp the potential issues with using AI in the hiring process, one must first understand the laws that govern preemployment testing generally, as several of those laws also apply to AI-based screening.

The first law that must be addressed is the Americans with Disabilities Act (ADA). The ADA restricts the ability of employers to require job applicants to undergo medical examinations or answer questions about medical conditions. Different restrictions apply to employers before and after a conditional offer of employment is extended to an applicant.

Courts and the Equal Employment Opportunity Commission (EEOC) have stated a test or procedure is more likely to be considered a medical examination under the ADA if it is administered by a healthcare professional; is interpreted by a healthcare professional; is designed to reveal an impairment to physical or mental health; measures physiological responses when performing a task; is normally given in a medical setting; or is performed using medical equipment. Medical examinations include, but are not limited to, psychological tests attempting to identify a mental disorder or impairment.

Courts and the EEOC have further stated that certain procedures and tests do not constitute medical examinations for purposes of the ADA. For example, physical agility or fitness tests, typing or computer tests, and psychological tests that measure personality traits or habits are not considered medical tests under the ADA. (Note: If a personality test is administered by a medical professional, it may be deemed a medical exam and, therefore, prohibited by the ADA until after an offer of employment has been made. Even tests not administered by a medical professional have come under scrutiny when they could allow an employer to impermissibly determine whether an individual has a mental disorder. As a practical matter, a personality test may be deemed legal as long as it is designed to address job-related traits and abilities; is applied fairly, meaning all applicants for the position receive the same test; is reliable, meaning it consistently measures an individual's traits; and does not screen out a protected class of individuals.)

Additionally, employers are permitted to require applicants to perform cognitive tests to measure intelligence; aptitude tests to measure an applicant’s ability to learn a new skill; and physical ability tests to measure strength, endurance, and muscular movement, again assuming these are related to job function and are required of all applicants for the job.

There are laws beyond the ADA that provide protections to applicants during the hiring process. These include Title VII of the Civil Rights Act of 1964, which prohibits discrimination against applicants on the basis of protected characteristics such as race, color, religion, sex, and national origin; and the Age Discrimination in Employment Act (ADEA), which protects individuals who are 40 years of age or older from discrimination by employers.

Finally and importantly, according to the EEOC, if an employer intends to use preemployment testing, the test must fairly measure the knowledge or skills required by the particular job or class of jobs, or afford the employer a chance to measure the applicant’s ability to perform a particular job or class of jobs. Before using a preemployment test, an employer is required to consider whether the test is truly necessary for evaluation of candidates and whether the test accurately measures the particular skill in question. A test also may not discriminate against a class of individuals. Accordingly, before determining that a test is necessary, an employer must ensure that the test is valid, job related, and uniformly applied to all applicants, and determine how it will be used in the hiring process.

AI in the Recruiting Process

Now that a sufficient background has been established, the legal ramifications of using artificial intelligence in the hiring process can be addressed.

Employers use AI in the hiring process in an effort to screen applicants more efficiently. Pre-COVID, many employers were turning to AI in response to the significant number of applicants per job. That problem is even more pronounced today as a result of high unemployment and the number of individuals seeking open positions. Without AI, a person must review each application or resume submitted to determine whether the applicant is “right” for the position. Obviously, the time required to do so is a key issue. A more significant issue, however, is the potential for bias in the reviewer. For example, a human reviewing an application may see certain information on the document—such as address, school, name, or some other item—and disqualify the applicant on that basis without further review. Depending on the reasons for such a determination, those actions may constitute discrimination.

Employers also use AI to examine personality traits, search social media, or administer tests and questionnaires to applicants, all functions that would otherwise have to be handled in person or through personal interaction. Using AI for these functions reduces screening time and, in theory, eliminates the potential for human bias in the hiring process.

The use of AI, however, does not come without its own inherent risks. As an initial matter, what types of tests or screening techniques are being performed by the AI? The laws governing preemployment testing do not disappear merely because the testing is performed by AI. If an AI test determines, or has the ability to determine, an individual’s disability, whether intentionally or unintentionally, it would violate the ADA. One situation where this could occur is cognitive testing or screening performed by AI that identifies a mental or physical disability. If the AI can glean such information and disqualifies an applicant as a result, it could give rise to a cause of action for disability discrimination.

As with any other preemployment testing process, an employer would need to establish the business reason for requiring an AI-performed test of applicants. An employer cannot merely believe that it is beneficial: The employer must articulate a reason for a test’s inclusion in the hiring process. This should be documented by an employer to defend against any potential claims down the road.

Disparate Impact

Another potential issue, which has already arisen in at least one situation, occurs when an AI tool disproportionately screens out a certain class of individuals.

If a screening mechanism used by an employer is discriminatory against a class of individuals, it can give rise to a cause of action. Employers had previously been subjected to claims in this manner when a physical requirement—such as height, for example—was allegedly meant to preclude a certain gender. The use of AI can have similar results.

If an AI program disproportionately excludes a certain class of individuals, it can lead to a disparate impact claim. (For example, Amazon tried to develop an AI tool for screening applicants, but found that it was disproportionately screening out female applicants.1)

Similar issues can arise with the use of AI in the hiring process. Depending on how the program is created, it can access an individual’s social media, review personality traits or speech patterns, or review other information not generally accessible during the preemployment process. For example, if an AI program is designed to review social media information and personality traits, it may determine the race or religion of an individual and screen that applicant out based upon its unconscious bias. The unconscious bias of the AI would be predicated on how it was created, i.e., its core assumptions may be unintentionally biased. Remember that AI is not created in a bubble. It is created, ultimately, by humans. If the program is created in a manner that is discriminatory or the information uploaded creates a bias, it is no different than if there was a human sitting on the other end of the screening process. Accordingly, if the AI program is created with unconscious bias issues present, it will ultimately screen applicants in a potentially discriminatory manner.
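One common way to quantify the kind of disproportionate exclusion described above is the EEOC’s “four-fifths” rule of thumb, under which a selection rate for any group that is less than 80 percent of the rate for the group with the highest rate may be evidence of adverse impact. The sketch below is illustrative only, not legal advice or a complete audit; the group names and pass counts are hypothetical.

```python
# Illustrative sketch of an adverse-impact audit of an AI screen's
# pass rates by group, using the EEOC's "four-fifths" rule of thumb.
# Group names and counts are hypothetical.

def selection_rate(selected, applicants):
    """Fraction of applicants in a group who passed the screen."""
    return selected / applicants

def impact_ratios(outcomes):
    """Compare each group's selection rate to the highest group's rate.

    outcomes: dict mapping group -> (selected, applicants).
    Returns a dict mapping group -> ratio of its rate to the best rate.
    """
    rates = {g: selection_rate(s, a) for g, (s, a) in outcomes.items()}
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

# Hypothetical screening results for two applicant groups.
outcomes = {"group_a": (48, 100), "group_b": (30, 100)}

for group, ratio in impact_ratios(outcomes).items():
    # A ratio below 0.8 falls short of the four-fifths guideline.
    flag = "review for disparate impact" if ratio < 0.8 else "ok"
    print(f"{group}: ratio {ratio:.2f} -> {flag}")
```

A ratio below 0.8 does not automatically establish a violation, but it flags groups whose outcomes warrant exactly the kind of pre-deployment review of the AI that this article recommends.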

Based upon the foregoing, what can employers do, and what should applicants know? Employers must carefully establish how they are going to use AI in the hiring process. Employers cannot merely implement AI screening processes or programs without understanding fully how such processes or programs are being used or the manner in which they were created. Employers must be aware of what information is being used by the program and how it is screening out applicants. There must be a thorough screening of the AI before it is used to screen applicants.

If AI will be conducting any tests, employers have to comply with the applicable preemployment testing laws. If a test has the ability to determine a physical or mental disability, it should be eliminated. Employers should document any and all decision-making with regard to the AI process to ensure that they are in compliance with applicable laws.

Employers must also remain aware of the ever-changing legal landscape surrounding the use of AI in the employment process. While there are no federal laws directly addressing AI beyond those discussed in this article, states have begun adopting AI-specific laws. Illinois, for example, recently enacted a law that requires employers, at a minimum, to explain to applicants what characteristics the AI will evaluate and how the AI works, and to obtain the applicant’s consent to the use of AI in the hiring process. Given that other states have similar laws pending, this appears to be a trend rather than an aberration.

Applicants must be aware of their rights under the law. If an employer is using AI in the screening process, the questions asked must be done in a manner consistent with the laws. Just because it is a computer asking the questions does not mean that an individual has waived their right to be free from discrimination.

Given the world we live in, the use of technology in the hiring process is not only likely to stay but also to grow. That technology, however, must be used in accordance with existing laws as well as those that will surely arise in response. If not, employers are likely to face the same type of claims that they would have otherwise faced with a human screening process.


1 Dastin, J. (October 19, 2018). “Amazon scraps secret AI recruiting tool that showed bias against women,” Reuters.

George C. Hlavac, Esquire, and Edward J. Easterly, Esquire, are attorneys in the Labor and Employment Law Department at Norris, McLaughlin & Marcus, P.A.