AI & HR: The Risks of Using Artificial Intelligence in the Hiring Process
By: Vincent Fisher
Artificial intelligence (AI), robotics, and other emergent technologies stand to profoundly impact employers and, indeed, society itself. According to some estimates, by 2025 half of all U.S. jobs will be automated or augmented by AI, or employers will at least have taken steps in that direction. As AI becomes infused into workplaces, it has the potential to redefine what “work” means. Because the “work” each of us performs helps shape the social order, AI may well be poised to transform society.
AI today is relatively modest, and it is rife with apparent contradictions. AI is anticipated to make economies grow while simultaneously causing jobs that are ubiquitous today to disappear. Not unlike humans, AI can be brilliant one moment, such as when it recognizes new galaxies in deep space, yet terribly inept the next, such as when, in one experiment, it could not distinguish a turtle from a rifle.
The “Flynn Effect,” named for intelligence researcher James Flynn, is the supposition that human IQ scores rise by an average of three points per decade. Based on a standardized score of 100 IQ points, the Flynn Effect suggests that an average person today, if he or she could time travel back to 1910, would have a relative IQ of 130, a score higher than 98 percent of the population in 1910. Put simply, we all are (or should be) smarter today than we were ten years ago. But the Flynn Effect flat-lined or regressed beginning in the 1990s, when the Internet came to prominence. Significantly, the ongoing reversal of the Flynn Effect has been most acutely observed in technologically advanced, first-world nations with robust education and social welfare systems, such as Norway, France, and Britain.
A more pressing concern for employers, however, is that although AI has the salutary goal of eliminating bias from employment decisions such as hiring, in practice the results suggest the opposite may occur.
In October 2018, Amazon had the unfortunate distinction of making the news for reports that it abandoned an experiment in which it built an AI-augmented recruiting engine, designed to automate the review of job applicants’ resumes and identify top-tier talent. A year into the experiment, Amazon realized its program was not identifying qualified applicants in a gender-neutral manner. The algorithm downgraded resumes of applicants who attended certain women’s colleges and resumes that included the word “women’s” (e.g., “women’s soccer”). The defect arose because AI trainers fed the algorithm data, including resumes, from the company’s then-existing population of software engineers, which was mostly male. Based on this data, the AI recruiter captured what it thought was Amazon’s preferred demographic and tended to jettison or downgrade other, mainly female, applicants. Restated, the algorithm latched on to human implicit bias rather than focusing on job requirements or qualifications, a theme that permeates employment discrimination cases.
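The failure mode described above, in which a model trained on biased historical outcomes learns a proxy for a protected characteristic, can be illustrated with a toy word-scoring model. Everything below is invented for illustration: the resumes, the labels, and the scoring scheme are hypothetical and do not reflect how Amazon’s actual system worked.

```python
# Toy sketch of how a resume screener trained on biased historical data
# can learn a proxy feature. All data here is hypothetical.
from collections import Counter

# Hypothetical historical outcomes from a mostly male workforce:
hired = [
    "software engineer java chess club",
    "software engineer python robotics",
    "engineer c++ hackathon winner",
]
rejected = [
    "software engineer python women's soccer captain",
    "engineer java women's chess club",
]

def word_weights(hired, rejected):
    """Score each word by how much more often it appears in hired resumes."""
    hired_counts = Counter(w for resume in hired for w in resume.split())
    rejected_counts = Counter(w for resume in rejected for w in resume.split())
    vocab = set(hired_counts) | set(rejected_counts)
    return {
        w: hired_counts[w] / len(hired) - rejected_counts[w] / len(rejected)
        for w in vocab
    }

def score(resume, weights):
    """Sum the learned weights of the words in a new resume."""
    return sum(weights.get(w, 0.0) for w in resume.split())

weights = word_weights(hired, rejected)
# The word "women's" acquires a negative weight purely because of the
# biased historical labels, not because of any job qualification:
print(weights["women's"])  # negative
print(score("software engineer python women's soccer", weights))
```

A new resume containing “women’s” scores lower than an otherwise identical one without it, even though the word says nothing about job performance. This is the sense in which such systems can absorb implicit bias from their training data.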
Amazon’s lesson speaks to the core defect in modern AI-augmented programs – namely, that such systems are programmed by humans, who tend to embed their biases in the algorithms. Consider, for example, that Alexa, Cortana, and Siri (in the U.S.), arguably the three most common AI/voice-controlled interfaces, all have female voices. Is this because consumers simply prefer a female voice to guide them, or does it reflect and reinforce some sort of subservient, gender-based view of women? (It is unclear whether male voices fare better: on the one hand, IBM’s Watson AI, which speaks with physicians about cancer treatments and won Jeopardy, speaks with a male voice; on the other, perhaps the most famous male AI voice belongs to HAL, the homicidal computer from 2001: A Space Odyssey.)
Although Amazon abandoned its effort, other similar types of AI programs are in use throughout American workplaces. Recruiting programs that favor something as innocuous as zip codes closer to the workplace could have a disparate impact on ethnic groups that cluster in non-favored zip codes. By way of further example, algorithms that monitor employee attendance could, if not properly programmed, flag certain populations of employees who require ongoing medical care (e.g., those with diabetes) as “high risk” and place them at increased risk of scrutiny. A company policy that ensures a “human in the loop” at all stages of potential adverse employment actions can help mitigate these risks.
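One concrete check a “human in the loop” can perform is the four-fifths (80 percent) guideline from the EEOC’s Uniform Guidelines on Employee Selection Procedures, under which a selection rate for one group below 80 percent of the highest group’s rate may signal adverse impact. The sketch below applies that arithmetic to the zip-code scenario; the group names and selection counts are hypothetical.

```python
# Minimal sketch of a four-fifths-rule check for adverse impact.
# Group labels and counts below are hypothetical, for illustration only.

def selection_rate(selected, applicants):
    """Fraction of a group's applicants who were selected."""
    return selected / applicants

# Hypothetical outcomes from an automated, zip-code-weighted resume screen:
groups = {
    "favored_zip_codes": {"applicants": 200, "selected": 60},
    "non_favored_zip_codes": {"applicants": 150, "selected": 18},
}

rates = {
    name: selection_rate(d["selected"], d["applicants"])
    for name, d in groups.items()
}
highest = max(rates.values())

for name, rate in rates.items():
    ratio = rate / highest
    flag = "POTENTIAL ADVERSE IMPACT" if ratio < 0.8 else "ok"
    print(f"{name}: selection rate {rate:.2f}, ratio {ratio:.2f} -> {flag}")
```

Here the non-favored group’s selection rate (0.12) is only 40 percent of the favored group’s rate (0.30), well under the four-fifths threshold, which is the kind of result that should prompt human review before the screen is relied upon.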
Employers wishing to implement, or that have already implemented, AI systems must be aware of the risks that accompany the very real rewards of AI. Moreover, employers must be aware that legislatures, administrative agencies, and the plaintiffs’ bar are taking note.
In February 2019, President Donald Trump issued an executive order that set forth a broad strategy for the United States to establish itself as the global leader in AI. The Congressional Artificial Intelligence Caucus has also been formed to address AI’s rapid development. In 2016, the Equal Employment Opportunity Commission (EEOC) began to study the impact that algorithms have on equal employment. Also in 2016, the American Civil Liberties Union filed a lawsuit on behalf of academic researchers, computer scientists, and journalists seeking to investigate the potentially discriminatory effects of the algorithms used by online companies, websites, and other platforms. See Sandvig v. Sessions, No. 1:16-cv-01368 (JDB) (D.D.C.).
An AI system that does not sufficiently account for legal variables runs the risk of replicating biases, and of doing so at scale. Using a common algorithm to engage with identifiable populations of applicants and employees can, without proper legal guidance and defense, serve as a significant threat vector, increasing the likelihood that a named plaintiff could certify a class in an employment discrimination class action.