AI Recruiting Tools Aim to Reduce Bias in the Hiring Process
Artificial intelligence software promises to make hiring fairer. But how well does it work?
Two years ago, Amazon reportedly scrapped a secret artificial intelligence hiring tool after realizing that the system had learned to prefer male job candidates and penalize female applicants, a result of training the AI on resumes submitted to the company mostly by men. The episode raised concerns that machine learning in hiring software could perpetuate or even exacerbate existing biases.
Now, with the Black Lives Matter movement spurring new discussions about discrimination and equity issues within the workforce, a number of startups are trying to show that AI-powered recruiting tools can in fact play a positive role in mitigating human bias and help make the hiring process fairer.
These companies claim that, with careful design and training of their AI models, they can specifically address various sources of systemic bias in the recruitment pipeline. It’s not a simple task: AI algorithms have a long history of discriminating along lines of gender, race, and ethnicity. The strategies these companies have adopted include scrubbing identifying information from applications, relying on anonymous interviews and skill-set tests, and even tuning the wording of job postings to attract as diverse a field of candidates as possible.
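To make the first of those strategies concrete, here is a minimal sketch of what scrubbing identifying information from an application might look like. It is purely illustrative: the field names, regular expressions, and redaction rules are assumptions for this example, not any company’s actual pipeline.

```python
import re

# Hypothetical illustration only: sketches the general idea of redacting
# identifying fields before an application reaches a screening model.

# Fields treated as identifying; the exact list is an assumption.
IDENTIFYING_FIELDS = {"name", "email", "phone", "address", "photo_url"}

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def scrub_application(application: dict) -> dict:
    """Drop identifying fields and redact contact info from free text."""
    scrubbed = {k: v for k, v in application.items()
                if k not in IDENTIFYING_FIELDS}
    for key, value in scrubbed.items():
        if isinstance(value, str):
            value = EMAIL_RE.sub("[REDACTED]", value)
            value = PHONE_RE.sub("[REDACTED]", value)
            scrubbed[key] = value
    return scrubbed

application = {
    "name": "Jane Doe",
    "email": "jane@example.com",
    "cover_letter": "Reach me at jane@example.com or +1 (555) 010-0000.",
    "skills_test_score": 87,
}
print(scrub_application(application))
# {'cover_letter': 'Reach me at [REDACTED] or [REDACTED].', 'skills_test_score': 87}
```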
One of these firms is GapJumpers, which offers a platform for applicants to take “blind auditions” designed to assess job-related skills. The startup, based in San Francisco, uses machine learning to score and rank each candidate without including any personally identifiable information. Co-founder and CEO Kedar Iyer says this methodology reduces the traditional reliance on resumes, which as a source of training data are “riddled with bias,” and avoids unwittingly replicating and propagating those biases through the scaled-up reach of automated recruiting.
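GapJumpers has not published its scoring model, so the following is only a rough sketch of the general idea: candidates identified by opaque tokens are ranked on their challenge scores alone, with no personally identifiable features in the input. Every name and number here is invented.

```python
from dataclasses import dataclass

# Illustrative sketch, not GapJumpers' actual method: rank anonymized
# candidates purely on graded, job-related challenge scores.

@dataclass
class BlindAudition:
    candidate_id: str               # opaque token, not a name
    challenge_scores: list[float]   # scores on job-related tasks

def rank_candidates(auditions: list[BlindAudition]) -> list[tuple[str, float]]:
    """Rank candidates by mean challenge score, highest first."""
    scored = [(a.candidate_id,
               sum(a.challenge_scores) / len(a.challenge_scores))
              for a in auditions]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

auditions = [
    BlindAudition("cand-017", [82.0, 91.0]),
    BlindAudition("cand-042", [88.0, 95.0]),
]
print(rank_candidates(auditions))
# [('cand-042', 91.5), ('cand-017', 86.5)]
```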
That deliberate approach to reducing discrimination may be encouraging more companies to try AI-assisted recruiting. As the Black Lives Matter movement gained widespread support, GapJumpers saw an uptick in queries from potential clients. “We are seeing increased interest from companies of all sizes to improve their diversity efforts,” Iyer says.
AI with humans in the loop
Another lesson from Amazon’s gender-biased AI is that paying close attention to the design and training of the system is not enough: AI software will almost always require constant human oversight. Developers and recruiters cannot afford to blindly trust the results of AI-powered tools; they need to understand the processes behind them, know how different training data affect their behavior, and monitor the outputs for bias.
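One concrete form such monitoring can take is an adverse-impact check on a tool’s outcomes. The sketch below applies the “four-fifths rule” used in U.S. employment-discrimination analysis, under which a selection rate below 80 percent of the highest group’s rate is a red flag warranting review; the group labels and counts are invented for illustration.

```python
# Bias-monitoring sketch: flag groups whose selection rate falls below
# 80% of the best-off group's rate (the "four-fifths rule").
# Group names and numbers below are made up for illustration.

def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """outcomes maps group -> (selected, total_applicants)."""
    return {g: selected / total for g, (selected, total) in outcomes.items()}

def adverse_impact_ratios(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Ratio of each group's selection rate to the highest group's rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

outcomes = {"group_a": (30, 100), "group_b": (18, 100)}
for group, ratio in adverse_impact_ratios(outcomes).items():
    flag = "OK" if ratio >= 0.8 else "REVIEW: possible adverse impact"
    print(f"{group}: ratio={ratio:.2f} -> {flag}")
# group_a: ratio=1.00 -> OK
# group_b: ratio=0.60 -> REVIEW: possible adverse impact
```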
“One of the unintended consequences would be to continue this historical trend, particularly in tech, where underserved groups such as African Americans are not within a sector that happens to have a compensation that is much greater than others,” says Fay Cobb Payton, a professor of information technology and analytics at North Carolina State University, in Raleigh. “You’re talking about a wealth gap that persists because groups cannot enter [such sectors], be sustained, and play long term.”
In a paper published last year in the journal Online Information Review, Payton and her colleagues highlighted several companies, including GapJumpers, that take an “intentional design justice” approach to hiring diverse IT talent.
According to the paper’s authors, AI hiring tools can perform a broad spectrum of possible actions. Some merely provide general suggestions about what kind of candidate to hire; others recommend specific applicants to human recruiters; and some even make active screening and selection decisions about candidates. But whatever the AI’s role in the hiring process, humans need the capability to evaluate the system’s decisions and, when necessary, override them.
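What that override capability might look like in software is sketched below. The pattern is generic and hypothetical, not any vendor’s actual workflow: the model only proposes a decision, a human recruiter makes the final call, and every disagreement is logged for later audit.

```python
from dataclasses import dataclass, field

# Generic human-in-the-loop pattern (an assumption for illustration):
# the model recommends, a human decides, and overrides are auditable.

@dataclass
class Review:
    candidate_id: str
    model_recommendation: str      # e.g. "advance" or "reject"
    human_decision: str | None = None

@dataclass
class HiringQueue:
    audit_log: list[str] = field(default_factory=list)

    def decide(self, review: Review, human_decision: str) -> str:
        """Record the human's final call; log any override of the model."""
        review.human_decision = human_decision
        if human_decision != review.model_recommendation:
            self.audit_log.append(
                f"{review.candidate_id}: human overrode model "
                f"({review.model_recommendation} -> {human_decision})"
            )
        return human_decision

queue = HiringQueue()
review = Review(candidate_id="cand-042", model_recommendation="reject")
queue.decide(review, human_decision="advance")  # recruiter disagrees
print(queue.audit_log)
# ['cand-042: human overrode model (reject -> advance)']
```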
“I believe that human-in-the-loop should not be at the end of the recommendation that the algorithms suggest,” Payton says. “Human-in-the-loop means in the full process of the loop from design to hire, all the way until the experience inside of the organization.”