The Future of AI in Recruiting: Eliminating Bias and Hiring the Perfect Candidate

By: Dr. Rod Fontecilla

A company’s success is highly dependent on its largest investment – its people. Organizations that have a true representation of diverse cultures, genders, races, and viewpoints benefit from a wide range of perspectives.

Despite striving for greater diversity, however, many technology companies are showing minimal progress. In part, this is due to hiring biases that keep organizations from developing diverse teams and providing opportunities to women, people of color, and other deserving candidates.

And yet, AI has the potential to significantly reduce the amount of bias in the recruitment process. This is great news for the technology industry. By focusing exclusively on data – a person’s past expertise or proven track record of accomplishments – and not the more general “defining” personal characteristics that might come across in a resume, an AI engine can provide hiring managers with a clear, unbiased assessment of each candidate. Managers can use this information to make data-driven decisions about candidates, rather than being clouded by subconscious beliefs about social and identity groups.

But despite developments in AI-assisted recruitment, there is still a long way to go.

Man-made algorithms remain fallible

While AI has made huge advances in removing prejudice or judgment from candidate selection, the data used to create the algorithms that screen individuals may not be entirely bias-free.

Traditionally, an Applicant Tracking System (ATS) selects and prioritizes keywords or skill sets for each open position. These keywords are incorporated into the matching algorithm, and each candidate is ranked based on how well their resume scores against the job description.
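The mechanics are straightforward to sketch. Below is a minimal, hypothetical illustration of keyword-based resume scoring – the keyword weights and sample resumes are invented for demonstration, not taken from any real ATS:

```python
def score_resume(resume_text: str, keyword_weights: dict[str, float]) -> float:
    """Sum the weights of job-description keywords found in a resume."""
    words = set(resume_text.lower().split())
    return sum(weight for kw, weight in keyword_weights.items() if kw in words)

# Hypothetical weights a recruiter might assign for an open position.
keywords = {"python": 3.0, "aws": 2.0, "leadership": 1.5, "agile": 1.0}

resumes = {
    "candidate_a": "Led an agile team building Python services on AWS",
    "candidate_b": "Seasoned cloud engineer with a strong leadership background",
}

# Rank candidates by keyword score, highest first.
ranked = sorted(resumes, key=lambda c: score_resume(resumes[c], keywords), reverse=True)
print(ranked)  # candidate_a outscores candidate_b on raw keyword matches
```

Note how brittle this is: candidate_b may be equally qualified, but because their resume uses different vocabulary, the score penalizes them – which is exactly the failure mode discussed below.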

Yet, too often, these keyword choices can work against any efforts to remove bias. In her article, Here’s How Your Word Choices Could Affect Hiring Gender-Diverse Talent, Sarah O’Brien cites research from LinkedIn that the use of terms such as “aggressive” – which is used in over 50,000 job descriptions on the platform – could discourage almost half of women from applying for a job.
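One practical mitigation is to audit job descriptions for such terms before they are posted. Here is a simple sketch of that idea – the word list is a small hypothetical sample for illustration, not LinkedIn’s actual research data:

```python
# Hypothetical list of terms that research suggests may discourage
# some applicants; a real audit would use a vetted, maintained list.
FLAGGED_TERMS = {"aggressive", "dominant", "rockstar", "ninja"}

def flag_terms(job_description: str) -> list[str]:
    """Return any flagged terms that appear in the job description."""
    tokens = {t.strip(".,;:!?").lower() for t in job_description.split()}
    return sorted(FLAGGED_TERMS & tokens)

jd = "We seek an aggressive, results-driven rockstar engineer."
print(flag_terms(jd))  # surfaces terms for a human to review before posting
```

Even a check this basic shifts the bias question upstream, to the wording of the job description itself, rather than leaving it buried inside the matching algorithm.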

Qualified candidates are still slipping through the cracks

Diversity isn’t the only problem that many of today’s recruitment algorithms are struggling to solve. The technology may also be missing qualified candidates. For example, words like “significant,” “strong,” and “mastery” in a job description can be clues that those skills will be weighted heavily and should be emphasized in a resume. But a great applicant may easily slip through the cracks simply because the keywords on their resume don’t match those used in the hiring company’s job description.

AI also tends to look at hard skill sets and other data points, and may miss someone who offers a more well-rounded match for the role.

Because of this, AI should not be used in a vacuum. Companies still need to rely on the instincts and recommendations of recruiters and hiring managers to better assess the “soft skills” that might be required to land a job. In a truly effective scenario, AI is used to build a solid, bias-free candidate list that is then handed off to the hiring manager, who then uses their own “people skills” to make the final determination on who will be right for the role.

Of course, there’s still the chance that bias may be introduced during the interview stage. But by filtering out candidates in a non-biased way during the first stage of the recruitment process, organizations will have overcome that initial hurdle of identifying a talent pool without succumbing to any inclination to judge a candidate based on their name, where they’re from, or what school they attended.

AI in recruitment is here to stay – here’s how we can make it better

As organizations seek to level the playing field and eliminate bias in recruitment while finding the proverbial “needle in the haystack,” how can they work to overcome some of the potential drawbacks of using AI in the recruiting process?

It’s important to understand that AI does not replace humans. Indeed, the challenge of leveraging technology and data to streamline the recruitment process and remove implicit bias remains very much a human endeavor.

To really transform recruitment with AI, data scientists, hiring managers, and recruiters must step out of their respective organizational silos and collaborate to ensure that the right candidate details are prioritized and candidate scoring criteria are free of bias. Only with this due diligence can data scientists develop algorithms that filter a pool of qualified candidates – without prejudice or bias.

Once organizations get to this point, they can begin to train their algorithms to incorporate other dimensions such as the ability to evaluate soft skill sets.

AI-powered, data-driven recruitment is smart and getting smarter. But future efforts will require an ongoing commitment to collaboration and innovation to achieve a desired state where no application is overlooked because of hidden biases or an incomplete picture of the candidate.

About the author: Dr. Rod Fontecilla is Dovel’s Chief Innovation Officer and Chief Data Scientist. Dr. Fontecilla directs strategic research and development efforts, advances initiatives to expand corporate technical competencies and capabilities, and fosters the development of innovative solutions for current and prospective customers of Dovel. He most recently served as Senior Vice President and CTO for Altamira Technologies, where he led the creation of solutions using Advanced Data Analytics, Synthetic Data Generation, Computer Vision, and Robotic Process Automation in support of National Security agencies. He has held technical leadership roles, focused on software engineering, cloud integration, cybersecurity, and data analytics, at Unisys, Sevatec, and Booz Allen Hamilton. Dr. Fontecilla serves as an adjunct professor at American University’s Kogod School of Business. He earned his Ph.D. in Applied Mathematics from Rice University.