AI in the Hiring Process: 3 Ways to Avoid Unconscious Bias

For over 30 years, HR managers have increasingly automated the hiring process to make it more efficient and less time-consuming. With the hundreds of applications a single job post can attract, algorithms are a useful tool for weeding through resumes and applications to find the right fit. However, this process can also perpetuate an existing problem: a lack of diversity and equal opportunity for job applicants.

Even though software has been engineered to eliminate bias in the hiring process, for example by removing names and locations from applications, bias can still enter through the machine learning process itself, allowing human biases to proliferate. These problems need to be identified and remediated to make the hiring process fair and equitable.

What is AI bias?


Artificial intelligence (AI) is the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings. In hiring, the computer takes the place of a human resources professional in reviewing and selecting candidates for employment. The computer relies on training data provided by human beings to learn: the data is either fed to it directly to teach it what to do, or derived from behaviors tracked while it performs the duties for which it was built. Both methods can unintentionally introduce bias into the algorithm.

What’s the harm?

Since the purpose of AI in the hiring process is to make the initial, tedious decisions for the user, the tendency is to accept those decisions without question. This allows biases introduced by engineers or by incomplete data to affect outcomes in many ways. For instance, hiring platforms like ZipRecruiter use AI to learn from a company's or recruiter's interactions with certain types of candidates and then promote jobs specifically to people with those characteristics. Any implicit or explicit biases the human hiring managers had in their selection process are passed along to the AI, shaping future recommendations. Over time, this produces a very narrow pool of candidates, one likely to discriminate against certain backgrounds, races, and genders.

Algorithms that search resumes and applications for keywords entered by the user (the training data) have also unfairly eliminated qualified candidates. Amazon discontinued an algorithm that was screening out women because it preferred keywords more common in men's resumes. Luckily, the company noticed the bias and reacted to remedy the harm. It takes proactive monitoring of the AI's decision-making to discover potential harm being done, yet many companies using third-party tools trust the outcomes without monitoring them.
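To make the keyword problem concrete, here is a minimal sketch of a naive keyword screen. The candidates, keyword lists, and threshold are invented for illustration, not drawn from any real ATS:

```python
# Naive keyword-based resume screen (illustrative sketch only).
# Keyword lists, resume snippets, and the threshold are hypothetical.

def keyword_score(resume_text: str, keywords: list[str]) -> int:
    """Count how many of the chosen keywords appear in a resume."""
    text = resume_text.lower()
    return sum(1 for kw in keywords if kw.lower() in text)

def screen(resumes: dict[str, str], keywords: list[str], min_hits: int) -> list[str]:
    """Return candidates whose resumes match at least min_hits keywords."""
    return [name for name, text in resumes.items()
            if keyword_score(text, keywords) >= min_hits]

resumes = {
    "Candidate A": "Executed and captained projects; competitive sales record.",
    "Candidate B": "Led cross-functional projects; strong collaborative sales record.",
}

# A keyword list skewed toward one writing style silently drops an
# equally qualified candidate who describes the same work differently.
print(screen(resumes, ["executed", "captained", "competitive"], min_hits=2))
# -> ['Candidate A']

# A broader list, of the kind a diverse review team might settle on,
# keeps both candidates in the pool.
print(screen(resumes, ["led", "executed", "sales", "projects"], min_hits=2))
# -> ['Candidate A', 'Candidate B']
```

The point is not the code itself but the sensitivity: two resumes describing the same work survive or vanish depending entirely on which words the humans chose as "training data."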

How to avoid AI bias

  1. Train those involved in hiring on unconscious bias and the common ways it can influence the AI tools used throughout the process. When using a tool like an Applicant Tracking System (ATS), take time to research keywords that won't unfairly eliminate certain groups of people. Have a diverse group of people research and decide on the keywords for each job so that these mistakes are reduced or eliminated.

  2. Put more emphasis on advertising your position correctly rather than relying on online software to recommend candidates. Know that these recommendations may be based on biases introduced to the AI through past interactions, and that candidates will also have your jobs recommended to them. To ensure those matches are made without unconscious bias, spend careful time writing the job description in unbiased language.

  3. Look for undesired patterns in each step of the process, and question any use of AI that does not result in a diverse pool of candidates and, ultimately, workforce. Investigate how the AI “learns” and how it makes decisions. Periodically audit the ATS ranking system manually to ensure it is not unfairly ranking certain candidates lower than others.
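A manual audit of the kind step 3 describes can be as simple as comparing how often candidates from different groups advance past the ranking step. This sketch uses the "four-fifths rule," a disparity heuristic from US employment-selection guidance; the group labels and outcome counts below are entirely made up:

```python
# Simple disparity audit for a screening or ranking step (illustrative
# sketch; the groups and outcome data are hypothetical).

from collections import Counter

def selection_rates(outcomes):
    """outcomes: list of (group, advanced: bool) pairs -> rate per group."""
    totals, advanced = Counter(), Counter()
    for group, passed in outcomes:
        totals[group] += 1
        if passed:
            advanced[group] += 1
    return {g: advanced[g] / totals[g] for g in totals}

def four_fifths_check(rates):
    """Flag groups whose rate falls below 80% of the highest group's rate."""
    best = max(rates.values())
    return {g: r / best >= 0.8 for g, r in rates.items()}

# Hypothetical pass-through outcomes from an ATS ranking step.
outcomes = ([("group_a", True)] * 40 + [("group_a", False)] * 60
            + [("group_b", True)] * 20 + [("group_b", False)] * 80)

rates = selection_rates(outcomes)   # {'group_a': 0.4, 'group_b': 0.2}
print(four_fifths_check(rates))     # group_b's rate is half of group_a's,
                                    # well under the 80% threshold
```

An audit like this won't explain why a disparity exists, but it flags where to start questioning the tool's decisions.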

Remember, AI is not infallible: while it can reduce bias in the hiring process, it can also unintentionally add to the problem. It is important to properly vet each product used in your hiring process and not assume it will help you build a diverse and equitable workforce.

Jenna Mars

Edtech product professional with over 12 years' experience in education, data analysis, employee engagement, and DEI learning and development.
