Policy Brief

Artificial Intelligence in Hiring

Here you can read the policy brief in full.

As a powerful predictive technology, Artificial Intelligence (AI) is being used in more and more companies and across a variety of processes. This growing adoption makes it necessary to examine the impact of AI use. In this second policy brief, the ai:conomics research team looks at the use of AI in the hiring process in the context of discrimination and shows that AI can reduce discrimination in hiring and recruiting by increasing diversity through objective decisions, but that this is not always the case in practice. For AI to fulfil this potential, it needs the right algorithm design, transparency and trust in the technology.

Discrimination in hiring: a serious problem

Discrimination in hiring is a long-studied phenomenon. It often stems from people's unconscious interpersonal biases relating to personal characteristics (so-called unconscious bias).
As such, it not only influences individual hiring decisions but also contributes to inequalities in the labour market more broadly. People from overrepresented groups get better access to attractive job opportunities: they are more likely to be invited to interview and more likely to receive job offers. This preferential treatment limits the access of other groups.
This also hurts companies, as discrimination leads to skills mismatches and an inefficient allocation of resources.
Research has repeatedly found that recruitment procedures often involve discrimination based on personal characteristics that in reality have nothing to do with applicants' productivity, such as gender or nationality inferred from an applicant's name or appearance.

Can AI reduce discrimination?

AI technology has no innate preference for candidates with a certain appearance, a name that signals a particular nationality, or a degree from a particular college. In addition, intelligent hiring algorithms draw on large data sets from multiple sources, such as CVs, interviews and social media, to predict which candidates are best suited to fill a vacancy. Studies suggest that these predictions often outperform the judgements of subject-matter experts. For this reason, the technology has great potential as a tool to reduce discrimination in hiring.
AI can increase the diversity of successful candidates whilst also leading to better performing candidates overall.

Practice shows: Not every AI is unbiased

Current AI algorithms, so-called 'deep learning' or 'machine learning' methods, depend heavily on human-generated training data and can therefore reproduce bias, especially when historical data is involved. Tech giant Amazon is a prime example: in 2018, the company discovered that its AI-powered recruitment system had been trained on historical job performance data that was severely male-dominated and, in turn, contained higher performance scores for white men. Trained on that selection of information, the algorithm gave higher scores to white male applicants while screening out women and candidates with attributes associated with women.
This example clearly shows that the results of AI-based recruitment depend largely on the specific algorithm design. In certain circumstances, AI can even exacerbate labour market discrimination by reproducing human biases and unfair outcomes at the expense of certain groups of people.
This calls for caution: whether in curating the training data used by the AI, labelling that data or labelling outcomes, bias can be transferred from humans to the AI.
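
To make this mechanism concrete, the following is a minimal, hypothetical sketch in Python using scikit-learn; the data, variable names and numbers are invented for illustration and do not describe any real system. It shows how a model trained on historically biased hiring labels learns to penalise a job-irrelevant protected attribute:

    # Illustrative sketch only: all data here is synthetic and hypothetical.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 5000

    # One genuinely job-relevant feature (skill) and one protected,
    # job-irrelevant attribute (0 = group A, 1 = group B).
    skill = rng.normal(0, 1, n)
    group = rng.integers(0, 2, n)

    # Historical hiring decisions: driven by skill, but with a penalty
    # for group B -- past human bias baked into the training labels.
    hired = (skill - 0.8 * group + rng.normal(0, 0.5, n)) > 0

    # Train on the biased labels, with the protected attribute as a feature.
    X = np.column_stack([skill, group])
    model = LogisticRegression().fit(X, hired)

    # Two equally skilled candidates, one from each group, receive
    # different scores: the model has learned the historical penalty.
    same_skill = np.array([[0.0, 0], [0.0, 1]])
    print(model.predict_proba(same_skill)[:, 1])  # group B scores lower

Note that simply dropping the protected attribute from the features is not a complete fix: other features correlated with it can still carry the historical bias into the model's predictions.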

It takes the right algorithm design, trust and transparency

Studies show that the use of AI in the recruitment process can raise doubts among applicants about the fairness of the process. Applicants may perceive decisions made by an algorithm as more unfair than those made by humans, regardless of whether they are actually less, equally or even more fair than human decisions. As a result, they may not apply for, or may not accept, a job. If they accept the job despite such concerns, trust in the employment relationship may be damaged. This can mean, for example, that insufficiently qualified people enter the application process and that staff turnover increases. Consequently, workers' perceptions of the fairness of AI may influence the recruitment process and workers' self-selection more strongly than the actual fairness of the AI. Workforce perceptions therefore become a critical factor in making informed decisions about algorithm designs and policies for their ethical implementation.
These findings clearly show that transparency is needed wherever AI is applied. People tend to distrust the unknown, so transparency and openness are crucial for creating greater acceptance of, and participation in, algorithmic hiring practices.

As technologies do not evolve deterministically, AI will not drift towards more or less discrimination of its own accord. As with any technology, many decisions need to be made about design and application. Decisions that are made carefully, documented in detail and made available to the people concerned make AI more transparent and explainable. This is a basic prerequisite for AI to develop its full potential in hiring processes.
