Organisations continue to face the inherent challenges of a competitive job market; identifying and retaining top talent is a pivotal exercise for all modern companies. Against this backdrop, it is not surprising that, with the advent of predictive analytics powered by artificial intelligence (AI) and machine learning, many larger firms have actively implemented technology that enables data-driven hiring decisions and promises to optimise candidate success and retention. As a professional recruiter, I approach this topic with a large dose of hesitancy.
The theory is this: traditional recruitment methods often rely on subjective assessments and gut instinct, which, it is said, can lead to inefficiencies and mismatches between the candidate appointed, their job role, and the organisation's culture. The promise is that predictive analytics powered by artificial intelligence reduces this subjectivity. By leveraging vast amounts of historic data (relating to candidates, employers, and job functions), these tools can identify correlations and trends that indicate the probability of a given candidate's success and retention – and thus lead the hiring party to make talent acquisition decisions based upon a highly data-driven model.
Whilst there are some notable opportunities in using AI to aid a recruitment process (efficiency, diversity, accuracy, and strategic talent management), where the technology is ill-planned and executed without sufficient finesse, sensitivity, and temperance, I believe there are significant long-term negative consequences relating to the introduction of bias and, consequently, hiring discrimination. This has become a topic of debate, with much anecdotal evidence demonstrating that the adoption of algorithmic tools has caused inefficiency and, in many cases, has turned what should be the human experience of hiring a future employee into something that can be likened to an experience on a dating website. This is, in its most optimistic sense, highly impersonal, and at its extreme simply immoral.
There is a real risk of systematic and unfair favouritism or prejudice that can (and does) influence algorithmic decision-making in talent acquisition initiatives. These biases can manifest in various forms; here are a few to consider:
Historical Bias: AI algorithms are trained on historical data that may perpetuate and, in fact, magnify existing biases in a company's hiring decisions. If that data reflects systemic inequalities in former recruitment practices, AI models are likely to reinforce them.
Data Bias: Biases can also arise from the quality and representativeness of the data used to train AI algorithms. If training data is skewed or unrepresentative of the diverse population of job seekers, algorithms may produce biased outcomes that disproportionately disadvantage (or, by extension, advantage) certain demographic groups. Consider that, when applied to its fullest extent, AI will preselect without compromise, strictly according to its trained criteria: there is no opportunity for a qualitative decision, nor, in all likelihood, any means for a job seeker to challenge the decision made.
Algorithmic Bias: Given the complexity of modelling human behaviour and preferences, the algorithm itself can become inadvertently biased. Factors such as feature selection, weighting, and decision thresholds can introduce unintended biases into AI-driven decision-making processes.
Feedback Loop Bias: Biased outcomes generated by AI algorithms may create feedback loops that reinforce inequalities in a firm's talent acquisition processes. For example, if certain demographic groups are consistently overlooked or rejected by AI-driven tools, their absence from the resulting hiring data further entrenches the algorithm's selection patterns.
AI has the potential to enhance talent acquisition by improving efficiency and facilitating data-driven matches between candidates, their job function, and their organisation. However, the potential for impersonal decision-making to produce deeply unfair outcomes is very real.
Organisations would be wise to address concerns relating to bias and, thus, potential discrimination. There are further considerations surrounding data privacy and transparency that need to be carefully monitored to ensure the ethical and responsible use of AI in recruitment initiatives. Predictive models require ongoing refinement and validation to remain accurate, so companies deploying them should establish mechanisms for continuous monitoring and evaluation of AI talent acquisition systems to detect and address emerging biases or disparities. Diversity is king for many organisations; let's ensure that it is not neglected.
First published in AGEFI May 2024
Courtney Charlton