Lancaster University Management School - 54 Degrees Issue 25

AI is rapidly reshaping all stages of the recruitment and selection process. The hope is that AI technologies (machine learning models and algorithms) are appropriately and ethically enrolled and configured within organisations to improve the hiring process for recruiters, employers, and candidates. However, problems can stem from how these technologies are paradoxically viewed as both cold, rational-objective machines and anthropomorphised bots with humanlike capabilities and ‘personalities’.

In anthropomorphising AI technologies, we fall into a trap: a fallacy that views the machine as capable of performing the same role as the human recruiter, when it is in fact performing an altogether different function. AI technologies are not a ‘like-for-like’ replacement: they search for statistical patterns in the data, while a human recruiter is encultured, socialised and embedded in the world. While relying on machines to perform human social functions is problematic, treating AI technologies as unbiased and objective also requires serious scrutiny.

ELIMINATING BIAS OR SUPPLANTING IT?

While there are claims that these technologies can ‘debias’ hiring processes, they can also exhibit ‘algorithmic emergent biases’: troublesome, unexpected, and difficult-to-spot features of how the machine ‘learns’ from the data through probabilistic pattern recognition. For instance, machine learning algorithms programmed to rank candidates objectively can amplify pre-existing biases. They can ‘learn’ statistical correlations that are dominant in a sector, such as the over-representation of men in STEM, and then actively filter out women applicants, having developed a preference for men. Similarly, machine learning models can develop unwanted and unexpected preferences, unbeknownst to the original designers and programmers of the system.
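The STEM pattern described above can be sketched with a toy scoring model. The data, feature name, and function below are hypothetical, not drawn from any real system: the point is that a model which scores applicants by historical hire rates can learn a proxy for gender even when gender itself is removed from the inputs.

```python
# Hypothetical historical records. Gender is not a field, but a proxy
# feature ("womens_college") correlates with it -- and past hiring
# decisions favoured men, so the proxy correlates with rejection.
history = [
    {"womens_college": 1, "hired": 0},
    {"womens_college": 1, "hired": 0},
    {"womens_college": 1, "hired": 1},
    {"womens_college": 0, "hired": 1},
    {"womens_college": 0, "hired": 1},
    {"womens_college": 0, "hired": 0},
    {"womens_college": 0, "hired": 1},
]

def hire_rate(records, feature_value):
    """Empirical P(hired | feature) -- the 'pattern' a scoring model learns."""
    subset = [r for r in records if r["womens_college"] == feature_value]
    return sum(r["hired"] for r in subset) / len(subset)

# The learned pattern: the proxy feature alone lowers an applicant's
# predicted score, although it says nothing about ability to do the job.
print(hire_rate(history, 1))  # roughly 0.33
print(hire_rate(history, 0))  # 0.75
```

A real recruitment model is far more complex, but the mechanism is the same: the system faithfully reproduces the statistical regularities of past decisions, biases included, without any designer having asked it to.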
For instance, an applicant-ranking AI system tasked with analysing video/audio recordings of successful candidates who all coincidentally coughed during their interviews may determine that coughing makes a candidate more suitable for the position, and rank those who cough more highly in future! A human interviewer would disregard this, seeing it as irrelevant to the candidate’s performance and a natural part of human-to-human interaction.

OVERSIGHT REQUIRED

Given these potential problems and their impact on job applicants’ careers and lives, AI recruitment should not be treated as a plug-and-play technology: it requires systematic ethical and operational oversight by organisations. Three normative ethical dimensions are regularly invoked as particularly important: fairness, accountability, and transparency. Fairness means ensuring that AI systems operate equitably, avoiding biases and discrimination. Because these systems learn from historical data, they can inherit and even amplify patterns linked to gender, ethnicity, education, or social background, and discover biases that we are not even aware of.
