
Organisations therefore need to actively test and monitor their tools across different groups, rather than assuming neutrality. Fairness is not something that can be “built in” once. It requires ongoing ‘red teaming’, to borrow the approach from cybersecurity: rigorous checking, adversarial probing, and continual questioning of the system as it learns from new data and interacts with new candidates in new ways.

Accountability means establishing clear human responsibility for AI systems’ design, implementation, and outcomes; that is, keeping human judgement at the centre throughout the hiring process. It is people within an organisation who choose a specific system, define its goals, and decide how its outputs are used. Decisions, therefore, should not be justified by a simple “the algorithm says NO!”. Recruiters and employers should develop their ‘critical AI literacy’ so that they can diligently review decisions, intervene where needed, and take ownership of processes and outcomes.

Transparency, finally, means designing AI systems to be as interpretable and understandable as possible for both employers and candidates, providing clear information on their functionality and decision-making processes. In practice, organisations should give candidates jargon-free explanations of how the system evaluates applicant performance, including what kinds of criteria it uses to assess their responses. They should also ensure there is a clear route for candidates to ask questions or challenge an outcome if something seems wrong.

FROM WISHES TO REALITY

While these principles of fairness, accountability, and transparency are a useful starting point, we also need to question the degree to which they can be both meaningfully understood and realistically enacted in organisations; otherwise they risk becoming lofty aspirations, a nice wish list.

To deal with the future ethical and practical challenges of AI in recruitment, we need to move towards a ‘disclosive ethics’ approach that is attentive to the concrete entanglement of human-machine interactions. This means examining the ways in which the social processes of hiring are reorganised and reconfigured around these emerging and constantly shifting AI technologies. In seeking to reconfigure the social and ethical around the technological, the danger is not only that we anthropomorphise machines, but that we risk mechanomorphising ourselves and our social practices.

AI has already disrupted hiring, changing how job seekers find and apply for jobs and how organisations source, shortlist, and assess applicants; the FT has claimed that ‘recruitment is broken’. Given this transformation, we are still playing catch-up in dealing with both its practical and ethical implications. Computer systems, even those labelled as artificial intelligence, are not infallible.

Dr Huw Fearnall-Williams is a Lecturer in the Department of Organisation, Work and Technology.

Dr Emrah Ali Karakilic is a Lecturer in the Department of Organisation, Work and Technology.

Dr Fearnall-Williams contributed to the Better Hiring Institute’s Artificial Intelligence in Hiring report, published in 2025. The guidance is the result of a collaboration with Reed Screening, Arctic Shores, Lancaster University, Tesco, and the TUC.

huw.fearnallwilliams@lancaster.ac.uk; e.a.karakilic@lancaster.ac.uk
