Lancaster University Management School - 54 Degrees Issue 22

AI is not some great conspiracy designed to control the world and all its inhabitants, but rather a force, neither necessarily good nor bad, that may lead us down a cul-de-sac of collective self-discipline of totalising proportions. Take, for example, the performance guidelines for, and monitoring of, delivery drivers, described by a UK all-party parliamentary group (APPG) as negatively affecting the mental and physical wellbeing of workers as "they experience the extreme pressure of constant, real-time micro-management and automated assessment". Or the worries of a Stanford economist about the "Turing trap", wherein the automation of human activities using brute computational force could leave wealth and power in fewer and fewer hands. Erik Brynjolfsson writes: "With that concentration comes the peril of being trapped in an equilibrium in which those without power have no way to improve their outcomes".

More recently, Jeremy Howard, an Artificial Intelligence researcher, introduced ChatGPT to his seven-year-old daughter and, after she asked several questions, concluded that it could become a new kind of personal tutor, teaching her maths, science, English and other important lessons, though he warned her not to believe everything it told her.

A TRANSFORMATIVE EFFECT

These examples clearly demonstrate how AI transforms our sense of ourselves (our identities), in terms of both our economic and social existence. They indicate a relation of power between AI and users in which the voice of one (AI) exercises authority over the subjectivities of those with whom it interacts. Research by Taina Bucher suggests that, because of how AI continually represents our preferences or past ways of behaving, we are beginning to see ourselves through the 'eyes' of the algorithm. However, since AI is unable to embody feelings, emotions and a sense of what it is to be human, it remains constrained within a cerebral logic and rationality.
While this can capture our concern to behave rationally in decision-making and physical operations, it is bereft of soul, affective energy and passion. This is clear from asking ChatGPT the question: can AI be embodied? While it claimed that AI could be, on elaboration its focus was restricted to the physical body and its relationship to external objects. As well as physicality, embodiment involves the emotions, feelings and spontaneity that allow us a diverse range of expressions, including humour, sentiment and empathy, that may not be reducible to the linear sequences of instructions constructed by an algorithm.

Of course, AI reinforces our existing identities insofar as its algorithms are based on data drawn from our past behaviour. In this sense, it affects our emotions, feelings and affective energy even though it cannot itself reproduce them. While there is no question that AI can advance our civilisation for the benefit of us all, and no possibility of reversing its continuing development, we do have to restrain its potential for harm.

David Knights is an Emeritus Professor in the Department of Organisation, Work and Technology. His research interests encompass areas including gender, technology, and higher education within the sphere of work and organisations.

Dr Guy Huber is a Senior Lecturer at Oxford Brookes Business School. His primary research interests centre on issues of discourse, power, ethics, identity, embodiment, sensemaking, autoethnography and reflexivity.

d.knights@lancaster.ac.uk; ghuber@brookes.ac.uk
