Uniac - February 2024

Virtual Brochure – February 2024

Risks of AI for HEIs
Marking the Turing Test

The Turing test assesses a machine's ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human. In the test, a human evaluator engages in a natural language conversation with two other parties: a human and a machine designed to generate human-like responses. If, during a text-based conversation, the evaluator in the diagram below cannot reliably tell the machine from the human (Turing originally suggested that the machine would convince a human 30% of the time after five minutes of conversation), the machine is said to have passed the test.

The Turing test is frequently used to measure the progress of artificial intelligence. However, it has been criticised for being too anthropocentric and too easy to game. Nevertheless, it remains a popular measure of artificial intelligence.

There are some key pointers that academics can look out for in responses to try to identify the use of AI models:

1. Consistency: AI-generated responses are often highly consistent in style and tone, struggling to mimic the natural syntax and grammatical flow produced by individuals. They may also repeat certain phrases or use specific vocabulary more frequently than a human would.

2. Unusual responses: AI models may generate responses that are technically correct but sound unusual or unnatural to a human ear. Humans may also detect errors or inconsistencies in the AI-generated responses.

3. Speed and volume: AI models can generate responses very quickly and in large volumes. If a response is generated almost instantly, or if a large number of responses is generated in a short period of time, this may suggest that they were generated by AI.

4. Knowledge limitations: AI models may struggle with complex or abstract topics that require nuanced understanding, creativity, or critical thinking. If an answer seems to lack depth or originality, or if it relies heavily on surface-level information, this may suggest that it was generated by AI.

Alternatively, providers could adapt their exam strategies, using one or more of the following:

- Diagrammatic or visual questions that cannot be used as an input to AI language models.
- Live examinations and presentations, enhanced by follow-up questions which limit the use of AI.
- Ensuring examinations focus on the application of learnt knowledge and references to course study rather than the regurgitation of information.

Some in the sector advocate for further 'gamification'8 of the higher education experience, but in some respects this is a double-edged sword. The key aspects of gamification (competition and reward) are what drive students to use tools such as AI language models in the first instance.

8 Gamification is the strategic attempt to enhance systems, services, organisations, and activities by creating experiences similar to those of playing games, in order to motivate and engage users.
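The "consistency" pointer above (repeated phrases and vocabulary) can be illustrated with a minimal sketch in Python. The function name and the repeated n-gram ratio are illustrative assumptions, not an established detection method, and a signal like this should never be used on its own to conclude that AI was involved.

```python
from collections import Counter

def repeated_ngram_ratio(text: str, n: int = 3) -> float:
    """Crude 'consistency' heuristic: the share of word n-grams that recur.

    A high ratio of repeated phrases is one weak signal of machine-generated
    text; it is illustrative only, not a reliable detector.
    """
    words = text.lower().split()
    ngrams = [tuple(words[i:i + n]) for i in range(len(words) - n + 1)]
    if not ngrams:
        return 0.0
    counts = Counter(ngrams)
    # Count every occurrence of any n-gram that appears more than once.
    repeated = sum(c for c in counts.values() if c > 1)
    return repeated / len(ngrams)

# Text that reuses the same phrase scores much higher than varied prose.
sample = ("the model is very helpful and the model is very helpful "
          "because the model is very helpful")
print(round(repeated_ngram_ratio(sample), 2))  # prints 0.6
```

In practice any such threshold would need calibration against genuine student writing, which also repeats phrases, so this sketch is best read as a way of making the "consistency" pointer concrete rather than as a usable tool.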

RkJQdWJsaXNoZXIy NTI5NzM=