Uniac - February 2024

… on their individual needs and interests. This can help students to learn more effectively.
2. Automated tasks: AI language models can be used to automate tasks such as grading papers, providing customer service, and giving feedback to students. This can free up staff time for other tasks.
3. Research: AI language models can be used to conduct research, such as analysing large amounts of data or generating new hypotheses. This can help providers to advance knowledge and understanding.
4. Innovation: AI language models can be used to develop new products and services, for example by generating new ideas or by creating new ways to communicate. This can help providers to stay ahead of the curve and to remain competitive.
5. Enhanced communication: AI language models can be used to enhance communication between providers and their stakeholders, for example by providing real-time updates or by generating personalised messages. This can help to build trust and relationships.

Risks of AI

There are also some risks associated with the use of AI language models for providers, including:
1. Data privacy and security: AI language models are trained on massive amounts of data, which could pose privacy and security risks if that data is accessed by unauthorised individuals.
2. Bias: AI language models are trained on data that may be biased, which could lead to the generation of biased text (for example, GPT-3 is based on textual data taken from the internet, where progressive viewpoints are often more vocally represented, so these viewpoints may reappear within the responses produced).
3. Unethical use: AI language models could be used for unethical purposes, such as generating fake news or propaganda, or enabling academic misconduct.
4. Misuse: AI language models could be misused, for example by generating hate speech or by creating deepfakes.

It is important for providers to be aware of these risks and to take steps to mitigate them. For example, providers can:
- Use AI language models in a secure environment.
- Train AI language models on data that is not biased.
- Monitor the use of AI language models to identify any potential risks.
- Develop ethical guidelines for the use of AI language models.
- Educate staff and students about the risks of AI language models.

Marking the Turing test

One of the common current concerns in the sector is that academics marking essays will now need to ‘mark’ the Turing test and assess whether AI language models have been used to generate a response. The Turing test is a test of a machine’s …
