Virtual Brochure – March 2025

Commentary on significant and increasing risks

1.3 Digital disruption, new technology and AI

The growing understanding of the risks and opportunities from digital disruption, particularly AI, has resulted in this becoming the fastest-rising strategic risk area in the 2025 survey. Respondents said they expect the risks from AI to continue to increase and to become the second most significant strategic risk to their business by 2028.

The ECIIA report comments that the speed of integration of generative AI tools into everyday software applications, and their rapid adoption by the general population, means that all businesses need a clear strategy about how they intend to use AI, now and in the longer term. Without an agreed strategy, effective governance and controlled change management, there is a risk of fragmented, insecure and inefficient adoption of AI tools. The speed of uptake has been such that some businesses have had to retrospectively map where AI tools are being used in order to identify and manage risks, while others have prohibited the use of generative AI until its reliability and security can be more effectively evaluated. Businesses also need to consider how AI use in the wider environment could affect them and how those risks will be managed.

Considerations for HE

When it comes to AI, institutions have an advantage in that the UK has adopted principles-based, non-statutory guidance around the use of AI rather than the legislative and regulatory approach adopted in the EU. As noted in last year's report, AI has the potential to automate, personalise and analyse a huge array of activities supporting education, research and professional services. However, to date there remains little recognition of the risks from AI or other disruptive technologies in institutional strategic risk registers.
While institutions are developing their thinking about strategic opportunities, use cases, security, governance and training, much of the risk focus to date has been on preventing misuse of AI by students for assessment purposes. We encourage institutions to consider:

- How effectively governed and coordinated are decisions about the use of AI, how is change managed, and how are benefits assessed?
- How do you gain sufficient assurance about responsible and ethical use of AI?
- Where and how are AI risks (both opportunities and threats) assessed and owned? Does risk assessment address external threats and opportunities across suppliers, partners and competitors, as well as internal adoption and use risks?
- How are the needs to upskill staff, governors and students to use AI tools safely, responsibly and ethically being identified and addressed?