AI didn’t introduce new risks - it amplified old ones


Artificial Intelligence (AI) has become the buzzword of the decade. From product launches to marketing campaigns, the term “AI” is being liberally applied, often inaccurately, to anything remotely technical. The fascination is understandable: AI promises efficiency, innovation, and profitability, but with this fascination comes a wave of overcomplication. 

Higher education is no different. Vice-Chancellors are being grilled by their councils, academics are debating its impact on assessment integrity, and IT leaders are being asked if the university is ‘AI ready’.

AI has sparked fresh questions about governance, risk, and accountability. In many cases, however, it’s simply made the invisible visible.

The myth of new risk 

Professionals often frame AI as a frontier technology with entirely new risks. This framing is not only misleading; it’s counterproductive. It creates fear, uncertainty, and doubt (FUD) that can paralyse organisations and prevent them from adopting AI in meaningful ways.

At its core, AI is a model built on data, governed by algorithms, and deployed through software systems. The risks it carries are well known and well documented across three primary domains:

  • Cyber security - AI systems are vulnerable to the same threats as any other digital system, including unauthorised access, data breaches, and adversarial attacks
  • Privacy - AI often processes personal data, raising concerns about consent, data minimisation, and lawful use
  • Data governance - The quality, provenance, and lifecycle of data used in AI models are critical to ensuring fairness, accuracy, and accountability. 

These three risk domains are underpinned by broader business risks, including legal, regulatory compliance, and operational risk. While they may appear rebranded in the context of AI, they remain legacy risks at their core. What AI does is amplify their impact, not redefine their nature.

Put simply, AI is not introducing new risks. It is magnifying the risks universities have always struggled with: data protection, system availability, privacy, intellectual property, access control, and governance.

Universities: The perfect storm for data risk

Higher education is already a high-value target for cyber threats, as universities hold some of the richest and most varied data sets in the world:

  • Student records (sensitive PII, health data, visa information)
  • Research data (intellectual property, often linked to commercial partners or defence)
  • Financial and payroll systems (increasingly under scrutiny)
  • Critical infrastructure (facilities management, labs, even hospitals in some cases).

The emergence of AI hasn’t changed that fact. What it has done is increase the speed, scale, and subtlety with which data can be exploited if governance is weak.

Speed is the real game-changer 

The true differentiator with AI is velocity. AI systems can process vast amounts of data in real time, make decisions faster than humans, and scale across systems with minimal friction. This acceleration means that any weaknesses in cyber security, privacy, or data governance can be exploited faster and more broadly. 

For example, a poorly governed dataset used to train an AI model can produce biased outcomes. That’s not a new risk; it’s a data quality issue. But when that model is deployed across systems making decisions, the impact is magnified. The risk isn’t new; the scale and speed of its consequences are.
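
To put rough numbers on that, here’s a minimal back-of-the-envelope sketch in Python. The approval rates and decision volumes are illustrative assumptions, not data; the point is that the bias gap stays constant while the number of people affected scales with throughput.

```python
# Illustrative assumption: an approval skew inherited from a
# poorly governed training set. The flaw is identical in both
# scenarios below; only the decision volume changes.
approval_rate = {"group_a": 0.80, "group_b": 0.55}
bias_gap = approval_rate["group_a"] - approval_rate["group_b"]

# A human committee vs an automated, AI-driven pipeline.
for decisions_per_year in (200, 200_000):
    extra_adverse = bias_gap * decisions_per_year
    print(f"{decisions_per_year:>7,} decisions/year -> "
          f"~{extra_adverse:,.0f} additional adverse outcomes for group B")
```

The same data quality flaw that once affected a few dozen people a year can affect tens of thousands once it is automated.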

Governance is the real battleground

Universities don’t need to reinvent the wheel or create elaborate new committees in response to AI. Instead, the answer lies in extending their established cyber risk and governance frameworks, ensuring AI is simply integrated as another dimension of familiar, longstanding risks.

This means:

  1. Policy alignment - Extending acceptable use, privacy, and academic integrity policies to explicitly include AI.
  2. AI risk and regulatory frameworks - Adopting recognised approaches such as the EU AI Act, the NIST AI Risk Management Framework, or ISO/IEC 42001, and adapting them to the university context.
  3. Data controls first - Ensuring strong data classification, access management, and encryption. If the fundamentals aren’t in place, AI will simply magnify the weaknesses (see the sketch after this list).
  4. Visibility of use - Monitoring and managing how AI is being used across teaching, research, and administration, and watching for signs of overuse.
  5. Research ethics and IP governance - Ensuring research data feeding AI models is treated with the same rigour as clinical trial data or defence research.
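
As a minimal sketch of what ‘data controls first’ might look like in code, the snippet below gates records on their classification label before anything is sent to an external AI service. The labels, threshold, and function names are hypothetical, illustrating the principle rather than prescribing a standard.

```python
from enum import IntEnum

class Classification(IntEnum):
    """Hypothetical data classification levels for a university."""
    PUBLIC = 0
    INTERNAL = 1
    CONFIDENTIAL = 2   # e.g. student records, unpublished research
    RESTRICTED = 3     # e.g. health, visa, or defence-linked data

# Assumed policy: nothing above INTERNAL leaves the university boundary.
MAX_LEVEL_FOR_EXTERNAL_AI = Classification.INTERNAL

def can_send_to_external_ai(label: Classification) -> bool:
    """Gate applied before a dataset is shared with an external AI tool."""
    return label <= MAX_LEVEL_FOR_EXTERNAL_AI

for label in Classification:
    verdict = "allow" if can_send_to_external_ai(label) else "block"
    print(f"{label.name:<12} -> {verdict}")
```

If classification labels don’t exist or aren’t trusted, a gate like this has nothing to act on, which is exactly why the fundamentals have to come first.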

The overarching message for university leaders

So, what should Vice-Chancellors and Technology and Data Leaders take away?

  • Don’t overreact to the hype. The risks are not new. Cyber security in higher education remains about data security, access, governance, and culture.
  • Don’t underreact either. AI governance is not optional. Universities that ignore it put student trust, research funding, and their reputation at risk.
  • Stay disciplined. Apply the principles you already know. Embed AI into existing risk management, rather than reinventing the wheel.

Tip: The winners in this space will be the universities that resist panic, cut through the noise, and quietly but effectively embed AI into their existing governance machinery.


When the boardroom inevitably asks, ‘What are the new AI risks we need to worry about?’ the answer is straightforward: none. The risks are the same, but the consequences are greater.

So, are you ready?

BDO’s approach doesn’t start with fear; it starts with structure. Our AI Risk Methodology and AI Readiness assessments align with recognised frameworks like ISO 42001 and NIST’s AI RMF, but go further by adapting to each client’s unique operating context, including current maturity, regulatory exposure, and business objectives.

This methodology is particularly relevant given the findings in our 2025 Global Risk Landscape Report, which identifies the areas where AI is expected to have the most impact over the next 12 months. Domains such as cyber security, compliance monitoring, and predictive analytics align closely with BDO’s emphasis on structured, scalable governance, reinforcing the importance of proactive risk management in these high-impact areas.

How BDO can help

AI hasn’t rewritten the risk playbook; it’s simply turned up the volume. At BDO, we help organisations navigate this amplified landscape by strengthening the foundations that AI builds upon.

We can help assess the readiness of the core controls that should be in place now, and guide you on your AI journey as you take on more complex use cases.

Our risk advisory services (RAS) team supports clients in embedding AI risk into broader enterprise risk frameworks. This includes conducting AI risk assessments, aligning governance with regulatory standards, embedding governance into every stage of the AI lifecycle, and establishing oversight structures like ethics committees and cross-functional AI teams.

BDO provides a comprehensive approach to AI risk, combining technical enablement with strategic governance to help organisations adopt AI responsibly and confidently.

Key takeaways

AI risk in universities is amplified, not new
  • Artificial Intelligence doesn’t introduce novel risks; it magnifies existing ones, such as cyber security, data governance, and privacy. Universities must recognise that AI simply accelerates the impact of legacy vulnerabilities, making strong governance more critical than ever.
Higher education faces a perfect storm of data risk
  • Universities hold diverse and sensitive datasets, from student records to research IP, making them prime targets for cyber threats. AI increases the speed and subtlety of potential exploitation, underscoring the need for robust data controls and visibility.
Effective AI governance starts with existing frameworks
  • Rather than reinventing governance structures, universities should extend current cyber risk and compliance frameworks to include AI. Aligning with standards like ISO 42001 and NIST AI RMF ensures responsible adoption without falling into hype-driven overreaction.

Contact our risk advisory services team to discuss your options.
