A strategic approach to managing cyber-related AI risks
Generative AI has accelerated far beyond expert predictions, compressing decades of anticipated progress into just a few years. According to our global research, 62 per cent of business leaders expect AI to be embedded across all areas of their organisations by 2028. At the same time, the BDO-sponsored IDC Security Survey found that nearly half of organisations are already deploying AI-specific security tools, a sign of growing concern around cyber security, privacy, and governance risks.
We’re at a pivotal moment where AI is now in everyone’s hands. In our latest global research on technology disruption, cyber security and data risk ranked as the number one concern for leaders in 2025. While AI adoption is expected to become pervasive by 2028 (a view shared by 70 per cent of Australian leaders), only 55 per cent of leaders globally say their organisations are prepared to manage the cyber security and data risks that come with this shift. This gap shows that many organisations are accelerating AI adoption without fully addressing the security and data risks that could undermine trust, resilience, and business performance.
AI’s risks and challenges
Generative AI is amplifying many of the cyber security and data risks Australian organisations have faced for years, including social engineering, data leakage, governance gaps, and weak oversight of automated decision-making. Nearly half of the surveyed leaders report that data security concerns are holding back further investment in AI.
The risks are real and far-reaching, as demonstrated by Australia’s Robodebt scheme, an automated government program that wrongly pursued thousands of welfare recipients for debts they did not owe. The fallout included financial hardship, emotional distress, and ultimately a Royal Commission. Although Robodebt did not involve modern AI technologies such as machine learning or adaptive intelligence, it offers important lessons on the dangers of poorly governed automated decision-making. The case underscores the legal, ethical, and human risks that can emerge when AI is used without appropriate oversight and accountability.
Broader challenges identified in our global report include expanding attack surfaces, increased exposure to data breaches, and ethical concerns such as bias and over-automation. As AI technologies continue to evolve, often outpacing regulatory frameworks, Australia is responding through reforms to the Privacy Act 1988, the introduction of the Digital ID Act 2024, and proposed guardrails for high-risk AI use. These developments signal a growing expectation that organisations will take a proactive, risk-based approach to securing and governing AI systems.
Building a cohesive AI governance strategy
Australian organisations should establish cross-functional oversight and structured governance that keeps pace with both technology and regulation. A cohesive AI governance strategy includes:
- Mapping Australian and global regulations to maintain compliance across jurisdictions (e.g. Privacy Act 1988, SOCI Act 2018, OAIC guidance, ISO/IEC 42005:2025)
- Aligning AI initiatives with organisational priorities, including customer trust, operational efficiency, risk resilience and innovation
- Embedding AI into enterprise-wide strategies to avoid fragmented deployments and reduce siloed risk exposure, ensuring investment directly supports strategic outcomes
- Integrating cyber security and data safeguards throughout the AI lifecycle to protect against threats such as data leakage, social engineering, and model exploitation (a minimal data-leakage sketch follows this list).
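One way to picture the data-leakage safeguard is a redaction layer that strips recognisable personal information before a prompt leaves the organisation. The Python sketch below is a minimal, hypothetical illustration: the patterns and the `redact_pii` helper are our own assumptions, not a reference to any specific product, and a production deployment would rely on vetted data-loss-prevention tooling rather than ad-hoc regular expressions.

```python
import re

# Hypothetical patterns for common Australian identifiers; illustrative only.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE_AU": re.compile(r"(?:\+61|0)[2-478](?:[ -]?\d){8}"),
    "TFN": re.compile(r"\b\d{3}[ -]?\d{3}[ -]?\d{3}\b"),  # Tax File Number-style digits
}

def redact_pii(text: str) -> str:
    """Replace recognisable personal information with placeholder tokens
    before the text is sent to an external AI service."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = "Contact Jane on 0412 345 678 or jane@example.com about her refund."
print(redact_pii(prompt))
# Contact Jane on [PHONE_AU REDACTED] or [EMAIL REDACTED] about her refund.
```

In practice, a control like this sits alongside access controls and audit logging, with complementary monitoring on the output side of the model.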
AI should not sit outside existing governance structures; it should be integrated into enterprise risk, compliance, and cyber security frameworks, with clear accountability and transparency built in. The Office of the Australian Information Commissioner (OAIC) recommends conducting Privacy Impact Assessments (PIAs) for AI systems and ensuring transparency, explainability, and appropriate human oversight in how these technologies are developed and used. These measures are essential not only for meeting regulatory obligations but also for building and maintaining data trust, ensuring that individuals and stakeholders have confidence in how their information is used, protected, and governed.
We recommend an AI strategy, a designated AI lead, and a governance committee with cross-functional representation. This ensures AI oversight is not a single point of failure but a structured, strategic capability that strengthens trust, compliance, and resilience across the organisation.
Practical steps for Australian organisations
According to BDO’s Global Risk Landscape 2025 report, AI is expected to have the greatest impact in cyber security (55 per cent), compliance monitoring (52 per cent), and supply chain management (50 per cent). As adoption accelerates across these functions, organisations face a corresponding rise in cyber security and data risk.
Risk often stems from a lack of understanding. By fostering a culture of awareness, organisations can turn unknown risks into manageable challenges. Our research found that nearly half of organisations are already providing employee training on secure and ethical AI use, and 46 per cent are deploying AI-specific security tools to strengthen their defences.
To address today’s risks, Australian organisations should:
- Build employee awareness and skills to use AI securely, ethically, and responsibly
- Invest in advanced detection and prevention tools to mitigate AI-related threats such as data leakage, unauthorised access, and model exploitation
- Limit exposure to sensitive information through strong access controls, data minimisation, and internal governance policies
- Conduct AI and privacy impact assessments to identify, assess, and address risks early in the lifecycle
- Embed AI oversight into cyber security and data risk frameworks to ensure accountability and reduce the likelihood of unmanaged threats (see the sketch after this list).
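As a sketch of what embedding AI oversight can look like at the control level, the following hypothetical Python wrapper gates AI requests against a role-based policy and writes every decision to an audit trail. All names here (`ROLE_POLICY`, `call_ai_service`, the stubbed `query_model`) are illustrative assumptions, not a prescribed design.

```python
import datetime
import getpass
import json
import logging

# Write each decision as a JSON line so risk indicators can be derived later.
logging.basicConfig(filename="ai_audit.log", level=logging.INFO, format="%(message)s")

# Assumed policy: which roles may send which data classifications to AI tools.
ROLE_POLICY = {
    "analyst": {"public", "internal"},
    "hr_officer": {"public"},  # sensitive HR data stays out of external AI services
}

def query_model(prompt: str) -> str:
    return "stubbed response"  # stand-in for the organisation's real AI client

def call_ai_service(user_role: str, classification: str, prompt: str) -> str:
    """Authorise and log an AI request, and only then forward it."""
    allowed = classification in ROLE_POLICY.get(user_role, set())
    logging.info(json.dumps({
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "user": getpass.getuser(),
        "role": user_role,
        "classification": classification,
        "allowed": allowed,
    }))
    if not allowed:
        raise PermissionError(f"{user_role} may not send {classification} data to AI tools")
    return query_model(prompt)
```

The value of this pattern is that one record serves two audiences: security teams can block risky requests in real time, while governance committees gain an auditable trail of who sent what to which tool.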
By integrating these practical steps into a broader AI governance strategy, Australian organisations can manage risk proactively, build data trust, and strengthen both regulatory compliance and operational resilience.
Future-proofing AI adoption in Australia
AI adoption in Australia is accelerating, but so too are the cyber security and data risks that come with it. Data security concerns remain one of the biggest barriers to scaling AI. To remain competitive and resilient, organisations must focus on responsible adoption, not just rapid deployment.
Organisations that embrace a “fail fast, learn faster” mindset, supported by structured governance and continuous risk monitoring, are better positioned for successful AI integration. As technology evolves, it’s essential to:
- Set clear objectives for AI use, ensuring alignment with business strategy and risk appetite
- Continuously monitor performance and risk indicators, including cyber security and data protection measures (a simple indicator sketch follows this list)
- Stay aligned with evolving regulatory and policy settings to ensure AI adoption remains secure, compliant, and resilient.
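To make continuous monitoring of risk indicators concrete, the minimal sketch below aggregates a JSON-lines audit trail of the kind produced in the earlier governance example into simple, trackable figures such as the rate of blocked requests. The field names are assumptions carried over from that sketch.

```python
import json

# Sample JSON-lines audit records, matching the earlier sketch's format.
sample_audit = [
    '{"role": "analyst", "classification": "internal", "allowed": true}',
    '{"role": "hr_officer", "classification": "confidential", "allowed": false}',
]

def risk_indicators(audit_lines):
    """Summarise AI usage logs into simple, trackable risk indicators."""
    events = [json.loads(line) for line in audit_lines]
    blocked = sum(1 for event in events if not event["allowed"])
    return {
        "total_requests": len(events),
        "blocked_requests": blocked,
        "block_rate": blocked / len(events) if events else 0.0,
    }

print(risk_indicators(sample_audit))
# {'total_requests': 2, 'blocked_requests': 1, 'block_rate': 0.5}
```

Trending figures like these over time, alongside conventional security metrics, gives boards an early signal when AI usage starts to outrun policy.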
By embedding these principles into an AI governance strategy, Australian organisations can build resilience, strengthen data trust, and adapt confidently to an evolving regulatory and threat landscape, turning AI risk into a strategic advantage rather than a vulnerability.
Get in touch
This Cyber Awareness Month, take a proactive step toward strengthening your organisation’s cyber resilience. BDO's cyber security team supports organisations to design and implement robust AI governance strategies, helping you manage risk, meet regulatory obligations, and future-proof your AI adoption.