Cybersecurity Risk Management in the Age of Agentic AI
- Trusted Services
This article builds on concepts introduced in our previous articles “Cyber Security Imperatives for the Board of Directors”, “Safe use of Generative AI to enhance productivity in Boards of Directors' Corporate Governance” and “The Future-Proof Board: How Embedded AI Agents Are Transforming Governance Through Friendly, Secure Digital Partnership”. Here we focus on the cybersecurity implications of this rapidly developing area.

A New Paradigm for Board Oversight
Boards today face a cybersecurity paradox. The same AI technologies promising to revolutionise governance efficiency are simultaneously being weaponised to create cyber threats at an unprecedented scale. Recent data reveals that 80% of Chief Information Security Officers now cite AI-powered attacks as their top concern - a dramatic leap from fifth place just a year ago. Meanwhile, Fortune 100 companies have tripled their AI and cybersecurity oversight disclosures, from 16% in 2024 to 48% in 2025. This convergence of AI adoption and escalating cyber risk demands that boards fundamentally rethink their approach to cybersecurity governance.
The traditional board cybersecurity playbook, focused on policies, incident response plans, and secure document management, remains necessary but insufficient. Directors must now navigate what Microsoft's 2025 Digital Defense Report characterises as a "defining moment in cybersecurity," where adversaries leverage AI to attack with both greater volume and precision than ever before. The challenge is no longer simply protecting the digital perimeter; it's managing a constantly shifting battlefield where AI serves as both weapon and shield.
The Dual-Edged Nature of AI in Cyber Risk
AI as Threat Multiplier
Cybercriminals have transformed AI from an experimental technology into operational weaponry with stunning speed. The sophistication of these attacks challenges every assumption boards have made about cybersecurity risk assessment. Warren Buffett has warned that AI-enabled scamming could become the "growth industry of all time" due to the technology's ability to create highly realistic fake content. He described AI as being similar to a "genie out of the bottle," with enormous potential for both good and harm.
Consider the emerging threat landscape:
Social Engineering at Scale: Generative AI now enables attackers to craft highly personalised phishing campaigns that adapt in real-time, bypassing traditional defences. The FBI recently confirmed that 300 companies unknowingly hired sophisticated operatives using AI-generated profiles and stolen credentials, demonstrating how AI-powered social engineering can penetrate even security-conscious organisations. For boards, this means that the human vulnerability you've addressed through awareness training is now being exploited by machines capable of learning from every failed attempt.
Executive Impersonation: Perhaps most alarming for directors is the rise of deepfake technology enabling convincing impersonation of board members and C-suite executives. These AI-generated audio and video communications can authorise fraudulent transactions, manipulate stock prices, or extract sensitive strategic information. Unlike traditional phishing emails with telltale grammatical errors, these attacks exploit the trust inherent in familiar voices and faces.
Accelerated Attack Cycles: AI has compressed attack timelines dramatically. Breakout times—the period between initial compromise and lateral movement across networks—now frequently occur in under an hour. For boards accustomed to incident response plans measured in days, this acceleration fundamentally challenges oversight assumptions about detection and containment windows.
Ransomware Evolution: AI-powered ransomware can now automatically generate malicious code variants, increasing both volume and sophistication. Industrial and healthcare organisations face particularly acute risks, as AI enables attacks that extend beyond digital systems into physical operations with implications for public safety and service continuity.
AI as Defensive Game-Changer
The encouraging counterpoint is that AI offers defenders unprecedented capabilities, if deployed strategically. Organisations using AI security tools can now identify and contain breaches in an average of 241 days, the fastest response time in nine years. Global data breach costs have declined 9% to $4.44 million, driven primarily by AI-powered defences.
Predictive Threat Detection: Machine learning algorithms analyse patterns from millions of data points to predict attacks before they occur, identifying anomalies impossible for human analysts to spot. This transforms cybersecurity from reactive incident response to proactive threat anticipation.
Automated Response at Machine Speed: AI-driven incident escalation systems can contain breaches dramatically faster than human teams. By automating lower-risk tasks such as routine monitoring and compliance checks, organisations free security teams to focus on high-priority threats requiring human judgment.
Continuous Defence Testing: Advanced organisations now deploy AI for continuous "red teaming", moving beyond annual penetration testing to perform ongoing live-system monitoring. This allows defenders to understand the hacker's perspective and allocate resources optimally, potentially outmanoeuvring attackers before strikes occur.
For boards, the strategic imperative is clear: AI in cybersecurity is not optional, but how you govern its deployment determines whether it becomes your greatest vulnerability or your most powerful defence.
Reframing Board Responsibilities: From Compliance to Cyber Resilience
Singapore's regulatory environment provides instructive guidance for this governance evolution. The Singapore Institute of Directors' Guidelines on Cyber Security Risk Management (SGP No. 16/2020) emphasises that boards must move beyond solely relying on controls and audit-based approaches. While tactical controls remain important, on their own they fail to elevate cybersecurity to a level that gives meaningful context to its business implications.
The Enterprise Risk Integration Imperative
Boards must integrate cybersecurity risk management with Enterprise Risk Management (ERM), elevating it from an Information Technology department concern to a strategic enterprise-level imperative. This integration requires:
Risk Appetite Definition: Boards must abandon the impractical "zero tolerance" risk appetite for cyber incidents. Instead, establish clear expectations for management's due diligence in creating appropriate risk management frameworks, defining roles and responsibilities, authorising investments to uplift cybersecurity capabilities, and agreeing on monitoring metrics.
Material Impact Assessment: Seventy-seven percent of boards now discuss the material and financial implications of cybersecurity incidents—up 25 percentage points from 2022. Yet many still treat cyber risk as abstract rather than translating it into business language. Directors should demand scenario analysis showing how specific cyber events would impact revenue, operations, regulatory standing, and shareholder value.
Third-Party Ecosystem Risk: AI amplifies vulnerabilities created by reliance on cloud providers, SaaS platforms, and external partners. Eighteen percent of S&P 500 companies now specifically disclose AI-related third-party and vendor risk, emphasising that strong internal safeguards cannot offset exposure if critical vendors are compromised. Boards must ensure management maps the full digital supply chain and assesses concentration risk.
The AI Governance Overlay
The introduction of AI capabilities—whether in board management platforms like Board.Vision or across enterprise operations—requires boards to overlay additional governance considerations:
Transparency and Auditability: AI systems deployed in governance contexts must remain transparent and auditable. Board.Vision addresses this by labelling AI-generated content and providing comprehensive audit trails. Directors should insist that any AI touching board materials or decision-making processes includes similar transparency mechanisms.
Human-AI Decision Boundaries: Singapore's Model AI Governance Framework emphasises that AI should augment rather than replace human judgment. Boards must clearly delineate which functions AI can automate (document summarisation, agenda preparation, compliance flagging) versus which require human decision-making (strategic choices, ethical judgments, risk tolerance setting).
Data Protection in AI Context: Singapore's Personal Data Protection Commission framework for generative AI adds governance expectations around data protection, transparency, and accountability. When board platforms leverage Large Language Models for intelligent summarisation and natural language search—as Board.Vision does—directors must understand what data trains these models, where it's stored, and how it's protected.
Agentic AI Risks: Singapore recently released draft guidelines on Securing Agentic AI, focusing specifically on autonomous systems capable of independent decision-making and goal-setting. These systems present unique risks including the "Confused Deputy" problem, where AI agents with broad privileges can leak sensitive data via automated actions. Boards deploying AI agents must ensure robust containment through least-privilege principles.
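The least-privilege containment described above can be made concrete. The sketch below is a minimal, hypothetical illustration (the agent names, permission strings, and `execute_tool_call` helper are invented for this example, not drawn from any specific platform): every action an AI agent attempts is checked against the narrow scope explicitly granted to it, so an injected or runaway request cannot reuse the agent's broader platform privileges, which is the essence of the "Confused Deputy" risk.

```python
# Hypothetical sketch of least-privilege containment for an AI agent.
# Each agent carries only the permissions it needs; everything else is
# denied by default, limiting the blast radius of a compromised agent.

from dataclasses import dataclass, field


@dataclass
class AgentScope:
    agent_id: str
    allowed_actions: set = field(default_factory=set)  # e.g. {"read:agenda"}


def execute_tool_call(scope: AgentScope, action: str, resource: str) -> str:
    """Permit the call only if it falls within the agent's granted scope."""
    permission = f"{action}:{resource}"
    if permission not in scope.allowed_actions:
        # Deny by default; in practice this denial would also be logged
        # to the audit trail for board-level accountability.
        raise PermissionError(
            f"Agent {scope.agent_id} denied '{permission}' (least privilege)")
    return f"{scope.agent_id} performed {permission}"


# A summarisation agent is granted read access to agendas, nothing more.
summariser = AgentScope("summary-agent", {"read:agenda"})
print(execute_tool_call(summariser, "read", "agenda"))   # permitted

try:
    # An injected instruction tries to make the agent exfiltrate data.
    execute_tool_call(summariser, "send", "email")
except PermissionError as err:
    print("blocked:", err)                               # contained
```

The design point for boards is the default: access is denied unless explicitly granted, rather than granted unless explicitly denied.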
Practical Board Actions: Questions Directors Should Ask
Effective cyber risk oversight in the AI era requires boards to ask more sophisticated questions of management. Based on guidance from the National Institute of Standards and Technology (NIST), the Singapore Cyber Security Agency, and leading governance frameworks, directors should focus on:
Governance Structure Questions
Who owns cybersecurity accountability? Is there clear ownership for measurement, operations, and planning? Does the CISO report directly to the CEO rather than the CIO, ensuring executive leadership receives an unvarnished view of cyber risk?
How does AI-specific cyber risk integrate into our ERM framework? Does management distinguish between traditional cyber threats and AI-amplified risks? Are we tracking AI-powered attacks as a separate risk category?
What is our board's AI literacy level? Seventy percent of directors surveyed have received cybersecurity education, but how many understand AI-specific threats and defences? AI-illiterate directors increasingly represent a liability.
Technical Capability Questions
Where are we on the journey to Zero Trust Architecture (ZTA)? Zero Trust, operating on "never trust, always verify" principles, has become a strategic imperative for managing AI-era threats. Is management implementing the framework's core elements: verify explicitly, use least-privilege access, and assume breach?
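The three Zero Trust elements can be pictured as a single access decision. The sketch below is an illustrative simplification (the `Request` fields and risk threshold are assumptions for this example, not any product's actual policy engine): every request is evaluated on identity, device posture, and role, regardless of where on the network it originates.

```python
# Illustrative sketch of a Zero Trust access decision: no implicit trust
# from network location; every request is verified explicitly.

from dataclasses import dataclass


@dataclass
class Request:
    user_mfa_verified: bool      # verify explicitly: strong authentication
    device_compliant: bool       # verify explicitly: managed, patched device
    role_permits_resource: bool  # use least-privilege access
    session_risk_score: float    # assume breach: continuous risk evaluation


def zero_trust_decision(req: Request) -> str:
    if not (req.user_mfa_verified and req.device_compliant):
        return "deny"            # identity and device must both check out
    if not req.role_permits_resource:
        return "deny"            # least privilege: role must permit access
    if req.session_risk_score > 0.7:
        return "step-up-auth"    # assume breach: re-verify on anomaly
    return "allow"


print(zero_trust_decision(Request(True, True, True, 0.1)))   # allow
print(zero_trust_decision(Request(True, True, True, 0.9)))   # step-up-auth
```

Note that even a fully authenticated, authorised session can be challenged again when behaviour looks anomalous; that is the "assume breach" element in practice.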
How are we protecting against AI-powered social engineering? What specific defences address deepfake executive impersonation, AI-generated phishing, and automated vulnerability exploitation?
Are we using AI defensively? What AI-powered threat intelligence, automated response, and continuous monitoring capabilities have we deployed? How do we measure their effectiveness?
Preparedness and Response Questions
Have we tested our incident response under AI-attack scenarios? Fifty-nine percent of companies now disclose cyber preparedness including tabletop exercises and simulations. Do these exercises include AI-specific attack vectors like automated lateral movement and real-time adaptive malware?
What is our board's personal cyber exposure? Seventy percent of Australian senior executives were targeted by cyber-attacks in the last 18 months. Directors have access to sensitive company data, influence over major decisions, and public profiles making them easy to research and impersonate. Yet many remain disconnected from daily cybersecurity practices. What specific protections secure board members' personal devices and communications?
How quickly can we detect and contain AI-powered attacks? Given that breakout times now frequently occur in under an hour, what is our mean time to detect, respond, and recover? How does this compare to industry benchmarks?
Strategic and Forward-Looking Questions
How does our cybersecurity strategy enable business strategy? Does management view cybersecurity as an inhibitor or an enabler of business objectives? In competitive markets, can robust cyber governance provide strategic advantage through enhanced stakeholder trust?
Are we prepared for quantum computing threats? Singapore's Cyber Security Agency recently released draft quantum computing guidelines addressing the potential threat to current public-key cryptography. Is management developing quantum-readiness strategies for long-term data confidentiality and integrity?
What external frameworks guide our approach? Do we adhere to recognised frameworks such as NIST Cybersecurity Framework, ISO 27001, or Singapore's Cyber Security Trustmark? These frameworks demonstrate to stakeholders that governance practices meet established standards.
The Board.Vision Example: Secure Platforms as Risk Mitigation
Our first article pointed out that traditional board communication methods - printed documents and email attachments - pose significant security risks despite feeling comfortable to directors. This observation becomes even more critical in the AI era, where attackers use machine learning to identify patterns in email traffic, intercept sensitive communications, and exploit document-handling vulnerabilities.
Modern board management platforms like Board.Vision address multiple cyber risk dimensions simultaneously:
Encrypted Infrastructure: Role-based permissions ensure access to sensitive information is well-managed and controlled. Data encryption at rest and in transit safeguards confidential information.
Comprehensive Audit Trails: Complete, time-stamped records of board activities track decisions, votes, and actions, ensuring transparency and verifiable records for legal and regulatory compliance. In an AI-enhanced environment where automated actions occur alongside human decisions, audit trails become essential for accountability.
AI-Enhanced Security: The platform's AI capabilities (intelligent summarisation, natural language search, contextual insights) operate within the secure environment rather than requiring directors to use external GenAI tools that might leak sensitive data. This addresses a growing concern: 20% of S&P 500 companies now cite data leakage during GenAI use as a major risk.
Institutional Knowledge Protection: By preserving board history and decisions in a secure, searchable repository, the platform protects against knowledge loss during leadership transitions - a cyber risk often overlooked in traditional security frameworks.
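To illustrate why audit trails can be relied upon for accountability, the sketch below shows one generic, tamper-evident design (a hypothetical example, not Board.Vision's actual implementation): each time-stamped entry is chained to the hash of the previous entry, so any later alteration of the history is detectable.

```python
# Generic sketch of a tamper-evident audit trail: entries are hash-chained,
# so modifying or reordering past records invalidates the chain.

import hashlib
import json
from datetime import datetime, timezone


def append_entry(trail: list, actor: str, action: str) -> dict:
    """Record an action, chaining it to the previous entry's hash."""
    prev_hash = trail[-1]["hash"] if trail else "genesis"
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "actor": actor,          # human director or AI agent
        "action": action,
        "prev_hash": prev_hash,
    }
    # The entry's hash covers its content plus the previous hash.
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    trail.append(entry)
    return entry


def verify_trail(trail: list) -> bool:
    """Re-derive every hash; any edit to past entries breaks the chain."""
    prev_hash = "genesis"
    for entry in trail:
        body = {k: v for k, v in entry.items() if k != "hash"}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev_hash"] != prev_hash or entry["hash"] != expected:
            return False
        prev_hash = entry["hash"]
    return True


trail = []
append_entry(trail, "director-a", "approved resolution")
append_entry(trail, "ai-agent", "generated meeting summary")
print(verify_trail(trail))           # True: history intact
trail[0]["action"] = "tampered"
print(verify_trail(trail))           # False: alteration detected
```

The governance value is that AI-generated actions and human decisions sit in the same verifiable record, which is exactly the accountability property directors should ask vendors to demonstrate.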
For boards, the selection of governance technology is itself a cybersecurity risk decision. Platforms that integrate security foundations with AI capabilities, rather than bolting AI onto insecure legacy systems, represent the governance infrastructure appropriate for the intelligence age.
From Risk Management to Cyber Resilience
The ultimate goal of board cybersecurity oversight is not to eliminate risk, an impossible standard, but to build cyber resilience: the organisational capacity to anticipate, withstand, recover from, and adapt to cyber incidents while continuing to deliver on strategic objectives.
McKinsey's recent research with the National Association of Corporate Directors emphasises that cybersecurity is now recognised as a driver of competitive advantage and critical-asset protection, not merely an investment in avoiding loss. This shift is being accelerated by rapid AI adoption and by boards taking on more oversight responsibilities.
Building cyber resilience in the AI era requires boards to:
Embrace the Dual Nature of AI: Recognise that AI simultaneously represents your greatest cyber threat and your most powerful defence. The question is not whether to deploy AI, but how to govern its deployment to maximise defensive capabilities while minimising attack surface.
Invest Strategically: If only 5-10% of your technology budget goes to cybersecurity, you're probably not doing enough. CISOs expect cyber budgets to grow approximately 10% this year, with particular increases in threat intelligence and application security. Boards should ensure investments align with AI-era threat priorities.
Demand Continuous Improvement: The threat landscape evolves constantly. Boards must establish regular policy review cycles (ideally quarterly) to ensure frameworks remain current. Seventy-two percent of directors now attend individual cybersecurity education activities—up from 49% in 2022. This learning must be ongoing, not episodic.
Foster Cross-Functional Governance: Effective cyber resilience requires coordination across IT, security, legal, risk, and business units. Establish cross-functional steering committees with executive sponsorship and clear accountability. In Singapore's context, consider how cyber governance cascades to regional subsidiaries.
Align with Regulatory Evolution: Singapore's proactive regulatory approach—including the Cybersecurity Act 2018, the Personal Data Protection Act, and emerging AI governance frameworks—increasingly represents the expected standard of care. Directors who ignore these guidelines do so at their organisation's peril, as regulatory expectations solidify and enforcement intensifies.
Conclusion: Board Leadership in the Intelligence Age
The convergence of AI capabilities and escalating cyber threats marks an inflection point in corporate governance. Boards can no longer delegate cybersecurity to the technology department or treat it as a compliance checkbox. As Singapore's Institute of Directors emphasises, directors have fiduciary responsibilities to stakeholders and must adopt baseline cyber security good practices to safeguard organisational sustainability.
The path forward requires boards to:
Elevate cybersecurity from operational concern to strategic imperative, integrating it fully with enterprise risk management
Develop AI literacy sufficient to distinguish between genuine defensive capabilities and vendor hype
Demand transparency and auditability in any AI systems touching governance processes
Model secure practices by using modern board platforms that embed security and AI capabilities within appropriate governance frameworks
Ask sophisticated questions that hold management accountable for both traditional cyber hygiene and AI-era threat preparedness
View cyber resilience as competitive advantage rather than merely cost of doing business
Our earlier article established that boards must recognise the sensitivity of board-related information and the risks of traditional handling methods. Our recent work on AI agents demonstrated how intelligent assistance can make governance both more accessible and more secure.
This new imperative combines both: cybersecurity risk management in the AI era requires boards to embrace secure, intelligent platforms while maintaining human judgment, transparency, and accountability at the centre of governance.
The organisations that will thrive are those whose boards recognise that cybersecurity is no longer just an IT issue but a core business risk demanding active engagement and oversight. The financial and regulatory consequences of inadequate security measures have made cybersecurity a cornerstone of corporate governance and board responsibility. In Singapore's competitive regional market, demonstrating responsible cyber and AI governance builds stakeholder confidence and creates tangible competitive advantage.
The future belongs to boards that transform AI from a potential vulnerability into a powerful ally in protecting organisational value. That transformation begins with leadership that challenges comfortable assumptions, demands evidence-based risk assessment, and commits to continuous learning in a rapidly evolving threat landscape. The question for directors is not whether AI will reshape cybersecurity risk, but whether your board will lead that transformation or be overtaken by it.
