
A New Paradigm for Board Oversight

Boards today face a cybersecurity paradox. The same AI technologies promising to revolutionise governance efficiency are simultaneously being weaponised to create cyber threats at an unprecedented scale. Recent data reveals that 80% of Chief Information Security Officers now cite AI-powered attacks as their top concern - a dramatic leap from fifth place just a year ago. Meanwhile, Fortune 100 companies have tripled their AI and cybersecurity oversight disclosures, from 16% in 2024 to 48% in 2025. This convergence of AI adoption and escalating cyber risk demands that boards fundamentally rethink their approach to cybersecurity governance.​

 

The traditional board cybersecurity playbook—focused on policies, incident response plans, and secure document management—remains necessary but insufficient. Directors must now navigate what Microsoft's 2025 Digital Defense Report characterises as a "defining moment in cybersecurity," where adversaries leverage AI to attack with both greater volume and precision than ever before. The challenge is no longer simply protecting the digital perimeter; it's managing a constantly shifting battlefield where AI serves as both weapon and shield.


The Dual-Edged Nature of AI in Cyber Risk

AI as Threat Multiplier

Cybercriminals have transformed AI from an experimental technology into operational weaponry with stunning speed. The sophistication of these attacks challenges every assumption boards have made about cybersecurity risk assessment. Warren Buffett has warned that AI-enabled scamming could become the "growth industry of all time" due to the technology's ability to create highly realistic fake content. He described AI as being similar to a "genie out of the bottle," with enormous potential for both good and harm.

 

Consider the emerging threat landscape:

Social Engineering at Scale: Generative AI now enables attackers to craft highly personalised phishing campaigns that adapt in real time, bypassing traditional defences. The FBI recently confirmed that 300 companies unknowingly hired sophisticated operatives using AI-generated profiles and stolen credentials—demonstrating how AI-powered social engineering can penetrate even security-conscious organisations. For boards, this means that the human vulnerability you've addressed through awareness training is now being exploited by machines capable of learning from every failed attempt.

 

Executive Impersonation: Perhaps most alarming for directors is the rise of deepfake technology enabling convincing impersonation of board members and C-suite executives. These AI-generated audio and video communications can authorise fraudulent transactions, manipulate stock prices, or extract sensitive strategic information. Unlike traditional phishing emails with telltale grammatical errors, these attacks exploit the trust inherent in familiar voices and faces.​

 

Accelerated Attack Cycles: AI has compressed attack timelines dramatically. Breakout times—the period between initial compromise and lateral movement across networks—now frequently occur in under an hour. For boards accustomed to incident response plans measured in days, this acceleration fundamentally challenges oversight assumptions about detection and containment windows.​

 

Ransomware Evolution: AI-powered ransomware can now automatically generate malicious code variants, increasing both volume and sophistication. Industrial and healthcare organisations face particularly acute risks, as AI enables attacks that extend beyond digital systems into physical operations with implications for public safety and service continuity.​


AI as Defensive Game-Changer

The encouraging counterpoint is that AI offers defenders unprecedented capabilities—if deployed strategically. Organisations are now identifying and containing breaches in an average of 241 days, the fastest response time in nine years, and global data breach costs have declined 9% to US$4.44 million, driven primarily by AI-powered defences.

 

Predictive Threat Detection: Machine learning algorithms analyse patterns from millions of data points to predict attacks before they occur, identifying anomalies impossible for human analysts to spot. This transforms cybersecurity from reactive incident response to proactive threat anticipation.​

Automated Response at Machine Speed: AI-driven incident escalation systems can contain breaches exponentially faster than human teams. By automating lower-risk tasks such as routine monitoring and compliance checks, organisations free security teams to focus on high-priority threats requiring human judgment.​

Continuous Defence Testing: Advanced organisations now deploy AI for continuous "red teaming", moving beyond annual penetration testing to ongoing live-system monitoring. This allows defenders to understand the hacker's perspective and allocate resources optimally, potentially outmanoeuvring attackers before strikes occur.

 

For boards, the strategic imperative is clear: AI in cybersecurity is not optional, but how you govern its deployment determines whether it becomes your greatest vulnerability or your most powerful defence.


Reframing Board Responsibilities: From Compliance to Cyber Resilience

Singapore's regulatory environment provides instructive guidance for this governance evolution. The Singapore Institute of Directors' Guidelines on Cyber Security Risk Management (SGP No. 16/2020) emphasises that boards must move beyond solely relying on controls and audit-based approaches. While tactical controls remain important, they fail to elevate cybersecurity to a level where there is meaningful context to the business implications.​


The Enterprise Risk Integration Imperative

Boards must integrate cybersecurity risk management with Enterprise Risk Management (ERM), elevating it from an Information Technology department concern to a strategic enterprise-level imperative. This integration requires:​

 

Risk Appetite Definition: Boards must abandon the impractical "zero tolerance" risk appetite for cyber incidents. Instead, establish clear expectations for management's due diligence in creating appropriate risk management frameworks, defining roles and responsibilities, authorising investments to uplift cybersecurity capabilities, and agreeing on monitoring metrics.​

 

Material Impact Assessment: Seventy-seven percent of boards now discuss the material and financial implications of cybersecurity incidents—up 25 percentage points from 2022. Yet many still treat cyber risk as abstract rather than translating it into business language. Directors should demand scenario analysis showing how specific cyber events would impact revenue, operations, regulatory standing, and shareholder value.​

 

Third-Party Ecosystem Risk: AI amplifies vulnerabilities created by reliance on cloud providers, SaaS platforms, and external partners. Eighteen percent of S&P 500 companies now specifically disclose AI-related third-party and vendor risk, emphasising that strong internal safeguards cannot offset exposure if critical vendors are compromised. Boards must ensure management maps the full digital supply chain and assesses concentration risk.​

The AI Governance Overlay

The introduction of AI capabilities—whether in board management platforms like Board.Vision or across enterprise operations—requires boards to overlay additional governance considerations:

 

Transparency and Auditability: AI systems deployed in governance contexts must remain transparent and auditable. Board.Vision addresses this by labelling AI-generated content and providing comprehensive audit trails. Directors should insist that any AI touching board materials or decision-making processes includes similar transparency mechanisms.​

 

Human-AI Decision Boundaries: Singapore's Model AI Governance Framework emphasises that AI should augment rather than replace human judgment. Boards must clearly delineate which functions AI can automate (document summarisation, agenda preparation, compliance flagging) versus which require human decision-making (strategic choices, ethical judgments, risk tolerance setting).​

 

Data Protection in AI Context: Singapore's Personal Data Protection Commission framework for generative AI adds governance expectations around data protection, transparency, and accountability. When board platforms leverage Large Language Models for intelligent summarisation and natural language search—as Board.Vision does—directors must understand what data trains these models, where it's stored, and how it's protected.​

 

Agentic AI Risks: Singapore recently released draft guidelines on Securing Agentic AI, focusing specifically on autonomous systems capable of independent decision-making and goal-setting. These systems present unique risks including the "Confused Deputy" problem, where AI agents with broad privileges can leak sensitive data via automated actions. Boards deploying AI agents must ensure robust containment through least-privilege principles.​


Practical Board Actions: Questions Directors Should Ask

Effective cyber risk oversight in the AI era requires boards to ask more sophisticated questions of management. Based on guidance from the National Institute of Standards and Technology (NIST), the Singapore Cyber Security Agency, and leading governance frameworks, directors should focus on:​

 

Governance Structure Questions

  1. Who owns cybersecurity accountability? Is there clear ownership for measurement, operations, and planning? Does the CISO report directly to the CEO rather than the CIO, ensuring executive leadership receives an unvarnished view of cyber risk?​

  2. How does AI-specific cyber risk integrate into our ERM framework? Does management distinguish between traditional cyber threats and AI-amplified risks? Are we tracking AI-powered attacks as a separate risk category?​

  3. What is our board's AI literacy level? Seventy percent of directors surveyed have received cybersecurity education, but how many understand AI-specific threats and defences? AI-illiterate directors increasingly represent a liability.

Technical Capability Questions

  1. Where are we on the journey to Zero Trust Architecture (ZTA)? Zero Trust, operating on "never trust, always verify" principles, has become a strategic imperative for managing AI-era threats. Is management implementing the NIST-compliant framework elements: verify explicitly, use least privilege access, and assume breach?

  2. How are we protecting against AI-powered social engineering? What specific defences address deepfake executive impersonation, AI-generated phishing, and automated vulnerability exploitation?​

  3. Are we using AI defensively? What AI-powered threat intelligence, automated response, and continuous monitoring capabilities have we deployed? How do we measure their effectiveness?​

Preparedness and Response Questions

  1. Have we tested our incident response under AI-attack scenarios? Fifty-nine percent of companies now disclose cyber preparedness, including tabletop exercises and simulations. Do these exercises include AI-specific attack vectors like automated lateral movement and real-time adaptive malware?

  2. What is our board's personal cyber exposure? Seventy percent of Australian senior executives were targeted by cyber-attacks in the last 18 months. Directors have access to sensitive company data, influence over major decisions, and public profiles making them easy to research and impersonate. Yet many remain disconnected from daily cybersecurity practices. What specific protections secure board members' personal devices and communications?​

  3. How quickly can we detect and contain AI-powered attacks? Given that breakout times now frequently occur in under an hour, what is our mean time to detect, respond, and recover? How does this compare to industry benchmarks?​

Strategic and Forward-Looking Questions

  1. How does our cybersecurity strategy enable business strategy? Does management view cybersecurity as an inhibitor or an enabler of business objectives? In competitive markets, can robust cyber governance provide strategic advantage through enhanced stakeholder trust?​

  2. Are we prepared for quantum computing threats? Singapore's Cyber Security Agency recently released draft quantum computing guidelines addressing the potential threat to current public-key cryptography. Is management developing quantum-readiness strategies for long-term data confidentiality and integrity?​

  3. What external frameworks guide our approach? Do we adhere to recognised frameworks such as NIST Cybersecurity Framework, ISO 27001, or Singapore's Cyber Security Trustmark? These frameworks demonstrate to stakeholders that governance practices meet established standards.​


The Board.Vision Example: Secure Platforms as Risk Mitigation

Our first article pointed out that traditional board communication methods - printed documents and email attachments - pose significant security risks despite feeling comfortable to directors. This observation becomes even more critical in the AI era, where attackers use machine learning to identify patterns in email traffic, intercept sensitive communications, and exploit document-handling vulnerabilities.​

 

Modern board management platforms like Board.Vision address multiple cyber risk dimensions simultaneously:

 

Encrypted Infrastructure: Role-based permissions ensure access to sensitive information is well-managed and controlled. Data encryption at rest and in transit safeguards confidential information.

Comprehensive Audit Trails: Complete, time-stamped records of board activities track decisions, votes, and actions, ensuring transparency and verifiable records for legal and regulatory compliance. In an AI-enhanced environment where automated actions occur alongside human decisions, audit trails become essential for accountability.

AI-Enhanced Security: The platform's AI capabilities (intelligent summarisation, natural language search, contextual insights) operate within the secure environment rather than requiring directors to use external GenAI tools that might leak sensitive data. This addresses a growing concern: 20% of S&P 500 companies now cite data leakage during GenAI use as a major risk.

Institutional Knowledge Protection: By preserving board history and decisions in a secure, searchable repository, the platform protects against knowledge loss during leadership transitions - a cyber risk often overlooked in traditional security frameworks.​

 

For boards, the selection of governance technology is itself a cybersecurity risk decision. Platforms that integrate security foundations with AI capabilities—rather than bolting AI onto insecure legacy systems - represent the governance infrastructure appropriate for the intelligence age.


From Risk Management to Cyber Resilience

The ultimate goal of board cybersecurity oversight is not to eliminate risk - an impossible standard - but to build cyber resilience: the organisational capacity to anticipate, withstand, recover from, and adapt to cyber incidents while continuing to deliver on strategic objectives.

 

McKinsey's recent research with the National Association of Corporate Directors emphasises that cybersecurity is now recognised as a driver of competitive advantage and critical-asset protection, not merely an investment in avoiding loss. This shift is being accelerated by rapid AI adoption and by boards taking on more oversight responsibilities.​

 

Building cyber resilience in the AI era requires boards to:

 

Embrace the Dual Nature of AI: Recognise that AI simultaneously represents your greatest cyber threat and your most powerful defence. The question is not whether to deploy AI, but how to govern its deployment to maximise defensive capabilities while minimising attack surface.​

Invest Strategically: If only 5-10% of your technology budget goes to cybersecurity, you're probably not doing enough. CISOs expect cyber budgets to grow approximately 10% this year, with particular increases in threat intelligence and application security. Boards should ensure investments align with AI-era threat priorities.​

Demand Continuous Improvement: The threat landscape evolves constantly. Boards must establish regular policy review cycles (ideally quarterly) to ensure frameworks remain current. Seventy-two percent of directors now attend individual cybersecurity education activities—up from 49% in 2022. This learning must be ongoing, not episodic.​

Foster Cross-Functional Governance: Effective cyber resilience requires coordination across IT, security, legal, risk, and business units. Establish cross-functional steering committees with executive sponsorship and clear accountability. In Singapore's context, consider how cyber governance cascades to regional subsidiaries.​

Align with Regulatory Evolution: Singapore's proactive regulatory approach—including the Cybersecurity Act 2018, the Personal Data Protection Act, and emerging AI governance frameworks—increasingly represents the expected standard of care. Directors who ignore these guidelines do so at their organisation's peril, as regulatory expectations solidify and enforcement intensifies.​


Conclusion: Board Leadership in the Intelligence Age

The convergence of AI capabilities and escalating cyber threats marks an inflection point in corporate governance. Boards can no longer delegate cybersecurity to the technology department or treat it as a compliance checkbox. As Singapore's Institute of Directors emphasises, directors have fiduciary responsibilities to stakeholders and must adopt baseline cyber security good practices to safeguard organisational sustainability.​

 

The path forward requires boards to:

  • Elevate cybersecurity from operational concern to strategic imperative, integrating it fully with enterprise risk management

  • Develop AI literacy sufficient to distinguish between genuine defensive capabilities and vendor hype

  • Demand transparency and auditability in any AI systems touching governance processes

  • Model secure practices by using modern board platforms that embed security and AI capabilities within appropriate governance frameworks

  • Ask sophisticated questions that hold management accountable for both traditional cyber hygiene and AI-era threat preparedness

  • View cyber resilience as competitive advantage rather than merely cost of doing business

 

Our earlier article established that boards must recognise the sensitivity of board-related information and the risks of traditional handling methods. Our recent work on AI agents demonstrated how intelligent assistance can make governance both more accessible and more secure.

 

This new imperative combines both: cybersecurity risk management in the AI era requires boards to embrace secure, intelligent platforms while maintaining human judgment, transparency, and accountability at the centre of governance.

 

The organisations that will thrive are those whose boards recognise that cybersecurity is no longer just an IT issue but a core business risk demanding active engagement and oversight. The financial and regulatory consequences of inadequate security measures have made cybersecurity a cornerstone of corporate governance and board responsibility. In Singapore's competitive regional market, demonstrating responsible cyber and AI governance builds stakeholder confidence and creates tangible competitive advantage.

 

The future belongs to boards that transform AI from a potential vulnerability into a powerful ally in protecting organisational value. That transformation begins with leadership that challenges comfortable assumptions, demands evidence-based risk assessment, and commits to continuous learning in a rapidly evolving threat landscape. The question for directors is not whether AI will reshape cybersecurity risk, but whether your board will lead that transformation or be overtaken by it.

 

 




The corporate governance landscape has never been more complex. Board members and stakeholders today face an unprecedented convergence of technological disruption, regulatory complexity, and stakeholder demands that would have been unimaginable just a decade ago. Recent polling data from TSV’s participation in a Global Directors Exchange (GDX) event reveals a telling reality: while 50% of directors prefer digital solutions for document security, 77% actively use generative AI only for simple queries, and many remain uncertain about AI's broader impact on governance. This uncertainty is understandable.


As governance experts observe, board members often encounter modern technology through fragmented and highly hyped media coverage, arriving at meetings with questions like "Have you thought about AI?" without the deeper understanding needed for strategic oversight. This disconnect between awareness and comprehension has created what we could call an "AI awareness gap" during the most transformative technology shift in decades. Yet beneath this complexity lies an opportunity. The future of board governance is not about replacing human judgment with artificial intelligence; it is about enhancing it, creating a partnership where technology serves as an intelligent, ever-present assistant that makes complex governance both more accessible and more secure.


The Boardroom Transformation We're Already Living 

The digital boardroom revolution began with necessity during the global pandemic, but it has evolved into something far more sophisticated. Today's leading governance platforms demonstrate how technology can enhance rather than complicate the governance experience. Consider the remarkable capabilities now available to board members: AI-powered document summarisation, natural language interfaces, and real-time multilingual support. These are not futuristic concepts; they are working realities in secure, encrypted environments that never compromise data security.


The transformation extends beyond individual tools to create comprehensive governance ecosystems. Modern platforms can analyse board materials to identify potential risks, suggest strategic questions tailored to each director's expertise, and provide contextual insights that connect current decisions to historical patterns. This represents a fundamental shift from reactive compliance to proactive strategic oversight.


The Human Side of Technological Complexity

The most significant challenge facing boards is not technological—it is emotional and psychological. Research consistently shows that while directors recognise the importance of technology expertise, many feel overwhelmed by the pace of change and uncertain about their ability to provide meaningful oversight. This concern is entirely valid.

The role of a board director has become exponentially more complex, encompassing not just financial oversight but cybersecurity governance, data privacy compliance, AI ethics, and emerging technology assessment. Traditional board education has not kept pace with these demands, creating a dangerous knowledge gap at the highest levels of corporate governance.

The solution lies not in expecting every director to become a technology expert, but in providing intelligent assistance that makes complex technology approachable and actionable. Think of it as having a knowledgeable colleague who never gets tired, never misses a detail, and can instantly access any piece of relevant information from your organisation's history.




AI as a Personal Assistant in Governance

The concept of AI as a Personal Assistant in governance represents a fundamental reimagining of how boards operate. Unlike generic AI tools that might compromise security or context, purpose-built governance assistants are designed specifically for the unique requirements of board oversight. These systems understand the nuanced language of governance, recognise the importance of regulatory compliance, and maintain the highest standards of data security. They can prepare personalised briefings for each director based on their committee assignments and areas of expertise, automatically flag potential compliance issues before they become problems, and provide real-time insights.


The beauty of this approach lies in its seamlessness. Like the best personal assistants, Agentic AI works behind the scenes to ensure everything runs smoothly while board members focus on what they do best—strategic thinking and informed decision-making. Directors do not need to learn new interfaces or remember complex commands; they simply ask questions in natural language and receive relevant, actionable responses.


Security and Trust: The Foundation of Intelligent Governance

The security concerns surrounding AI in governance are not just valid; they are essential, and the governance of AI is itself a board-level concern. Board discussions involve some of the most sensitive information in any organisation, and any AI system operating in this environment must meet the highest standards of security and privacy.


Leading governance platforms address this through:

  • Private cloud infrastructure

  • Transparent labelling of AI-generated content

  • Comprehensive audit trails

  • Human-centric decision-making


These systems support human judgment, offering context and insights while keeping final decisions in human hands.


Practical Applications: From Abstract to Actionable

The practical applications of AI assistants in governance extend across every aspect of board operations. During meeting preparation, the system can analyse the agenda alongside relevant historical documents, regulatory requirements, and industry trends to provide each director with personalised briefing materials.


During meetings themselves, the assistant can provide real-time fact-checking against corporate records, suggest follow-up questions based on discussion themes, and automatically capture action items and key decisions. After meetings, it can draft initial versions of minutes, track progress on action items, and provide reminders about upcoming deadlines.


Building Confidence Through Gradual Integration

The path to AI-enhanced governance does not require a dramatic overnight transformation. The most successful implementations begin with simple, low-risk applications that build confidence and demonstrate value. Organisations might start with AI-powered document summarisation, allowing board members to quickly grasp key points from lengthy reports without compromising their ability to review full documents when needed. From there, they might add question-and-answer capabilities that allow directors to query board materials in natural language. As comfort levels increase, more sophisticated features can be introduced: predictive risk analysis, automated compliance monitoring, and strategic insights generation.


Through this process, the AI assistant becomes increasingly valuable while remaining transparent and controllable. The key is to ensure that each step demonstrates clear value while maintaining security and governance standards. Success breeds confidence, and confidence enables more ambitious applications of the technology.


The Competitive Advantage of AI-Enhanced Governance

Organisations that successfully integrate AI into their governance processes are discovering significant competitive advantages. They respond faster and make informed decisions based on comprehensive data analysis. These advantages extend beyond operational efficiency to strategic insight. AI-enhanced boards can process larger volumes of information, identify patterns across longer time horizons, and consider more variables in their decision-making processes. They are also better positioned to attract and retain high-quality directors who value efficient, well-supported governance processes. The evidence supporting this transformation is compelling.

Organisations using AI governance tools report significantly faster response times to emerging risks, a substantial reduction in board preparation time, and meaningful improvements in meeting effectiveness scores. These are not marginal gains—they represent fundamental improvements in governance capability.


The Agentic AI Revolution

Agentic AI represents the next evolution—systems that proactively identify issues, suggest actions, and coordinate workflows. These assistants don’t just respond; they anticipate needs and act within defined parameters.


In governance, agentic AI can:

  • Help individuals manage their tasks and priorities

  • Prepare agendas based on deadlines and priorities

  • Flag compliance issues early

  • Coordinate follow-ups across committees


This represents a shift from reactive support to proactive governance assistance. The key to successful agentic AI implementation is to ensure these systems remain transparent, auditable, and under human control. They must enhance decision-making capacity rather than making decisions autonomously. To the user, the technology should feel like an exceptionally capable and tireless assistant, not an autonomous system operating independently.


Looking Forward: The Governance Landscape of Tomorrow

The future of governance will feature seamless integration between human judgment and AI. Directors will arrive better prepared, equipped with insights that would be impossible to compile manually.


This transformation will democratise governance capabilities, making enterprise-grade tools accessible to organisations of all sizes. Smaller companies will implement standards once reserved for large corporations.


AI will evolve to become more predictive and proactive, anticipating needs, identifying trends, and guiding strategy. As regulations around AI governance grow, these systems will support both compliance and innovation.


Embracing the Human-AI Partnership

The future isn’t about choosing between human wisdom and AI—it’s about combining them. Humans bring judgment, ethics, and vision. AI offers analysis, pattern recognition, and efficiency.


This partnership enhances governance while preserving accountability. It acknowledges that modern business complexity requires more support than any individual can provide.


Organisations that embrace this thoughtfully will thrive. They’ll use AI to enable better human decision-making—not replace it. The future-proof boardroom isn’t about having the latest tech—it’s about making governance simpler, smarter, and more effective.

The transformation is already underway, as seen in discussions at events like the Global Directors Exchange. The question isn’t whether AI will change governance, but how quickly organisations will embrace it to make governance smarter and more human. The future belongs to boards that use AI to empower—not replace—human judgment in an increasingly complex world.



In an earlier article we discussed how corporate governance processes can safely be managed digitally via end-to-end secured, sealed processes in a “digital vault”.


How can we enhance the user experience and productivity through smart and safe application of Artificial Intelligence (AI), especially Generative AI, so-called Large Language Models and the latest iteration – AI Agents?

 

Artificial Intelligence, together with Machine Learning (ML), is not new - it has been around for decades, especially for applications involving pattern matching and prediction.

 

Early AI/ML models were used for playing chess, bank fraud detection, customer behaviour analysis and prediction, weather forecasting, medical imaging, military applications and so on. More recently AI has shown up in digital cameras, smartphone apps and even home appliances.  

 

AI/ML became embedded in our lives without us really noticing, because the tools were in the hands of large organisations that had the computing power, tooling and skills to build their own application-specific data models. We only needed to use the end result embedded in our everyday devices. 

 

ChatGPT from OpenAI changed everything. OpenAI used vast computing power to crawl the internet and build a “neural network” model of all that knowledge: a so-called Large Language Model. What’s more, ChatGPT let users make queries and create new content based on the LLM. That is where the word “Generative” in GenAI comes from. 

 

ChatGPT was not the first GenAI implementation, but what OpenAI did that was revolutionary was to make ChatGPT freely available to anyone – AI for the masses!  

 

ChatGPT has since been followed by numerous other offerings, one of the most recent being China’s DeepSeek, which took GenAI to a new level by using much less computing power and being much faster to market. 

 

Agentic AI is the latest, and perhaps most exciting, capability. AI Agents are like customised AI chatbots or automated modules that we can embed in other application software. We can even create our own AI agents using everyday tools like Microsoft 365 Copilot. 

 

By asking questions through the smart use of “prompts”, we can use GenAI to create “new” content or to help us better understand existing content. And we can use AI Agents to automate workflows.  
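To make the idea of prompt-driven automation concrete, here is a minimal sketch of a reusable prompt template for one workflow step, summarising a document. The chat-message structure mirrors the format used by most LLM APIs; the function name, wording and word limit are illustrative assumptions, not part of any particular product.

```python
def build_summary_prompt(document_text: str, max_words: int = 150) -> list[dict]:
    """Assemble chat messages asking an LLM to summarise a document.

    Illustrative sketch only: the system/user split is the common chat
    format; the instructions themselves are example wording.
    """
    system = (
        "You are an assistant that summarises corporate documents. "
        "Use only the text provided; do not invent facts."
    )
    user = (
        f"Summarise the following document in at most {max_words} words:\n\n"
        f"{document_text}"
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user},
    ]

messages = build_summary_prompt("Q3 revenue rose 12% on strong SaaS renewals.")
print(messages[1]["content"])
```

The same template can be re-used for every document in a workflow, which is the essence of turning an ad-hoc prompt into an automated step.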


But there are risks: 


  1. Content produced by AI LLMs is based on an amalgamation of multiple source documents. When we use these tools, we don’t always know what those source documents are, whether they are based on copyrighted content, or even if they are true. In fact, even if the source documents are true, the AI model may combine them in a way that is no longer true. 

 

  2. In the process of submitting a prompt, we are sharing some information with the model. That means we must be very careful about the questions we ask, especially if we are using a publicly available model. We may be sharing confidential information with a third party without knowing it. Or our questions may subtly influence the response in a way that introduces bias. 


Knowing these risks, we can come up with some simple guidelines that help us use GenAI safely. 

 

  1. Never share confidential or sensitive information in a prompt with publicly available models such as ChatGPT. Fortunately, it is possible to set up private instances of essentially the same models, such as GPT-4, and, configured correctly, these can safely be used for such sensitive applications. 
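One simple technical safeguard for this guideline is to scrub obviously sensitive patterns from text before it ever reaches a public model. The sketch below, with assumed patterns for emails and long account-like digit runs, illustrates the idea; a real deployment would need far more thorough redaction and review.

```python
import re

# Illustrative patterns only: emails and 8+ digit runs (account/card-like).
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "NUMBER": re.compile(r"\b\d{8,}\b"),
}

def redact(text: str) -> str:
    """Replace each sensitive match with a labelled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

print(redact("Contact jane.doe@example.com about account 12345678."))
```

Running the redaction step before every prompt makes the "never share" rule a property of the tooling rather than of user discipline alone.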

 

  2. Be cautious about using GenAI to conduct research or to create significant new content. Use tools that link to the source document(s) so the output can be checked, and then do verify the results! 

 

  3. Do use GenAI to help understand existing content, such as summarising emails, documents or PDF files. 

 

  4. Do use GenAI to rearrange, combine, cross-reference or compare your own existing content. 

 

  5. Do use GenAI to create simple, generic drafts of content that you can easily check for correctness, such as job descriptions, marketing plans, etc. 

 

  6. Do always sanity-check generated content, and remember that ultimately you own it! 


With the above in mind, Trusted Services (TSV) has already established its own private LLM instance for use by TSV staff.  

 

Further, TSV is using the LLM APIs to build seamless additional functionality into its Board.Vision corporate governance and compliance platform. 

 

These features include: 


  1. Summarise a document, enabling the user to quickly understand key information without reading the whole document 

 

  2. Ask direct questions of the document and receive answers in natural language, eliminating the need to search or collate manually 

 

  3. Translate sections of the document instantly, making content accessible in different languages 

 

  4. Content-based search across multiple documents 

 

  5. AI-assisted creation of surveys 

 

  6. AI Agents for natural-language self-service in areas such as “how to” guidance, support queries, a directors’ assistant and an admin assistant 
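To illustrate what content-based search across documents involves at its simplest, here is a naive sketch that ranks documents by keyword overlap with the query. Production systems typically use semantic embeddings instead; all function and file names here are illustrative assumptions, not Board.Vision internals.

```python
def score(query: str, document: str) -> int:
    """Count query words that appear in the document (case-insensitive)."""
    doc_words = set(document.lower().split())
    return sum(1 for word in query.lower().split() if word in doc_words)

def search(query: str, documents: dict[str, str]) -> list[str]:
    """Return document names ranked by descending keyword overlap,
    keeping only documents with at least one matching word."""
    ranked = sorted(documents, key=lambda name: score(query, documents[name]),
                    reverse=True)
    return [name for name in ranked if score(query, documents[name]) > 0]

docs = {
    "minutes.txt": "board minutes approving the annual budget",
    "policy.txt": "travel expense policy for staff",
}
print(search("annual budget approval", docs))  # → ['minutes.txt']
```

The key point is that the search matches on document content rather than filenames, which is what distinguishes it from conventional keyword search over titles.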



All of this takes place within the secure, sealed, encrypted Board.Vision digital vault. Data never leaves a TSV-controlled environment. 

 

We look forward to developing other productivity improvements, but security and safety will always be our top priority.  

