TLDR
A 2024 survey by KPMG found that 68% of organisations had employees using AI tools that had not been approved or reviewed by IT or legal — a phenomenon known as shadow AI. Most governance frameworks don't account for it. This guide gives you the AI Governance Stack framework (Policy, Accountability, Controls, Audit), a plain-English breakdown of the EU AI Act's obligations, and a 30-day governance audit process that surfaces the risks your organisation has already taken without knowing it.
Contents
- Why Governance Comes Before Strategy
- The Four AI Risk Categories Leaders Must Understand
- Building an AI Policy That People Actually Follow
- Accountability Structures: Who Owns AI Decisions in Your Organisation
- Data Governance and AI: The Hidden Overlap
- The EU AI Act: What Leaders Need to Know in Plain English
- Shadow AI: The Governance Problem No One Talks About
- Case Study: Financial Services Firm — Enterprise AI Governance Rollout
- The AI Governance Audit: A 30-Day Framework
- Getting Board-Level Buy-In for AI Governance
Why Governance Comes Before Strategy
The instinct in most organisations is to treat governance as the last step — the compliance wrap that goes around an AI strategy once it's been built. This sequence is wrong, and the cost of getting it backwards is accumulating across the enterprise sector in the form of data breaches, regulatory inquiries, and reputational incidents that could have been prevented with a basic policy framework.
Governance is not a constraint on AI strategy. It is the foundation that makes strategy possible at scale. Without a governance framework, every AI initiative is a one-off experiment with individually negotiated rules. With a governance framework, every AI initiative operates within a consistent structure — approved tools, clear data handling rules, defined review processes, and an accountability structure that ensures someone is responsible for outcomes. Strategy built on that foundation moves faster, not slower, because teams don't have to reinvent the governance question every time they start a new initiative.
The organisations that get this right treat AI governance the way they treat financial controls — not as a bureaucratic overhead, but as the infrastructure that allows confident action. A finance team with good controls moves faster than one without them, because the controls reduce the cognitive overhead of every transaction decision. The same principle applies to AI. Organisations with clear AI governance make decisions about AI use faster because they've pre-resolved the governance questions. Organisations without it slow down on every decision because every new use case reopens the same fundamental questions about data, accountability, and risk.
The second reason governance must come first is regulatory. The EU AI Act is in force. UK GDPR applies to AI processing of personal data. The ICO has published guidance on AI and data protection that organisations are expected to follow. If your AI strategy exists without a governance framework that accounts for these obligations, you have a compliance exposure that grows with every initiative the strategy launches — and the longer it runs without governance, the larger that exposure becomes.
This guide gives you the AI Governance Stack — a four-layer framework for building governance that works at enterprise scale. It is designed to be implemented incrementally, starting with a one-page policy and an approved tool list, and building upward as the organisation's AI use matures.
The Four AI Risk Categories Leaders Must Understand
AI risk is not a monolith. Treating all AI risks the same leads to either over-governance (applying heavy controls to low-risk applications) or under-governance (applying light controls to high-risk ones). The practical approach is a four-category risk model that allows proportionate governance: operational risk, data risk, compliance risk, and reputational risk.
Operational risk is the risk that an AI system produces an incorrect, incomplete, or misleading output that is acted upon without adequate review, causing operational harm. The most common example is AI-generated text that contains factual errors — a report with wrong figures, a client communication with incorrect information, a regulatory filing with inaccurate details. Operational risk is managed through output review processes: defining which AI outputs require human review before use, by whom, and under what standard.
Data risk is the risk that confidential, personal, or proprietary data is exposed, misused, or processed in ways that breach data protection obligations when AI tools are used. The most common example is an employee who pastes client data, employee records, or commercially sensitive material into a public AI tool — a consumer chatbot interface that may use that data to train the underlying model or that stores it on servers outside the organisation's control. Data risk is managed through data handling protocols: specifying what categories of data can and cannot be used with which tools, and ensuring employees understand the rules.
Compliance risk is the risk of breaching legal or regulatory obligations through AI use. This includes EU AI Act obligations for high-risk AI applications, UK GDPR obligations for AI processing of personal data, sector-specific regulations (FCA requirements for AI in financial decisions, ICO guidance on AI in HR decisions), and contractual obligations to clients about how their data is handled. Compliance risk requires active legal review of AI use cases that touch regulated domains — it cannot be managed through general policy alone.
Reputational risk is the risk that AI use — even if technically legal and operationally sound — produces outcomes that harm the organisation's reputation if made public. This includes AI-generated content that is tone-deaf, offensive, or inconsistent with the organisation's values. It includes AI systems that produce biased outputs in hiring or service delivery. It includes the perception that important decisions affecting people were made entirely by automated systems without human judgment. Reputational risk requires editorial standards for AI-assisted communications and explicit human accountability for high-visibility decisions.
The four categories require different controls and different accountabilities. A framework that treats them identically will misallocate effort and leave gaps where it matters most.
Building an AI Policy That People Actually Follow
Most AI policies fail for the same reason most policies fail: they are written for legal protection rather than practical use, and they produce either confusion ("I'm not sure if this is allowed") or disengagement ("nobody actually reads this"). The result is that the organisation has a policy on paper and shadow AI in practice.
An AI policy that people actually follow has four characteristics. It is short enough to read. It is specific enough to answer the questions people actually have. It is positive — it tells people what they can do, not just what they can't. And it has a clear escalation path — so when someone encounters a situation the policy doesn't cover, they know exactly who to ask.
Regardless of sector or organisation size, every AI policy must address four elements:
Approved tools. A list of AI tools that have been reviewed and approved for use in the organisation. Not a comprehensive list of every AI tool in existence, but a curated list of tools that have been assessed for data security, compliance with your data handling obligations, and suitability for professional use. Any tool not on the list requires approval before use. This is the single most important element of an AI policy — without it, there is no baseline for what is and isn't permitted.
Data handling rules. A clear statement of what categories of data can be used with AI tools, and which cannot. Personal data about employees or clients should not be used with public AI tools that may process or retain that data. Commercially sensitive information, client data, and proprietary methodology should be subject to the same restrictions. A practical three-category system works well: data that can be used with any approved tool, data that can only be used with approved enterprise tools under a data processing agreement, and data that cannot be used with any AI tool.
Output review standards. A statement that AI outputs used for external purposes — client communications, regulatory filings, public documents — must be reviewed by a qualified professional before use. This is not a statement that AI can't be used for these purposes. It is a statement that human accountability attaches to the output, regardless of how it was produced.
Accountability statement. A clear statement that the professional who uses an AI tool is accountable for the output, regardless of how the output was generated. This matters for two reasons: it prevents the defence of "the AI produced it", and it establishes the right mental model — AI is a tool that produces drafts, proposals, and analyses, not a decision-maker that removes human accountability from outcomes.
The policy document itself should be one to two pages. The supporting tool registry — the approved tools list with brief descriptions of what each is approved for — can be longer and should be updated regularly as new tools are reviewed. Together, these two documents constitute a functional AI governance foundation that an organisation of any size can implement.
Accountability Structures: Who Owns AI Decisions in Your Organisation
Governance without accountability is aspiration. For an AI governance framework to work, every AI-related decision must have a named owner — a person who is accountable for the outcome and empowered to make the decision. In most organisations, this accountability structure doesn't exist, and the vacuum creates both governance risk and decision paralysis.
The AI Governance Stack framework structures accountability across four layers, corresponding to the four types of decision that need owners.
Policy layer: Who owns the AI policy itself? Who has the authority to update it, grant exceptions, and communicate changes to the organisation? In most organisations, this is joint accountability shared between the Chief Information Officer (or equivalent) for technical standards and the General Counsel (or equivalent) for legal and compliance standards. Where these roles don't exist, the accountability should sit with whoever owns legal risk and technology strategy respectively. The policy owner is the court of last resort for "is this use case permitted?" questions.
Accountability layer: Who is accountable for the AI use within each function? Not the person who uses the tool, but the person who is responsible for the function's compliance with the AI policy. In practice, this is typically the functional head — the Head of Operations, the Head of Finance, the Head of Legal — who is accountable for ensuring their team follows the policy and for escalating edge cases. Naming these people explicitly, and including AI governance in their accountability frameworks, turns the policy from a document into an active management structure.
Controls layer: Who is responsible for implementing and maintaining the technical controls that support the policy — the approved tool registry, the data classification framework, the output review checklists? This is typically an IT or information security function, but in smaller organisations may be a designated individual. Controls accountability is distinct from policy accountability — it's the operational work of keeping the governance infrastructure current and functional.
Audit layer: Who is responsible for periodically reviewing whether the organisation is actually following its AI policy? This should be independent of the Policy and Controls layers — ideally an internal audit function or a designated review responsibility in the risk and compliance team. Without periodic audit, policies drift from practice over time, and the gap between stated governance and actual governance widens without anyone noticing until it becomes a problem.
Data Governance and AI: The Hidden Overlap
Most organisations that have invested in data governance — defining data classifications, managing access rights, implementing data protection controls — have already built most of the infrastructure they need for AI data governance. The problem is that the two frameworks are typically managed separately and designed without reference to each other.
AI data governance is not a new domain. It is an extension of data governance into a new context: the use of personal and proprietary data as input to AI systems. The questions are the same — what data is being processed, by whom, for what purpose, under what legal basis, with what security controls — but the context creates new wrinkles that data governance frameworks written before the AI era don't adequately address.
Three specific overlaps require active management. First, the question of whether using data with an AI tool constitutes processing under GDPR. The ICO's guidance on AI and data protection is clear that processing personal data through an AI system falls within the GDPR's definition of processing, and that all the usual obligations apply — lawful basis, purpose limitation, data minimisation, and so on. An organisation whose employees are routinely processing personal data through consumer AI tools is likely in breach of these obligations whether or not they have a policy saying they shouldn't be.
Second, the question of whether AI systems used to make or inform decisions about individuals constitute automated decision-making under GDPR. Automated decision-making with legal or similarly significant effects requires specific safeguards, including the right to human review. If your organisation uses AI to screen CVs, flag compliance concerns, or route customer complaints, and if those AI outputs inform consequential decisions without human review, you have an automated decision-making obligation that requires active management.
Third, the question of intellectual property and proprietary data. When employees use AI tools to process proprietary methodologies, client-specific work products, or commercially sensitive analyses, the data handling terms of the AI tool matter. Some consumer AI tools use input data to improve the underlying model — meaning your proprietary work is potentially becoming part of a model that other users can access. Enterprise AI tools with data processing agreements typically do not, but employees using consumer tools don't always know the difference. Data governance must specify which tool categories are acceptable for which data types, and communicate this clearly.
The EU AI Act: What Leaders Need to Know in Plain English
The EU AI Act is the world's first comprehensive regulatory framework for AI. It entered into force in August 2024, with its obligations phasing in in stages through 2025 and 2026. It applies to any organisation that places AI systems on the EU market or uses AI systems that affect EU citizens — which, in practice, means most mid-to-large UK organisations operating in international markets.
The Act structures obligations around four risk levels. Unacceptable risk applications are prohibited outright: these include social scoring systems that evaluate individuals for access to services, AI systems designed to manipulate human behaviour in harmful ways, and mass real-time biometric surveillance. These prohibitions took effect in February 2025. High-risk applications are permitted but regulated: they require conformity assessments, technical documentation, human oversight mechanisms, and registration with the EU AI database. The categories of high-risk AI include employment decisions (recruitment, performance management, task allocation), credit and insurance risk assessment, biometric identification, law enforcement applications, and access to essential services. General purpose AI and limited-risk applications face lighter transparency requirements. Minimal risk applications face no specific obligations.
The practical implications for most enterprises are concentrated in the high-risk category. If you use AI in HR decisions — automated screening, performance assessment, promotion flagging — that is a high-risk application. If you use AI in credit or fraud decisions affecting individuals, that is high-risk. If you deploy an AI customer service system that makes consequential routing or service decisions, that may be high-risk depending on the context. The Act does not prevent you from doing any of these things. It requires that you do them with documented human oversight, explainable processes, and registration where required.
Non-compliance carries significant penalties: up to €35 million or 7% of global annual turnover for violations of the prohibited applications provisions; up to €15 million or 3% of global annual turnover for other violations. These penalties are not hypothetical: the European AI Office, created within the European Commission to oversee enforcement, was established in 2024 and has already issued guidance signalling active enforcement intent.
For UK organisations, the EU AI Act does not directly apply through UK domestic law post-Brexit, but it applies to UK organisations operating in the EU market, and the UK government has signalled that AI-specific regulation is forthcoming. The practical approach is to use the EU AI Act's risk framework as a planning tool regardless of direct legal application — it provides a rigorous and well-developed framework for thinking about AI risk that is likely to be influential in any UK regulatory development.
Shadow AI: The Governance Problem No One Talks About
Shadow AI is the use of AI tools by employees outside of approved channels — without IT review, without legal sign-off, and without governance oversight. It is pervasive, it is growing, and it is almost certainly more widespread in your organisation than your leadership team believes.
According to KPMG's 2024 Enterprise AI Adoption research, 68% of organisations had employees regularly using consumer AI tools for work tasks without IT or legal knowledge. The term "regularly" is important here — this is not occasional personal use that bleeds into work contexts. It is routine professional practice in organisations that have not established a clear approved tool framework.
Shadow AI exists because of a policy vacuum. Employees have genuine productivity needs. AI tools address those needs effectively. If the organisation hasn't told them which tools are approved and how to use them safely, they use whichever tools they have access to — typically the consumer tools they use personally. This is not malice. It is rational problem-solving in the absence of guidance. The policy vacuum, not the employee behaviour, is the root cause.
The risks shadow AI creates are not theoretical. They include: confidential data being processed by AI tools under terms of service that allow the provider to use that data for model training; client data being exposed to AI tools that don't meet the organisation's data processing obligations; AI-generated outputs being used without review in client-facing work, creating quality and accuracy risk; and the organisation being unable to audit its own AI use because the AI use is invisible to leadership.
Shadow AI cannot be addressed by prohibition alone. Blanket bans on AI tool use don't eliminate the behaviour — they drive it further underground, making it invisible to governance entirely. The approach that works is to replace the policy vacuum with a clear, positive framework: here are the tools that are approved, here is how to use them appropriately, and here is how to request approval for a new tool if you need one. When employees have a legitimate route to approved AI use, the motivation for shadow AI largely disappears.
The second element of addressing shadow AI is discovery — running an audit to understand what tools are already in use before designing the governance framework. Trying to govern what you don't know about is guesswork. A structured 30-day governance audit, described in Section 9, is the practical starting point.
Case Study: Financial Services Firm — Enterprise AI Governance Rollout
A UK-based financial services firm with 2,200 employees reached an inflection point when a junior analyst, working on a client project under time pressure, used an unapproved consumer AI tool to summarise a set of client financial documents. The tool was a consumer chatbot with no data processing agreement. The incident was identified internally before any data breach occurred, but it triggered a governance review that revealed the depth of the shadow AI problem.
Within 30 days of the review, the firm had identified 47 instances of unapproved AI tool use across 6 departments, involving tools ranging from consumer chatbots to AI writing assistants to no-code AI builders. None had been reviewed by IT or legal. Three involved processing of data that was subject to client confidentiality obligations.
The firm implemented the AI Governance Stack framework over the following 90 days:
- Layer 1 (Policy): a two-page AI usage policy, an approved tool registry of 7 tools reviewed and cleared for different data categories, and a data handling classification covering AI use.
- Layer 2 (Accountability): named functional AI leads in each of the 6 departments where shadow AI had been identified, with explicit accountability in their role frameworks.
- Layer 3 (Controls): a shadow IT monitoring process to identify new AI tool installations, a mandatory AI onboarding module for all new starters, and a data handling checklist attached to the approved tool registry.
- Layer 4 (Audit): a quarterly review process led by internal audit, covering tool registry currency, policy compliance spot checks, and incident review.
The result at the 90-day mark: a staff survey showed 91% of employees understood what AI tools they were permitted to use, and 84% reported feeling confident about what they were and weren't permitted to do with AI. Shadow AI incidents in the following quarter dropped to zero detectable instances — not because shadow AI use became impossible, but because employees had a clear and functional legitimate alternative.
The firm's experience illustrates the most important principle of AI governance: the goal is not to prevent AI use, it's to make legitimate AI use the path of least resistance. When the approved tools are good, the policy is clear, and the escalation path is simple, the incentive for shadow AI disappears. The governance framework becomes an enabler of AI adoption, not a barrier to it.
The AI Governance Audit: A 30-Day Framework
Before designing a governance framework, you need to understand what you're governing. A 30-day AI governance audit gives you the baseline: what AI tools are currently in use, by whom, for what purposes, and what risks they create. In our experience, a governance audit typically surfaces 30-60% more AI tool usage than leadership expects to find. This is not evidence of unusual risk-taking by employees — it is evidence of normal human behaviour in the absence of clear guidance.
Week 1: Discovery. The discovery phase uses three inputs. First, a confidential staff survey asking employees which AI tools they use for work tasks, how frequently, and for what purposes. The survey should be explicitly non-punitive — employees must believe that honest answers won't create personal risk for them, or they'll underreport. Second, an IT review of all software installed on company devices and accessed through company networks, specifically looking for AI tools, AI features within existing platforms, and browser-based AI tools. Third, a desktop review of any existing policies or guidance that mention AI, to understand the current governance baseline (often: none). The output of week 1 is a raw inventory of AI tool use across the organisation.
Week 2: Risk Assessment. For each tool identified, assess against four questions: What data is it processing? Does it have a data processing agreement suitable for business use? Is its use consistent with your existing data protection obligations? What is the risk if the output is wrong and acted upon? This assessment maps each tool to one of the four risk categories described in Section 2 and produces a prioritised list of tools requiring immediate action (cease and replace), tools requiring review before continued use (assess and approve or reject), and tools that are likely approvable (approve and formalise).
Week 3: Policy and Tool Registry Drafting. Using the risk assessment outputs, draft the core governance documents: the AI usage policy (target: two pages), the approved tool registry (listing tools cleared for use, with data handling categories for each), and a data handling classification specific to AI use. These documents don't need to be perfect in week 3 — they need to be functional. A policy that is 80% right and published is more valuable than a policy that is 100% right and still in draft six months later.
Week 4: Communication and Accountability Assignment. Publish the policy. Communicate it to all employees with explicit framing: this is positive guidance, not punishment for past behaviour; these are the tools that are approved; this is what you do if you need a tool that isn't on the list; this is who to contact with questions. Simultaneously, assign the accountability structure: name the functional AI leads, brief them on their responsibilities, and establish the quarterly audit cadence. The output of week 4 is a functioning governance baseline. It is not the finished article — governance frameworks mature over time — but it is sufficient to address the immediate risks identified in the discovery phase and to create a legitimate alternative to shadow AI.
Getting Board-Level Buy-In for AI Governance
AI governance proposals often stall at the board level because they are framed as compliance exercises with costs and no obvious returns. This framing misses the strategic value of governance and makes board approval harder than it needs to be. The framing that works is risk management — specifically, the reduction of three types of risk that boards are already accountable for.
Regulatory risk. The EU AI Act, UK GDPR as applied to AI, and ICO guidance on AI processing create real legal obligations that are not going away. The cost of an enforcement action — in financial penalties, management time, and reputational damage — is almost certainly higher than the cost of building a governance framework. For a board accountable for legal and regulatory compliance, this framing is persuasive. The governance framework is the cost of staying inside the regulatory boundary. The alternative is running the organisation outside that boundary and hoping nobody notices.
Operational risk. A single AI governance incident — confidential client data processed through a public AI tool, an AI-generated report with material errors used in a client deliverable, an automated decision affecting an individual without proper oversight — can cause operational harm that is disproportionate to the underlying event. Boards understand that operational risk management requires visible controls. AI governance is the operational risk management framework for AI. Presenting it in those terms, with specific risk scenarios relevant to the organisation's sector, is more effective than presenting it as a technology or compliance matter.
Reputational risk. The reputation implications of AI governance failures are increasingly visible in public reporting. Organisations that have faced scrutiny for AI-related incidents — biased hiring algorithms, hallucinated client advice, exposed customer data — have experienced reputational consequences that lasted well beyond the operational incident. Boards are acutely sensitive to reputational risk. Presenting the governance framework as reputational risk protection, with specific examples from comparable organisations in your sector, gives board members a clear reason to act now rather than wait.
The board presentation for AI governance approval should contain four elements: the current state (what AI is being used now, what the known gaps are, what the risk assessment shows), the proposed framework (the AI Governance Stack in summary — Policy, Accountability, Controls, Audit), the resource requirement (typically: a project manager for 30 days to build the baseline, legal review of the policy, ongoing maintenance overhead of one person-day per month), and the risk cost of inaction (the specific regulatory, operational, and reputational risks of leaving the current position unchanged). This structure converts a compliance proposal into a risk management decision — which is the kind of decision boards are set up to make.
For leaders who need to build broader AI capability alongside governance, the companion guides on executive AI literacy and building an AI-ready workforce address the training and capability questions that sit alongside governance in a complete AI strategy.
Key Takeaways
- 68% of organisations have employees using unapproved AI tools for work tasks without IT or legal review — shadow AI is already widespread and almost certainly present in your organisation whether or not you know about it.
- The 4 AI risk categories requiring governance: operational risk (wrong outputs acted upon), data risk (confidential data exposed through AI tools), compliance risk (regulatory obligations breached), and reputational risk (AI use that harms organisational reputation if made public).
- An effective AI policy doesn't require 50 pages — a two-page usage policy with a curated approved tool registry is a sufficient and functional governance baseline that can be implemented in 30 days.
- The EU AI Act is in force, with obligations phasing in from 2025, creating legally binding requirements for high-risk AI applications including AI used in employment decisions, credit assessments, and customer service routing — with penalties of up to €35M or 7% of global turnover for prohibited applications.
- The AI Governance Stack provides a four-layer framework: Policy layer (usage rules and approved tools), Accountability layer (named owners in each function), Controls layer (technical and process controls), and Audit layer (periodic independent review).
- Getting board buy-in requires framing governance as risk management in three domains: regulatory risk (EU AI Act, GDPR), operational risk (AI output errors in client work), and reputational risk (public scrutiny of AI governance failures).
- A 30-day governance audit typically surfaces 30-60% more AI tool usage than leadership expects — the discovery phase is not a disciplinary exercise but a diagnostic that enables proportionate governance design.
- Shadow AI is not primarily a technology problem — it is a policy vacuum problem. When the approved tools are good, the policy is clear, and the escalation path is simple, the incentive for shadow AI largely disappears.