April 14, 2026


The Governance Gap Nobody Is Talking About
Enterprise security teams have spent years building identity and access management infrastructure. IAM is essential. But IAM was designed for a world where humans log in, perform actions, and log out. It governs the moment of access. It was never designed to govern the moment of execution.

As AI agents move into production - querying databases, triggering workflows, moving money, modifying records - the relevant control point shifts. Knowing who an agent is, and which systems it can reach, does not tell you whether a specific action it is about to take is authorised to proceed.

This is the AI governance gap. And it is now a regulatory and audit liability. The EU AI Act’s high-risk AI obligations take effect in August 2026. The Colorado AI Act becomes enforceable in June 2026. Regulators, auditors and boards are beginning to ask a question most organisations cannot currently answer: how do you know the AI action was authorised?


What Action Governance Is - And Why It Matters
The framework introduces Action Governance as a distinct control layer - one that sits between access and execution, and answers a different question to IAM.

Traditional access control asks: can this actor reach this system? Action Governance asks: should this specific action - by this actor, under this authority, within these constraints - be permitted to execute right now? These are not the same question.

An AI agent can have legitimate, fully credentialled access to a financial platform and still execute a transaction it was never authorised to make. IAM permitted it into the system. Only Action Governance can stop it at the moment of action.

Action Governance is not a monitoring layer. It is an execution control point - evaluated before the action takes effect, not observed after.


The Trust Stack
At the centre of the framework is a four-primitive trust stack that defines the chain of responsibility every governed AI action must satisfy:

Identity → Authority → Intent → Action

Identity - every actor, whether human, organisation, AI agent or machine, must have a cryptographically verifiable identity linked to an accountable principal.

Authority - identity alone is not enough. The system must validate what authority has actually been delegated. A user permitting an AI agent to summarise documents has not automatically authorised it to query production databases. Authority must be explicit, scoped and revocable.

Intent - the declared purpose of the action must be bound to its execution. When what an agent does diverges from what it was instructed to do, the governance layer must be able to identify and act on that divergence.

Action - only if identity, authority and intent are all satisfied does the action proceed - and when it does, it produces cryptographic proof that it was authorised and executed within defined constraints.

If any primitive is missing, the action cannot be governed - only observed after the fact.
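The chain above can be sketched as a pre-execution gate. This is a minimal illustration, not an API the framework defines: the names (ActionRequest, govern, intent_matches) are hypothetical, and a real deployment would verify identity cryptographically rather than against an in-memory set.

```python
from dataclasses import dataclass
from typing import Callable, FrozenSet, Set

@dataclass
class ActionRequest:
    actor_id: str               # verifiable identity of the AI agent
    principal: str              # accountable human or organisation it acts for
    scopes: FrozenSet[str]      # authority explicitly delegated to the agent
    declared_intent: str        # what the agent was instructed to do
    action: str                 # what it is about to execute

def govern(req: ActionRequest,
           verified_ids: Set[str],
           intent_matches: Callable[[str, str], bool]) -> bool:
    """Evaluate all four primitives BEFORE execution, not after."""
    if req.actor_id not in verified_ids:                      # Identity
        return False
    if req.action not in req.scopes:                          # Authority
        return False
    if not intent_matches(req.declared_intent, req.action):   # Intent
        return False
    return True                                               # Action may proceed
```

For example, an agent with a valid identity but only a "summarise" scope is stopped at the moment it attempts a funds transfer, even though IAM would have let it into the system.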
Three Governance Domains

The framework defines three domains that together cover the full scope of enterprise AI governance:

Model Governance ensures AI models are safe, reliable and aligned with intended use. It is a pre-condition for safe AI deployment. It is not the control point.

System Governance ensures AI systems interact safely with enterprise infrastructure, establishing the boundaries of what they can reach. It is also not the control point.

Action Governance is the control point. It determines whether AI actions are authorised to execute, under what authority and constraints, and whether they should proceed, be constrained or be blocked. This is the domain that existing AI governance frameworks do not address - and the one that regulated enterprises most urgently need.


AI Risk Classification
Not all AI systems carry the same governance burden. The framework classifies deployments across four risk tiers based on what systems are authorised to do, not how capable their underlying models are:

Low Risk - Advisory Systems: AI that generates insights or recommendations but does not execute actions. Governance focuses on accuracy and bias.

Operational Risk - Workflow Influence: AI that influences processes but does not directly execute actions. Human control typically remains in the loop.

High Risk - System Execution: AI capable of executing actions within enterprise systems - modifying data, triggering workflows, interacting with operational platforms. Requires Action Governance at the point of execution.

Critical Risk - Autonomous Execution: AI capable of initiating transactions, controlling infrastructure or operating entirely without human intervention. Requires continuous runtime governance, with the ability to intervene or revoke in real time.

Most enterprise AI deployments in 2026 are entering the High Risk and Critical Risk tiers without the governance infrastructure those tiers require. That is the gap the framework addresses.
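The tier logic above can be expressed as a small classification helper. This is a hedged sketch: the tier names mirror the framework, but the boolean capability flags and the precedence order are illustrative assumptions, not part of the published model.

```python
from enum import Enum

class Tier(Enum):
    LOW = "Advisory Systems"
    OPERATIONAL = "Workflow Influence"
    HIGH = "System Execution"
    CRITICAL = "Autonomous Execution"

def classify(executes_actions: bool,
             influences_workflow: bool,
             autonomous: bool) -> Tier:
    """Tier follows what the system is authorised to DO,
    not how capable its underlying model is. The most
    permissive capability present determines the tier."""
    if autonomous:
        return Tier.CRITICAL        # operates without human intervention
    if executes_actions:
        return Tier.HIGH            # executes within enterprise systems
    if influences_workflow:
        return Tier.OPERATIONAL     # influences, human stays in the loop
    return Tier.LOW                 # advisory output only
```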
Regulatory Alignment

The Enterprise AI Governance Framework maps to the three primary regulatory and standards frameworks enterprise teams are working against:

NIST AI Risk Management Framework - the framework covers all four NIST AI RMF core functions and extends the Manage function with Action Governance, the execution control layer NIST does not currently define.

EU AI Act - the framework directly addresses the compliance challenge the Act creates but does not solve: how to demonstrate, on demand, that a high-risk AI system’s actions were authorised, constrained and within defined parameters at the time of execution.

ISO AI Governance Standards - the framework aligns across all four ISO governance domains and extends each with the execution control layer that determines whether AI actions are permitted to proceed.


Eight Infrastructure Primitives
Implementing Action Governance requires foundational infrastructure. The framework defines eight primitives: Identity, Authority, Intent, Consent, Policy, Enforcement, Verification and Audit.

Consent deserves particular attention. Unlike most AI governance models, this framework treats consent as a first-class governance primitive - a signed, auditable artifact that specifies what actions are permitted and on whose behalf. In financial services, healthcare and regulated data environments, the basis for an AI action must be provable, not just logged.
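The idea of consent as a signed, auditable artifact can be sketched as follows. This is an assumption-laden illustration: the framework does not specify a signature scheme, and a production system would likely use asymmetric signatures and a standard envelope format rather than the bare HMAC shown here for brevity.

```python
import hashlib
import hmac
import json

def sign_consent(key: bytes, principal: str, agent: str,
                 permitted_actions: list) -> dict:
    """Produce a consent artifact stating what the agent may do,
    on whose behalf, with a signature over the canonical body."""
    body = {"principal": principal,
            "agent": agent,
            "permitted_actions": sorted(permitted_actions)}
    payload = json.dumps(body, sort_keys=True).encode()
    body["signature"] = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return body

def verify_consent(key: bytes, artifact: dict) -> bool:
    """Recompute the signature over everything except the signature
    field; any tampering with the permitted actions invalidates it."""
    body = {k: v for k, v in artifact.items() if k != "signature"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, artifact["signature"])
```

The point of the sketch is the property the framework asks for: consent that can be verified on demand, not merely a log line asserting it existed.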
Where to Start

The framework concludes with a practical three-step starting point:

Classify. Map active AI deployments against the risk tiers. Identify which systems are already operating at High or Critical tier - systems that execute actions, not just generate outputs.

Gap-assess. For each High or Critical system, ask three questions: can you verify the identity of the AI actor? Can you prove the authority it was operating under? Can you produce tamper-resistant audit evidence on demand? If the answer to any is no, that is your governance gap.

Prioritise. Establish identity and delegated authority controls for your highest-risk systems first. IAM is the foundation. Action Governance is the next layer. Both are required. Start where the exposure is greatest.
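The three gap-assessment questions can be captured as a simple checklist helper. Every field name here is a hypothetical placeholder; the only point is that an unanswered question surfaces as a named gap rather than staying implicit.

```python
def gap_assess(system: dict) -> list:
    """Return the unanswered governance questions for one
    High- or Critical-tier system. Field names are illustrative."""
    checks = {
        "verifiable actor identity": system.get("identity_verifiable", False),
        "provable delegated authority": system.get("authority_provable", False),
        "tamper-resistant audit evidence": system.get("audit_tamper_resistant", False),
    }
    return [question for question, answered in checks.items() if not answered]
```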
Download the Framework

The Enterprise AI Governance Framework is freely available - no sign-up required. It includes the full governance model, risk classification tiers, defined roles, infrastructure primitives, and 18 procurement questions for evaluating AI system vendors.

Download the Enterprise AI Governance Framework →

Nuggets is the trust infrastructure for AI actions. If your organisation is deploying AI at High or Critical risk tier and needs to close the governance gap, we’re happy to talk.