Foundational AI Concepts
Artificial Intelligence (AI)
Software systems designed to perform tasks that typically require human intelligence, including reasoning, pattern recognition, decision-making and language understanding. In enterprise environments, AI systems range from analytical tools that generate insights to autonomous systems capable of executing actions across business processes.

Generative AI
A class of AI systems that produce outputs such as text, code, images or audio in response to prompts. These systems generate content rather than directly executing actions. Governance risks associated with generative AI relate primarily to accuracy, bias, hallucination and appropriate use.

Large Language Model (LLM)
A type of AI model trained on large-scale text data that can understand and generate human language. LLMs underpin most generative AI systems. On their own, they produce outputs; when connected to tools, APIs and enterprise systems, they enable action-taking behavior.

AI Agent
An AI system capable of perceiving context, making decisions and executing actions to achieve defined objectives. AI agents can call APIs, trigger workflows, access enterprise systems, initiate transactions and interact with other agents. The transition from passive AI to active agents introduces new governance requirements.

Agentic AI
AI systems operating in an autonomous or semi-autonomous mode, executing sequences of actions across systems with limited human intervention. Agentic AI introduces operational risk because its actions have direct consequences within enterprise environments.

Multi-Agent System
An environment in which multiple AI agents interact, coordinate or delegate tasks. Governance complexity increases significantly because authority, intent and accountability must remain traceable across multiple agents and execution paths.

Autonomous Execution
The capability of an AI system to initiate and complete actions without requiring human approval at each step. Autonomous execution requires strong governance controls to ensure actions remain authorized and constrained.

Shadow AI
AI tools, models or agents operating within an organization without formal approval, registration or governance. Shadow AI introduces significant risk because identity, authority and policy controls cannot be applied to systems that are not visible.

Foundation Model
A large, pre-trained AI model that serves as the base for downstream applications and agents. Foundation models are an input to governance but are not themselves a governance mechanism.

Prompt Injection
An attack technique in which malicious instructions are embedded in data processed by an AI system, causing it to deviate from its intended behavior. In agentic systems, prompt injection can lead to unauthorized actions being executed. Action Governance provides the enforcement layer that can detect and block such actions before they take effect.

Governance Concepts
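Several of the governance concepts that follow, in particular runtime governance and execution control, come down to a single runtime check: evaluate each requested action and return a decision to allow, constrain or block. The sketch below illustrates that check in Python; the identifiers and policy data are hypothetical illustrations, not part of the framework itself.

```python
from dataclasses import dataclass

# Hypothetical policy data: delegated scopes and per-action limits.
DELEGATED_SCOPES = {"agent:invoice-bot": {"invoices:create", "payments:send"}}
ACTION_LIMITS = {"payments:send": 1_000.00}

@dataclass
class ActionRequest:
    agent_id: str
    action: str
    amount: float = 0.0

def evaluate(req: ActionRequest) -> str:
    """Decide at the point of execution: allow, constrain or block."""
    granted = DELEGATED_SCOPES.get(req.agent_id, set())
    if req.action not in granted:
        return "block"       # outside delegated authority
    limit = ACTION_LIMITS.get(req.action)
    if limit is not None and req.amount > limit:
        return "constrain"   # in scope, but exceeds a policy limit
    return "allow"
```

Under these assumed policies, a 5,000 payment from this agent would return "constrain", while any action outside its delegated scope returns "block" regardless of amount.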
AI Governance
The policies, processes, controls and infrastructure required to ensure AI systems operate safely, reliably and in alignment with regulatory and organizational requirements. Traditional AI governance focuses on models and data. Modern governance must extend to action-level control.

Model Governance
The governance domain responsible for ensuring AI models are safe, reliable and fit for purpose. This includes training data quality, bias mitigation, performance evaluation, hallucination management and model lifecycle controls. Model Governance is a prerequisite for deployment but does not control execution.

System Governance
The governance domain responsible for ensuring AI systems interact securely with enterprise infrastructure. This includes data access controls, API governance, system integrations and vendor risk management. System Governance determines what systems AI can access, but not what actions it may execute.

Action Governance
The governance domain that determines whether a specific AI action is authorized to execute, under what authority and within what constraints. Action Governance operates at the point of execution, after access has been granted and before the action takes effect. It is the primary control point for agentic AI systems and the layer that existing governance frameworks do not define.

Runtime Governance
The enforcement layer that operates continuously while AI systems are running in production. Runtime Governance evaluates each requested action in real time and returns a decision to allow, constrain or block execution based on identity, authority, intent and policy.

Execution Control
The capability to enforce governance decisions at the exact moment an AI system attempts to execute an action. Execution control translates governance policy into operational enforcement.

Governance Lifecycle
The continuous process of governing AI systems across their lifecycle: Discover, Evaluate, Deploy, Operate and Evolve. Governance is not a one-time activity and must adapt as systems and risks change.

Human-in-the-Loop
A governance mechanism requiring human review or approval before certain AI actions are executed. This is typically applied to high-risk or sensitive operations where the consequences of an error are significant.

AI Risk Classification
A structured approach to categorizing AI systems based on the level of authority they are granted and the potential impact of their actions. Risk classification determines the level of governance required. The Enterprise AI Governance Framework defines four tiers: Low Risk (advisory), Operational Risk (workflow influence), High Risk (system execution) and Critical Risk (autonomous execution).

Audit Trail
A tamper-resistant, verifiable record of AI actions, decisions and outcomes. A governance-grade audit trail must capture who initiated an action, which agent executed it, under what authority and within what constraints - and must be cryptographically verifiable and portable, meaning it can be produced on demand to regulators, auditors or internal governance teams. Access logs are not sufficient; an audit trail must record not just what happened but what authority permitted it.

Identity and Trust
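The identity concepts that follow, in particular Know Your Agent, can be illustrated with a short sketch: a registry that links every agent identity to an accountable principal, and refuses any agent that is unregistered (i.e. Shadow AI) or inactive. All names below are hypothetical; a real deployment would back this with cryptographically verifiable credentials rather than an in-memory dictionary.

```python
# Hypothetical KYA registry: every agent must be linked to an accountable
# human or organizational principal before it may act.
AGENT_REGISTRY = {
    "agent:invoice-bot": {"principal": "org:acme-finance", "status": "active"},
}

def know_your_agent(agent_id: str) -> str:
    """Return the accountable principal for a registered, active agent."""
    record = AGENT_REGISTRY.get(agent_id)
    if record is None:
        raise PermissionError(f"unregistered (shadow) agent: {agent_id}")
    if record["status"] != "active":
        raise PermissionError(f"agent not active: {agent_id}")
    return record["principal"]
```

A registered agent resolves to its accountable principal; an unregistered one is refused before authority, policy or any further governance check is even considered.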
Identity
The verifiable identity of an actor performing an action. Actors may include humans, organizations, AI agents and machines. Identity is the foundation of accountability and the first primitive in the trust stack.

Know Your Agent (KYA)
The principle that organizations must be able to identify, verify and account for every AI agent operating within their environment - in the same way that financial institutions are required to identify their customers. KYA requires that agents carry registered, cryptographically verifiable identities linked to accountable human or organizational principals. Without KYA, authority cannot be delegated, policy cannot be enforced and governance cannot be applied.

Identity and Access Management (IAM)
The systems and processes used to authenticate identities and control access to resources. IAM determines who can access systems but does not determine whether specific actions should be executed once access is granted. IAM is necessary but not sufficient for governing autonomous AI.

Delegated Authority
A mechanism by which a human or organizational principal grants an AI agent permission to act on their behalf within defined scope and constraints. Delegated authority must be explicit, limited and revocable.

Delegation Chain
The traceable sequence of authority from a human or organizational principal to an AI agent. Each step in the chain must be verifiable to ensure accountability. If any link cannot be proven, the action cannot be considered authorized.

Verifiable Credentials
Cryptographically signed digital credentials that allow an entity to prove identity, permissions or attributes without relying on a centralized authority.

Cryptographic Proof
A mathematical method for proving that a claim is valid without revealing the underlying data. In AI governance, this enables verification of identity, authority and execution without exposing sensitive information.
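A delegation chain like the one described above can be made concrete with a simplified sketch: each link is signed by its delegator, each delegatee must match the next delegator, and scope may only narrow down the chain. This illustration uses HMACs and a shared key registry purely for brevity; a real system would use public-key signatures and verifiable credentials, and all identifiers here are hypothetical.

```python
import hashlib
import hmac

# Hypothetical key registry; real systems would use public-key signatures.
KEYS = {"org:acme": b"root-secret", "user:alice": b"alice-secret"}

def sign_link(delegator, delegatee, scopes):
    """Delegator signs a grant of some of its scopes to a delegatee."""
    msg = f"{delegator}>{delegatee}>{','.join(sorted(scopes))}".encode()
    sig = hmac.new(KEYS[delegator], msg, hashlib.sha256).hexdigest()
    return {"from": delegator, "to": delegatee, "scopes": set(scopes), "sig": sig}

def verify_chain(chain, root, requested_scope):
    """Each link must be signed, connect to the previous one, and only narrow scope."""
    holder, allowed = root, None
    for link in chain:
        msg = f"{link['from']}>{link['to']}>{','.join(sorted(link['scopes']))}".encode()
        expected = hmac.new(KEYS[link["from"]], msg, hashlib.sha256).hexdigest()
        if link["from"] != holder or not hmac.compare_digest(link["sig"], expected):
            return False          # broken or forged link: not authorized
        if allowed is not None and not link["scopes"] <= allowed:
            return False          # a delegate cannot grant more than it holds
        holder, allowed = link["to"], set(link["scopes"])
    return allowed is not None and requested_scope in allowed
```

If any link fails verification, or the requested scope was never carried down the chain, the action cannot be considered authorized, exactly as the Delegation Chain entry requires.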
Decentralized Identity
An identity model in which individuals or entities control their own credentials rather than relying on centralized identity providers.

Decentralized Identifiers (DIDs)
A W3C standard for creating unique, verifiable and decentralized digital identifiers.

Selective Disclosure
A technique that allows specific attributes of a credential to be shared without exposing the full credential.

The Trust Stack
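The trust stack defined below evaluates four primitives in order: Identity, Authority, Intent and Action. A minimal sketch of that ordered evaluation, with hypothetical data (a real system would verify each layer cryptographically rather than against in-memory sets):

```python
# Hypothetical checks for the four trust-stack primitives. Each layer must
# be satisfied before the next is evaluated; a missing link means the
# action cannot be considered authorized.
VERIFIED_IDENTITIES = {"agent:invoice-bot"}
GRANTED_AUTHORITY = {"agent:invoice-bot": {"invoices:create"}}

def authorize_action(identity: str, action: str, intent: str):
    """Return (authorized, reason) after walking the trust stack in order."""
    if identity not in VERIFIED_IDENTITIES:
        return (False, "identity: unverified actor")
    if action not in GRANTED_AUTHORITY.get(identity, set()):
        return (False, "authority: action outside delegated scope")
    if not intent:
        return (False, "intent: no declared purpose for this request")
    return (True, "action: authorized for execution")
```

The first failing layer explains why the action was refused; only when all layers are satisfied does execution proceed, which is what makes the stack auditable rather than merely observable after the fact.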
Trust Stack
The ordered set of primitives required to authorize an AI action: Identity, Authority, Intent and Action. Each layer must be satisfied for an action to be considered authorized. If any link is missing, the action cannot be governed - only observed after the fact.

Authority
The right to perform a specific action within defined limits. Authority must be explicitly granted, enforceable at runtime and revocable.

Intent
The declared purpose and scope of an action request. Intent provides context for evaluating whether an action is permissible under delegated authority and policy constraints.

Consent
An explicit, time-bound authorization provided by a principal allowing specific actions to be performed on their behalf.

Policy
The rules and constraints that define how and when actions may be executed, enforced consistently across systems at the point of execution.

Enforcement
The runtime application of policy to determine whether an action is allowed, constrained or blocked.

Verification
The process of proving that an action was authorized and executed within defined constraints.

Audit
The structured record of governance decisions and execution outcomes, enabling accountability and compliance.

Enterprise and Security Terms
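Two of the terms defined below, the Policy Decision Point and Policy Enforcement Point, form a standard separation of concerns: the PDP evaluates policy, the PEP intercepts execution and applies the decision. A minimal sketch of that split (the class and policy names are hypothetical illustrations):

```python
class PolicyDecisionPoint:
    """Evaluates policy; knows nothing about how actions are executed."""
    def __init__(self, policy):
        self.policy = policy  # e.g. {"agent:invoice-bot": {"invoices:create"}}

    def decide(self, subject: str, action: str) -> str:
        return "Permit" if action in self.policy.get(subject, set()) else "Deny"

class PolicyEnforcementPoint:
    """Intercepts execution and enforces whatever the PDP decides."""
    def __init__(self, pdp: PolicyDecisionPoint):
        self.pdp = pdp

    def execute(self, subject: str, action: str, fn):
        if self.pdp.decide(subject, action) != "Permit":
            raise PermissionError(f"{subject} may not perform {action}")
        return fn()  # only runs when the decision is Permit
```

Keeping decision and enforcement separate means policy can be updated centrally at the PDP while every PEP across the environment enforces it consistently.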
AI Security
The discipline focused on protecting AI systems from threats such as prompt injection, data poisoning and unauthorized manipulation.

Agent Identity
A unique, verifiable identity assigned to an AI agent, enabling accountability and policy enforcement.

Machine Identity
Digital identities assigned to non-human entities such as services, workloads and devices.

Non-Human Identity (NHI)
A category that includes machine identities and AI agents, representing all non-human actors in a system.

Zero Trust Architecture
A security model based on continuous verification of identity and context rather than implicit trust.

Policy Decision Point (PDP)
A component that evaluates whether an action should be allowed based on policy.

Policy Enforcement Point (PEP)
A component that enforces the decision made by the Policy Decision Point.

Least Privilege
The principle of granting only the minimum level of access required to perform a task.

Access Control
Mechanisms used to restrict access to systems and data.

Authorization
The process of determining whether an action is permitted.

Authentication
The process of verifying the identity of an actor.

Regulatory and Standards Context
EU AI Act
A European regulatory framework that classifies AI systems by risk and imposes requirements on high-risk systems, including accountability and auditability. Non-compliance carries fines of up to €35M or 7% of global annual turnover for the most serious violations. The Enterprise AI Governance Framework addresses the compliance challenge the Act creates but does not solve: how to demonstrate, on demand, that an AI system's actions were authorized and within defined parameters at the time of execution.

NIST AI Risk Management Framework (AI RMF)
A framework providing guidance for identifying, assessing and managing risks associated with AI systems, organized around four core functions: Govern, Map, Measure and Manage.

ISO/IEC 42001
An international standard for managing AI systems and governance processes.

SOC 2
An auditing standard assessing controls related to security, availability and data protection.

Regulatory Accountability
The obligation to demonstrate that AI systems operated within authorized parameters and that verifiable evidence can be produced on demand. Regulatory accountability requires governance infrastructure that generates cryptographic proof at the point of execution - not retrospective analysis of access logs. The question regulators are increasingly asking is not whether an AI system had access, but whether each action it took was authorized, constrained and evidenced.

This glossary forms part of the Enterprise AI Governance Framework developed by Nuggets Labs to help organizations safely deploy AI systems that execute actions in production environments, ensuring those actions are authorized, constrained and verifiable at the point of execution.