Trust Travels With Actions
In autonomous AI systems, trust is not established at login. It is established at the moment an action is taken. Traditional security models assume that once access is granted, subsequent actions can be implicitly trusted. This assumption breaks down when AI agents act continuously, across tools, clouds, and organisations, often without direct human supervision. For autonomous systems to operate safely in production, trust must travel with every action. This means being able to prove, after execution, that a specific action was taken by an authorised actor, on whose behalf it acted, under which policy, with declared intent and valid consent. The concepts below define the trust model Nuggets applies to make autonomous actions provable, auditable, and compliant.
Actions Are the Unit of Trust
An action is the fundamental unit of trust evaluation. Actions include any operation that has an effect beyond the agent itself, such as:
- Invoking an API
- Accessing or modifying data
- Executing a transaction
- Triggering downstream systems
- Initiating interactions with other agents
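Treating the action as the unit of trust implies that every action carries its own context: who acted, for whom, what was done, and why. A minimal sketch of such an action record, with illustrative field names that are not the Nuggets schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical action record: the unit of trust evaluation.
# Field names and identifier formats are illustrative assumptions.
@dataclass(frozen=True)
class Action:
    actor_id: str      # verifiable identity of the acting entity
    principal_id: str  # the principal on whose behalf it acts
    operation: str     # e.g. "invoke_api", "modify_data"
    target: str        # the resource or downstream system affected
    intent: str        # declared purpose and expected outcome
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

action = Action(
    actor_id="agent:invoice-bot",
    principal_id="org:acme",
    operation="invoke_api",
    target="billing/v1/charge",
    intent="settle an approved invoice",
)
```

Because the record is immutable and timestamped, it can be evaluated against policy before execution and preserved afterwards as evidence.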
Actor Identity
Actor identity establishes who or what is taking an action. Nuggets supports verifiable identities for all actors involved in autonomous systems:
- Humans
- Organisations
- Machines and services
- AI agents
Authority
Authority determines whether an actor is permitted to take a specific action. In autonomous systems, authority is not static. It is contextual, time-bound, and constrained by policy. Authority is evaluated based on:
- The identity of the acting entity
- The principal on whose behalf it acts
- The declared intent of the action
- Applicable policies and constraints
- Whether valid consent exists
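The inputs above can be combined into a single authorisation check. A minimal sketch, assuming a time-bound grant object (names like `Grant` and `is_authorised` are hypothetical, not a Nuggets API):

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Grant:
    actor_id: str          # who may act
    principal_id: str      # on whose behalf
    operations: set        # which operations the grant covers
    expires_at: datetime   # authority is time-bound, not static

def is_authorised(grant, actor_id, principal_id, operation, now=None):
    """Authority holds only while identity, principal, scope, and time all match."""
    now = now or datetime.now(timezone.utc)
    return (
        grant.actor_id == actor_id
        and grant.principal_id == principal_id
        and operation in grant.operations
        and now < grant.expires_at
    )

grant = Grant(
    actor_id="agent:invoice-bot",
    principal_id="org:acme",
    operations={"invoke_api"},
    expires_at=datetime(2100, 1, 1, tzinfo=timezone.utc),
)
allowed = is_authorised(grant, "agent:invoice-bot", "org:acme", "invoke_api")
```

The key property is that the decision is re-evaluated per action rather than assumed from a prior login.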
Policy
Policies define the rules under which actions are allowed to occur. Unlike traditional access policies that are enforced only at system boundaries, Nuggets policies travel with actions across tools, clouds, and organisations. Policies may impose constraints related to:
- Scope of action
- Data usage
- Jurisdiction
- Risk thresholds
- Regulatory requirements
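One way to model a policy that travels with an action is as a named set of constraints evaluated against the action's context. A minimal sketch under that assumption (the constraint names and thresholds are illustrative):

```python
# Hypothetical policy: a mapping of constraint names to predicates
# over an action's context. Thresholds and names are illustrative.
def evaluate_policy(policy, context):
    """Return (allowed, violated_constraint_names)."""
    violations = [name for name, check in policy.items() if not check(context)]
    return (not violations, violations)

payments_policy = {
    "scope": lambda ctx: ctx["operation"] in {"read", "charge"},
    "jurisdiction": lambda ctx: ctx["region"] in {"UK", "EU"},
    "risk_threshold": lambda ctx: ctx["amount"] <= 500,
}

ok, violations = evaluate_policy(
    payments_policy, {"operation": "charge", "region": "UK", "amount": 120}
)
```

Returning the specific violated constraints, rather than a bare deny, gives audits and investigations something concrete to point at.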
Intent
Intent is a declared description of why an action is being taken and what outcome is expected. By binding intent to actions, autonomous systems become transparent and accountable. Intent enables organisations to:
- Distinguish permitted behaviour from misuse
- Detect policy violations
- Justify outcomes during audits and investigations
Consent
Consent defines the conditions under which actions may occur. In autonomous systems, consent must be:
- Explicit
- Verifiable
- Enforceable across system boundaries
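For consent to be verifiable and enforceable across system boundaries, it needs to be machine-checkable wherever the action lands. A minimal sketch using an HMAC signature as a stand-in for a real verifiable credential (the function names and key handling are illustrative assumptions):

```python
import hashlib
import hmac
import json

# Sketch: consent as a signed grant that any boundary can verify.
# HMAC over canonical JSON stands in for verifiable credentials.
def sign_consent(consent: dict, key: bytes) -> str:
    payload = json.dumps(consent, sort_keys=True).encode()
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

def verify_consent(consent: dict, signature: str, key: bytes) -> bool:
    return hmac.compare_digest(sign_consent(consent, key), signature)

key = b"shared-secret"  # assumption: key distribution handled elsewhere
consent = {"principal": "org:acme", "scope": "billing:charge"}
signature = sign_consent(consent, key)
```

Because verification needs only the consent payload, the signature, and the key, the check can be enforced at any boundary the action crosses, and any alteration of the granted scope invalidates it.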
Provenance and Accountability
Provenance establishes a complete, verifiable record of what occurred. For every evaluated action, Nuggets enables capture of:
- The acting entity
- The principal represented
- The declared intent
- The policies evaluated
- The outcome of the action
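A provenance record is only as trustworthy as its resistance to tampering. A minimal sketch of a hash-chained log, where each record commits to its predecessor so any later edit is detectable (the record fields and chaining scheme are illustrative, not the Nuggets format):

```python
import hashlib
import json

def _digest(body: dict) -> str:
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def append_record(log: list, record: dict) -> list:
    """Append a record that commits to the hash of the previous record."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    body = {**record, "prev_hash": prev_hash}
    body["hash"] = _digest({k: v for k, v in body.items() if k != "hash"})
    log.append(body)
    return log

def verify_log(log: list) -> bool:
    """Recompute every hash; any altered record breaks the chain."""
    prev = "0" * 64
    for rec in log:
        body = {k: v for k, v in rec.items() if k != "hash"}
        if rec["prev_hash"] != prev or rec["hash"] != _digest(body):
            return False
        prev = rec["hash"]
    return True

log = []
append_record(log, {"actor": "agent:invoice-bot", "principal": "org:acme",
                    "intent": "settle an approved invoice", "outcome": "allowed"})
append_record(log, {"actor": "agent:invoice-bot", "principal": "org:acme",
                    "intent": "settle an approved invoice", "outcome": "denied"})
```

Rewriting any captured field, even long after execution, changes that record's hash and breaks every link after it, which is what makes the provenance record verifiable rather than merely logged.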