Governance for Participants, Not Tools
A previous piece argued for the participant frame: AI is not a tool the operator wields; it is a participant in the system, with scope, context, and accountability properties of its own. The governance design has to follow.
The natural follow-up question — what changes in practice? — is the one I want to answer here.
Access control asks one question: is this principal allowed to perform this action? The principal is the human or service identity issuing the request. The action is the verb. The answer is yes or no. This is the question almost every enterprise governance framework is built around.
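As a minimal sketch, with every name illustrative rather than drawn from any real IAM product, the whole check fits in a lookup:

```python
# Minimal sketch of the tool-frame check. All names are illustrative,
# not drawn from any particular IAM system.

def is_allowed(principal: str, action: str, permissions: dict) -> bool:
    """The whole question: is this principal allowed to perform this action?"""
    return action in permissions.get(principal, set())

# A service identity allowed to read reports but not delete them.
permissions = {"svc-reports": {"read:report", "export:report"}}
assert is_allowed("svc-reports", "read:report", permissions)
assert not is_allowed("svc-reports", "delete:report", permissions)
```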
Agency tracking asks a different question: what is this participant doing, who set its scope, and does the current action sit inside that scope?
“The participant” is the AI system as an actor with a history.
“Who set its scope” names the provisioning step: the human or system that authorized the AI to exist with specific permissions and constraints.
“Does the current action sit inside that scope” is a question about coherence, not authorization.
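A hedged sketch of what that check might look like, assuming a provisioning record exists to check against; every field name here is mine, not any product's:

```python
# Hedged sketch of the participant-frame check. "Scope" is a record set at
# provisioning time; every field name here is an assumption for illustration.

from dataclasses import dataclass, field

@dataclass
class Scope:
    provisioned_by: str        # who set the scope
    provisioned_at: str        # when, as an ISO timestamp
    permitted_categories: set  # what the agent was set up to do

@dataclass
class Participant:
    agent_id: str
    scope: Scope
    history: list = field(default_factory=list)  # everything attempted so far

def within_scope(p: Participant, action_category: str) -> bool:
    """Coherence, not authorization: does this action fit the declared scope?"""
    p.history.append(action_category)  # the participant is an actor with a history
    return action_category in p.scope.permitted_categories
```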
Three design shifts follow.
Logging shifts from action records to agent records.
An action record says: at 14:32:07, identity X performed verb Y on resource Z. Useful for some purposes. Insufficient for governance of AI participants.
An agent record says: this agent was provisioned by human H, with scope S, at time T0; here is everything it has attempted to do since T0; here is what was permitted by S; here is what was attempted outside S; here is the full trajectory of its session.
The agent record is the unit of reconstruction. The action record is one frame inside it.
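One plausible shape for the two record types, with field names assumed for illustration; the point is the containment relationship:

```python
# Assumed shapes for the two record types; the agent record is the unit of
# reconstruction, and each action record is one frame inside it.

from dataclasses import dataclass, field

@dataclass
class ActionRecord:
    timestamp: str  # at 14:32:07 ...
    identity: str   # ... identity X ...
    verb: str       # ... performed verb Y ...
    resource: str   # ... on resource Z

@dataclass
class AgentRecord:
    agent_id: str
    provisioned_by: str  # human H
    scope: set           # scope S, as a set of permitted verbs
    provisioned_at: str  # time T0
    attempts: list = field(default_factory=list)  # full trajectory since T0

    def permitted(self) -> list:
        return [a for a in self.attempts if a.verb in self.scope]

    def attempted_outside_scope(self) -> list:
        return [a for a in self.attempts if a.verb not in self.scope]
```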
Audit reconstructs scope at execution time, not just events.
Audit trails record actions at the moment they happen. For governance of AI participants, the action is only part of the record. The other part is the scope the participant was provisioned with — set before the action, often by a different principal, in places the audit trail was not designed to reach.
For human actors, “what they were authorized to do” is approximated by their role at the time of action. Roles change slowly. Looking up the role at time T is usually possible.
For AI participants, scope is set at provisioning and can include constraints that live nowhere in IAM — system prompt, tool list, behavioral guardrails, session-level invocation context. These are application-layer constructs. They are rarely versioned. They are rarely persisted in a form that survives the next deploy.
The audit design has to capture provisioned scope alongside actions. Without it, “what was the AI authorized to do?” becomes unanswerable two deploys later.
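One way to make that capture durable, sketched under the assumption of simple content-addressed, write-once storage; nothing here is a real product's schema:

```python
# Sketch of capturing provisioned scope so it survives the next deploy.
# The storage interface and field names are assumptions, not a real schema.

import hashlib
import json

def snapshot_scope(agent_id: str, system_prompt: str, tool_list: list,
                   guardrails: dict, store: dict) -> str:
    """Persist a versioned copy of the application-layer scope at provisioning."""
    scope = {
        "agent_id": agent_id,
        "system_prompt": system_prompt,  # constructs IAM never sees,
        "tool_list": tool_list,          # captured verbatim at provisioning
        "guardrails": guardrails,
    }
    blob = json.dumps(scope, sort_keys=True).encode()
    version = hashlib.sha256(blob).hexdigest()[:12]  # content-addressed id
    store[version] = blob                            # write-once audit storage
    return version  # stamp this version onto every subsequent action record
```

If each action record carries the scope version, reconstruction becomes a lookup rather than an archaeology exercise.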
Authorization binds to provisioning intent, not to runtime invocation.
In a tool-based governance model, authorization checks at runtime: does the calling identity have the right to perform this action right now? The check is verb-against-permission, evaluated at the moment of action.
In a participant-based model, authorization checks against the provisioning record: was this agent set up to do things in this category, by a principal with the standing to authorize that category, with a session that has not drifted from its declared scope? The check is action-against-intent, evaluated against a record that predates the action by hours or days.
The two checks are not substitutable: the runtime check passes actions that should fail the intent check, and the reverse holds as well. Most governance product surfaces today implement the first check. The second check is the one the framework I’ve been developing is built around.
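A sketch of the contrast, with all field names and the drift signal assumed for illustration; neither function is any product's real API:

```python
# Sketch of the two checks side by side. Field names and the drift flag are
# assumptions for illustration, not a real authorization interface.

def runtime_check(identity: str, action: str, permissions: dict) -> bool:
    # Verb against permission, evaluated at the moment of action.
    return action in permissions.get(identity, set())

def intent_check(agent: dict, action_category: str,
                 provisioning_records: dict) -> bool:
    # Action against intent, evaluated against a record that
    # predates the action by hours or days.
    record = provisioning_records.get(agent["agent_id"])
    if record is None:
        return False  # no provisioning record, no standing to act
    return (
        action_category in record["authorized_categories"]  # set up for this category?
        and record["provisioner_had_standing"]              # authorized by the right principal?
        and not agent["session_drifted"]                    # still inside declared scope?
    )
```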
These three shifts are not framework recommendations. They are design requirements that follow from treating AI as a participant rather than a tool.
A governance system that does not produce agent records, reconstruct provisioned scope, or check against provisioning intent is governing tools. The AI systems being deployed today are not tools.
The naming was the easy part. The mechanism is what comes next.