Agentic AI and Dynamic Authorization: A Series Recap


Summary

I’ve been exploring how policy, delegation, and continuous authorization can make agentic AI systems useful without making them ungovernable. This post ties together six essays that trace that journey from foundational ideas to practical patterns.

Agents at Our Service

Over the past few months, I've been exploring a central question in agentic AI: how autonomous systems can be useful without becoming ungovernable. Taken together, the essays below trace a progression from the foundational insight that authorization is the hard problem, through practical patterns for putting policy inside the agent loop, shaping plans with constraints, and governing actions with externalized policy, to delegation both within and across domains.

  • Why Authorization Is the Hard Problem in Agentic AI — This post lays the foundation for the whole series by arguing that static authorization breaks down in agentic systems. Agents act over time, under changing conditions, so authorization has to become dynamic, continuous, and policy-based, guiding behavior step by step rather than simply approving or denying a request once.
  • A Policy-Aware Agent Loop with Cedar and OpenClaw — This post moves authorization into the heart of the agent loop itself. Instead of treating access control as a one-time check, it shows how a Cedar-backed decision point can continuously evaluate tool use and feed results back into replanning, making authorization part of the agent's ongoing reasoning.
  • Beyond Denial: Using Policy Constraints to Guide OpenClaw Planning — Traditional authorization often acts as a gate that says "no" after an agent has already decided what to do. This essay explores a better pattern: giving the agent policy constraints up front so it can plan within allowed boundaries while still having every concrete action enforced at runtime.
  • Childproofing the Control Plane: Using Cedar to Build Frontal Lobes for Agentic Systems — Connecting an agent to real systems like Home Assistant makes the value of agentic AI obvious, but it also makes the risks obvious. This post explains how externalized, deterministic Cedar policies can provide the governance layer that keeps an agent useful while preventing it from crossing safety, security, and privacy boundaries.
  • Delegation as Data: Applying Cedar Policies to OpenClaw Subagents — Here I show how delegation inside a single domain can be modeled as data rather than hard-coded logic. In the OpenClaw and Cedar demo, a stable policy set evaluates dynamic delegation records so subagents can be given narrowly scoped authority without expanding the overall trust boundary.
  • Cross-Domain Delegation in a Society of Agents — This post argues that cross-domain delegation is about more than handing one agent a credential. For delegation to work across organizational boundaries, agents need policy-defined limits, promises that communicate intended behavior, and reputation that creates trust over time.
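The common mechanism across these posts is a decision point sitting inside the agent loop: every proposed tool call is evaluated before it runs, and denials flow back into replanning rather than simply halting the agent. Here is a minimal sketch of that shape in Python. The `decide` function and the `policies` dictionary are hypothetical stand-ins for illustration only; a real system would evaluate Cedar policies through an actual engine rather than a lookup table, and none of these names come from the OpenClaw or Cedar APIs.

```python
# Minimal sketch of a policy-aware agent loop. The decision point here is a
# toy lookup table standing in for a real Cedar policy evaluation.
from dataclasses import dataclass


@dataclass
class Decision:
    allowed: bool
    reason: str = ""


def decide(principal: str, action: str, resource: str, policies: dict) -> Decision:
    """Toy decision point: allow only (action, resource) pairs granted to the
    principal. A real implementation would call a Cedar engine instead."""
    allowed = (action, resource) in policies.get(principal, set())
    return Decision(allowed, "" if allowed else f"{action} on {resource} not permitted")


def run_agent(plan, principal, policies):
    """Execute each planned step only if the decision point allows it.
    Denied steps are collected so the planner can revise around them,
    rather than the loop failing outright."""
    executed, denied = [], []
    for action, resource in plan:
        d = decide(principal, action, resource, policies)
        if d.allowed:
            executed.append((action, resource))  # a real loop would invoke the tool here
        else:
            denied.append(((action, resource), d.reason))  # feedback for replanning
    return executed, denied


# Example: the agent may read and set the thermostat, but nothing else.
policies = {"home-agent": {("read", "thermostat"), ("set", "thermostat")}}
plan = [("read", "thermostat"), ("unlock", "front_door")]
done, blocked = run_agent(plan, "home-agent", policies)
```

Note that authority lives entirely in the `policies` data, not in the loop's code: this is the "delegation as data" idea in miniature, since granting a subagent narrow authority means adding a record, not changing logic.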

What ties these posts together is the idea that agentic AI needs governance that is as dynamic as the agents themselves. If we want agents that can plan, delegate, and act in the real world, then policy has to move from the edge of the system into its core, shaping behavior continuously and making autonomy governable rather than merely powerful.


Photo Credit: Agents at Our Service from ChatGPT (public domain)


Please leave comments using the Hypothes.is sidebar.

Last modified: Thu Mar 19 11:12:37 2026.