Let’s start with the fundamentals. Authorization is the process of determining what you’re allowed to do after the system knows who you are. Think of it this way: authentication is showing your ID at the door, proving you are who you say you are. Authorization is what happens next: the bouncer checking the list to see if you can access the VIP section, the main floor, or just the coat check.
At its core, authorization protects objects (files, databases, APIs, services) from unauthorized operations (reading, writing, deleting, executing). It’s the invisible security perimeter around every digital interaction you have. When you share a photo album with family, edit a collaborative document, or access your bank account, authorization is working behind the scenes, making split-second decisions about whether to grant or deny your request.
And here’s the thing: if authorization fails, everything fails. Get it wrong, and you either lock out legitimate users (denial of service) or, far worse, grant access to people who shouldn’t have it (security breach). It’s the foundation of digital trust.
For decades, organizations relied on a few tried-and-true approaches to access control. Let’s walk through them.
The simplest model: Access Control Lists (ACLs). You literally maintain a list of who can access what. Alice can read Document A. Bob can edit Document B. Carol can delete Document C. Does your username match an entry on the list? You’re in. Doesn’t match? Denied.
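In code, the idea is almost embarrassingly simple. Here's a minimal sketch in Python (the users, files, and actions are made up) of what an ACL lookup boils down to:

```python
# A minimal ACL: for each resource, a list of who may do what.
acl = {
    "document_a": {"alice": {"read"}},
    "document_b": {"bob": {"read", "edit"}},
    "document_c": {"carol": {"read", "edit", "delete"}},
}

def is_allowed(user: str, resource: str, action: str) -> bool:
    """Allow only if the user appears on the resource's list with that action."""
    return action in acl.get(resource, {}).get(user, set())

print(is_allowed("alice", "document_a", "read"))   # True: Alice is on the list
print(is_allowed("alice", "document_b", "edit"))   # False: not on the list, denied
```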
This works beautifully for small systems, maybe 10 users and 10 files. But scale it to an enterprise with thousands of employees and millions of resources? It becomes an administrative nightmare. You're forced to manage privileges individually for every single person accessing every single object. And here's the killer: it's completely static. When something changes (someone gets promoted, leaves the company, finishes a project), you have to manually update every single ACL they appear on.
The result? What security folks call “privilege creep.” Users accumulate more and more access over time because it’s just too cumbersome to revoke everything when they change roles. They end up with way more authority than their current job requires, creating a massive attack surface just waiting to be exploited.
RBAC was the industry's answer to ACL chaos, and it was rightly hailed as a major leap forward. Instead of managing individual permissions, you define roles that mirror your organizational structure ("Marketing Manager," "Senior Engineer," "Financial Analyst") and assign bundles of permissions to those roles.
When someone joins as a Senior Engineer, they automatically inherit all the permissions that role carries. When they leave or transfer, you update their role assignment, and the system handles the complex recalculation of their effective permissions. You’re managing 10 roles instead of 10,000 individual user permissions. It’s elegant, it scales, and it mirrors how organizations actually work.
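A rough sketch of the same idea in code, with illustrative role and permission names rather than anything from a real product:

```python
# RBAC in miniature: permissions hang off roles, users hang off role assignments.
role_permissions = {
    "senior_engineer": {"repo:read", "repo:write", "ci:trigger"},
    "marketing_manager": {"campaigns:read", "campaigns:write"},
}

user_roles = {
    "dana": {"senior_engineer"},
}

def is_allowed(user: str, permission: str) -> bool:
    """A user may do anything granted by any of their roles."""
    return any(permission in role_permissions.get(role, set())
               for role in user_roles.get(user, set()))

print(is_allowed("dana", "repo:write"))     # True, via senior_engineer
user_roles["dana"] = {"marketing_manager"}  # a transfer is one assignment change
print(is_allowed("dana", "repo:write"))     # False: effective permissions recalculated
```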
But here’s where RBAC starts to crack: it struggles with nuance. Roles are typically static, mirroring the org chart. What happens when you need a policy that says, “Only supervisors who’ve completed mandatory ethics training, logging in from a corporate VPN connection after 6 PM, can approve payments over $50,000”?
With pure RBAC, you’d have to create a new role: “VPN-Ethics-Trained-After-Hours-Supervisor-Over-50K-Approver.” This is called “role explosion,” and it’s absurd. You end up creating so many granular, one-off roles that you’ve just replaced one administrative nightmare with another.
Before we dive into more sophisticated models, we should talk about Policy-Based Access Control, which takes a different approach: expressing authorization decisions as explicit, formal policies that can be centrally managed and evaluated.
Think of PBAC as taking all those implicit rules scattered across your organization ("contractors can't access financial data," "documents can only be downloaded during business hours," "approval requires manager sign-off") and codifying them into a machine-readable policy language.
The most prominent example is XACML (eXtensible Access Control Markup Language), which provides a standardized way to write these policies. A policy might look like: "IF subject.role = 'physician' AND subject.department = resource.owning_department AND action = 'write' THEN permit WITH obligation(send_email_to_patient)."
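Real XACML is verbose XML, so take this as a hedged Python rendering of that rule's evaluation logic rather than actual XACML:

```python
# Illustrative only: a policy decision point that returns a decision plus obligations.
def evaluate(subject: dict, resource: dict, action: str):
    if (subject.get("role") == "physician"
            and subject.get("department") == resource.get("owning_department")
            and action == "write"):
        return "permit", ["send_email_to_patient"]   # permit, with an obligation attached
    return "deny", []

decision, obligations = evaluate(
    {"role": "physician", "department": "cardiology"},
    {"owning_department": "cardiology"},
    "write",
)
print(decision, obligations)   # permit ['send_email_to_patient']
```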
Where PBAC Shines:
The rules become explicit rather than implicit, they live in one centrally managed place instead of being scattered through application code, and every decision can be audited against a written policy.
Why PBAC Alone Isn’t Enough for AI Agents:
The challenge with pure PBAC for AI agents is that it’s fundamentally about evaluating decisions, not about structuring the authorization model itself. PBAC is brilliant at saying “here’s how to decide,” but it doesn’t tell you what data model to use (roles? attributes? relationships?) or how to handle the dynamic, compositional nature of agent requests.
In practice, PBAC works best as a layer on top of another model: a policy language evaluated over roles, attributes, or relationships rather than a data model of its own.
For AI agents specifically, PBAC’s rigid policy evaluation can create bottlenecks. If every agent action requires complex policy evaluation with network round-trips to attribute sources, you’re introducing latency that destroys the agent’s responsiveness. Additionally, policies need to be written ahead of time — but agents often need to perform novel combinations of actions that policy writers never anticipated.
That said, PBAC’s strength is in governance. For high-stakes agent decisions: “can this agent approve a $100K purchase?” or “can this agent access customer PII?” having explicit, auditable policies is non-negotiable. The key is combining PBAC’s governance strengths with more flexible models for the agent’s routine operations.
Then came OAuth 2.0, which solved a different but equally critical problem: the password anti-pattern. Before OAuth, if a third-party photo printing service needed to access your photos stored on another platform, you had to give them your username and password — your master key to everything.
The problems were brutal: the third-party service got your full authority rather than just what it needed, it had to store your password somewhere, and the only way to revoke its access was to change your password everywhere it was used.
OAuth introduced the concept of delegation through limited, temporary access tokens. Instead of handing over your password, you authenticate directly with the trusted service, which then issues the third-party app a token that says: “This app can read only the photos tagged ‘print’ for the next hour.” The token is scoped (limited permissions), time-bound (expires), and revocable (you can cancel it anytime).
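A simplified sketch of what such a token amounts to: scoped, time-bound, revocable. In reality an OAuth authorization server issues and validates these; the Python below just mimics the bookkeeping, and the scope names are made up:

```python
import secrets
import time

tokens = {}   # token value -> metadata (stands in for the authorization server's store)

def issue_token(scopes: set[str], ttl_seconds: int) -> str:
    """Issue an opaque token limited to specific scopes and a lifetime."""
    token = secrets.token_urlsafe(16)
    tokens[token] = {"scopes": scopes, "expires_at": time.time() + ttl_seconds}
    return token

def revoke(token: str) -> None:
    tokens.pop(token, None)

def check(token: str, required_scope: str) -> bool:
    meta = tokens.get(token)
    return bool(meta) and time.time() < meta["expires_at"] and required_scope in meta["scopes"]

t = issue_token({"photos:read:print-tagged"}, ttl_seconds=3600)
print(check(t, "photos:read:print-tagged"))   # True while the token is valid
print(check(t, "photos:delete"))              # False: outside the granted scope
revoke(t)
print(check(t, "photos:read:print-tagged"))   # False: revoked, even before expiry
```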
This separation, with the authorization layer sitting between the client app and your resources, was revolutionary. It's why you can safely click "Sign in with Google" without worrying about giving every website your Google password.
Now we’re in a new world. AI agents aren’t just reading files or executing predetermined functions: they’re making decisions, taking actions, and operating with a level of autonomy we’ve never dealt with before.
Traditional authorization models are fundamentally static. Even OAuth, brilliant as it is, assumes you can define the scope of access upfront. You know what the photo printing service needs (read photos), so you grant exactly that permission.
But what happens when an AI agent needs to:
- read a thread in your email,
- check attendees' calendars,
- book a room through the facilities system,
- pull a document from your drive, and
- process a payment?
Each of those actions touches different resources (email, calendar, facilities system, drive, payment system). The agent doesn't know ahead of time exactly which resources it'll need; that depends on the context of your request and what it discovers along the way.
Right now, most AI agent authorization falls into what I call the “all-or-nothing trap.” Either:
**Option 1: The Agent Gets Everything.** You grant the agent broad, sweeping permissions, essentially your own authority level. It can read your email, access your files, make API calls on your behalf, execute commands. This is terrifyingly insecure. If the agent has a bug, gets prompt-injected, or its credentials leak, an attacker now has full run of your digital kingdom. Remember that Solitaire example? The simple card game running with the authority to delete your entire hard drive? Same problem, but now the "card game" is a semi-autonomous AI that might hallucinate its instructions.
**Option 2: The Agent Gets (Almost) Nothing.** You lock it down with minimal permissions, forcing it to request approval for every single action. This destroys the entire value proposition. Why have an autonomous agent if you have to babysit every API call? The friction is so high that users either abandon the agent or, worse, override the restrictions to "get things done," creating shadow security risks.
Neither option is tenable for the long term.
This is where things get interesting. We need authorization systems that are as dynamic and context-aware as the agents themselves. Let’s look at what’s emerging.
ABAC is where modern authorization starts to match the sophistication of AI agents. Instead of asking “Is this user a manager?” (RBAC), ABAC asks: “Does this request satisfy a complex, multi-factor policy based on attributes of the subject, the resource, the action, and the environment?”
Here's how it works for our earlier example, nurse Nancy trying to access medical records.
The policy says: “IF subject.role = ‘nurse practitioner’ AND subject.department = resource.owning_department THEN permit view.”
The magic? When Nancy transfers to oncology next month, you don't touch the access rule. You don't touch the patient records. You just update her employee file (her subject attributes). The next time she tries accessing cardiology records, the policy evaluation fails automatically. She can now access oncology records instead, because that's where her department attribute now points.
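A minimal sketch of that policy in Python makes the point concrete (the attribute names mirror the policy text; the data is made up):

```python
# ABAC in miniature: the decision depends on attributes, not on a static list or role.
def can_view(subject: dict, resource: dict) -> bool:
    return (subject.get("role") == "nurse practitioner"
            and subject.get("department") == resource.get("owning_department"))

nancy = {"role": "nurse practitioner", "department": "cardiology"}
cardiology_record = {"owning_department": "cardiology"}
oncology_record = {"owning_department": "oncology"}

print(can_view(nancy, cardiology_record))   # True today
nancy["department"] = "oncology"            # the transfer: one attribute change
print(can_view(nancy, cardiology_record))   # False, automatically
print(can_view(nancy, oncology_record))     # True, automatically
```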
Pros of ABAC for AI Agents:
Cons of ABAC for AI Agents:
This is where things get really interesting for complex, interconnected systems. ReBAC, pioneered by Google in their Zanzibar system, takes a fundamentally different approach: it models authorization as a graph of relationships between users, resources, and groups.
Instead of asking “Does this user have permission?” or “Does this user have the right attributes?”, ReBAC asks: “Is there a path in the relationship graph that connects this user to this resource?”
Here’s how it works. Instead of storing giant ACLs or evaluating complex attribute policies, ReBAC models access as simple relationship tuples:
- `doc:proposal#viewer@user:alice` (Alice can view the proposal)
- `doc:proposal#viewer@group:eng#member` (anyone who's a member of the eng group can view the proposal)
- `doc:proposal#parent@folder:secret` (the proposal's parent is the secret folder, so its viewers can be inherited from that folder)

The beauty is in how these relationships compose. You can express: "The viewers of this document are anyone who is an editor of this document OR anyone who is a member of the parent folder's viewer group OR anyone in a group that has been granted viewer access."
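A toy Python version of that path-finding question helps make it concrete. It hard-codes the folder-inheritance rule, ignores Zanzibar's userset-rewrite language and consistency machinery entirely, and adds a couple of illustrative tuples for Bob and Carol; the point is only how "is there a path?" differs from a flat list lookup:

```python
# Zanzibar-style tuples written as (object, relation, subject).
tuples = {
    ("doc:proposal", "viewer", "user:alice"),
    ("doc:proposal", "viewer", "group:eng#member"),
    ("group:eng", "member", "user:bob"),
    ("doc:proposal", "parent", "folder:secret"),
    ("folder:secret", "viewer", "user:carol"),
}

def check(obj: str, relation: str, user: str) -> bool:
    # 1. Direct tuple?
    if (obj, relation, user) in tuples:
        return True
    # 2. Indirect via a userset subject such as group:eng#member.
    for (o, r, subject) in tuples:
        if o == obj and r == relation and "#" in subject:
            group_obj, group_rel = subject.split("#")
            if check(group_obj, group_rel, user):
                return True
    # 3. Hard-coded rewrite rule: viewers are inherited from the parent folder.
    if relation == "viewer":
        for (o, r, parent) in tuples:
            if o == obj and r == "parent" and check(parent, "viewer", user):
                return True
    return False

print(check("doc:proposal", "viewer", "user:alice"))   # True: direct tuple
print(check("doc:proposal", "viewer", "user:bob"))     # True: via group:eng#member
print(check("doc:proposal", "viewer", "user:carol"))   # True: inherited from folder:secret
```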
For AI agents operating in complex organizational structures, this is incredibly powerful. The agent needs to:
Google’s Zanzibar implementation handles trillions of these relationship tuples and processes millions of permission checks per second with sub-10-millisecond latency. That’s the scale needed when your AI agent might need to check permissions across Drive, Gmail, Calendar, and YouTube simultaneously.
The key innovation for agents is what Zanzibar calls consistency tokens (Zookies). Before an agent modifies content — say, editing a document — it requests a Zookie from the authorization system. That Zookie encodes a timestamp. When the agent later accesses that content, it sends the Zookie back, essentially saying: “Evaluate my permissions using a snapshot at least as fresh as this timestamp.”
This guarantees the agent never sees stale permissions relative to the content it’s accessing. If you remove Bob’s access at time T1, then share new confidential content at time T2, the system ensures the T1 action is processed before T2 — even across different continents and services. This prevents the “new enemy problem” where authorization lags dangerously behind reality.
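Here's a deliberately toy sketch of that contract. Real Zookies are opaque tokens; an integer ACL version stands in for them here, purely to show why a lagging replica can't answer from stale data:

```python
# Each entry in acl_log is a snapshot of allowed (user, doc) pairs at one version.
acl_log = [{("bob", "doc:plan")}]        # version 0: Bob can view doc:plan

def revoke(user: str, doc: str) -> int:
    snapshot = set(acl_log[-1])
    snapshot.discard((user, doc))
    acl_log.append(snapshot)
    return len(acl_log) - 1              # version 1: the revocation (time T1)

def zookie_for_content_write() -> int:
    return len(acl_log) - 1              # the agent grabs this before editing (time T2)

def check(user: str, doc: str, zookie: int, replica_version: int) -> bool:
    if replica_version < zookie:
        # A lagging replica must catch up rather than answer from a stale snapshot.
        replica_version = len(acl_log) - 1
    return (user, doc) in acl_log[replica_version]

revoke("bob", "doc:plan")                          # T1: Bob loses access
zk = zookie_for_content_write()                    # T2: new confidential content is written
print(check("bob", "doc:plan", zk, replica_version=0))   # False, even asked of a stale replica
```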
Pros of ReBAC for AI Agents:
Cons of ReBAC for AI Agents:
This is where things get really elegant for AI agents. Instead of asking “Does this agent have permission?”, you make the permission itself the unforgeable token the agent must possess.
Think of it this way: Traditional systems are name-centric. You say “access file foo.txt,” and the system looks you up in some global directory to see if you’re allowed. Capability systems are key-centric. You present a cryptographic token (the capability) that is both the designation (what you’re accessing) and the authority (the right to access it) bundled together.
When you want to give an AI agent access to your Carol resource, you don't update some central ACL. You create a small security-enforcing program, a caretaker, that sits in front of Carol. The caretaker gives you two things back: a new reference (call it Carol2) that forwards messages to Carol, which is what you hand to the agent, and a revocation gate that only you keep.
When the agent sends messages to Carol2, the caretaker checks an internal switch. If it’s on, messages get forwarded to Carol. If it’s off, they’re dropped. The beautiful part? The agent’s permission to talk to Carol2 never changed, it still has that reference. But your ability to cut off its authority is instant. You call the revocation gate’s disable method, and the agent’s access to Carol evaporates, even though it still “has the key.”
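A minimal object-capability sketch of that caretaker pattern, with Carol as a stand-in resource and the names taken from the example above (this illustrates the idea, not any particular capability framework):

```python
class Carol:
    """The protected resource."""
    def handle(self, message: str) -> str:
        return f"Carol processed: {message}"

def make_caretaker(target):
    """Return (forwarder, revoke): hand the forwarder to the agent, keep revoke yourself."""
    switch = {"on": True}

    class Forwarder:                        # this is "Carol2", the reference the agent holds
        def handle(self, message: str) -> str:
            if not switch["on"]:
                raise PermissionError("access revoked")
            return target.handle(message)   # forwarded only while the switch is on

    def revoke() -> None:
        switch["on"] = False                # the revocation gate's disable method

    return Forwarder(), revoke

carol2, revoke = make_caretaker(Carol())
print(carol2.handle("schedule the meeting"))   # works: messages are forwarded to Carol
revoke()                                        # you flip the switch
try:
    carol2.handle("read the files")
except PermissionError as err:
    print(err)   # the agent still "has the key," but it no longer opens Carol
```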
Pros for AI Agents:
Cons for AI Agents:
The future for AI agent authorization isn’t picking one model, it’s thoughtfully combining them.
Imagine an architecture where:
- ABAC policies evaluate attributes of the agent, the user, the resource, and the environment,
- ReBAC relationship graphs verify how the user and the resource are actually connected,
- OAuth-style scoping and expiration keep every grant limited and time-bound,
- capabilities carry the resulting authority as unforgeable, revocable references, and
- PBAC-style obligations make every grant auditable.
The policy might look like:
```
IF   agent.purpose = "meeting_scheduler"
AND  user.has_attribute("calendar_delegation_approved")
AND  relationship_exists(user, "member", "scheduling_team")
AND  resource.classification <= user.clearance_level
AND  environment.time IN business_hours
AND  environment.network = "corporate_vpn"
THEN grant_capability(calendar.read, calendar.write)
WITH obligation(audit_log.record(agent_id, action, timestamp))
```
The agent gets a capability (unforgeable reference) but only after the ABAC policy is satisfied, the ReBAC relationship is verified, the OAuth-style time constraints are met, and only with mandatory audit obligations enforced.
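Sketched as Python, the composition might look something like the following. Every name here (rebac_check, mint_capability, audit_log, the attribute keys) is a placeholder for whatever systems you actually run, not a real API:

```python
from datetime import datetime, time as dtime

def grant_calendar_capability(agent, user, resource, environment,
                              rebac_check, mint_capability, audit_log):
    """Mint a scoped capability for the agent only if every layer agrees."""
    allowed = (
        agent["purpose"] == "meeting_scheduler"                          # agent identity
        and "calendar_delegation_approved" in user["attributes"]         # ABAC attribute
        and rebac_check(user["id"], "member", "scheduling_team")         # ReBAC relationship
        and resource["classification"] <= user["clearance_level"]        # ABAC attribute
        and dtime(9, 0) <= environment["time"] <= dtime(18, 0)           # environmental: hours
        and environment["network"] == "corporate_vpn"                    # environmental: network
    )
    if not allowed:
        return None
    capability = mint_capability(scopes={"calendar.read", "calendar.write"})   # capability grant
    audit_log(agent_id=agent["id"], action="grant_calendar_capability",        # PBAC obligation
              timestamp=datetime.now().isoformat())
    return capability
```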
If you’re building with AI agents today:
Don't default to giving the agent your full authority. That's the Solitaire-running-as-admin antipattern all over again. Instead, start from least privilege: grant narrow, task-scoped, time-bound access, and keep revocation one call away.
Think beyond roles; think relationships and attributes. Agent needs change request-by-request. Static roles won't cut it. Design your policies around attributes of the subject, the resource, the action, and the environment, plus the relationships that connect them.
Embrace ReBAC for organizational complexity. If your agents need to navigate team structures, folder hierarchies, or delegation chains, relationship graphs are your friend. They naturally model how organizations actually work.
Plan for dynamic trust chains. Your authorization system isn’t just checking one thing anymore. It’s coordinating attribute authorities (HR, security, compliance), relationship graphs, resource metadata, and environmental sensors. Document your trust assumptions. Know where your data comes from and how fresh it is.
Use PBAC for governance, not routine operations. Explicit policies are essential for auditing high-stakes decisions. But don’t make every agent action go through heavyweight policy evaluation — you’ll kill performance. Reserve PBAC for the decisions that matter most.
Embrace the “before-the-fact audit” challenge. Yes, it’s hard to answer “who can access X?” in an ABAC or ReBAC world. But that’s the price of dynamic, context-aware security. Invest in simulation tools that can answer “who could access X under what conditions?” Build audit trails that capture not just “what happened” but “why was it allowed?”
Authorization for AI agents isn’t a solved problem, it’s an evolving one. The traditional models gave us important pieces: OAuth taught us delegation and scoping, RBAC taught us organizational structure, PBAC taught us explicit governance, ABAC taught us contextual decisions, ReBAC taught us relationship composition, and capabilities taught us unforgeable authority.
The challenge now is composing these pieces thoughtfully. AI agents are too powerful and too autonomous to rely on all-or-nothing access. We need authorization systems that match their dynamic nature, granting just enough authority, just in time, with continuous verification and effortless revocation.
The good news? The building blocks exist. Google has proven ReBAC works at planetary scale. ABAC systems handle millions of contextual decisions daily. Capability-based systems provide mathematical guarantees about confinement. The hard part is architecting them together in ways that serve both security and usability.
Because at the end of the day, the goal isn’t to lock everything down or throw the doors wide open.
It’s to enable the right agent to do the right thing at the right time and nothing more.