RBAC: Who Can Do What
- Additive Only: Kubernetes RBAC has no "deny" rules. Permissions start at zero and are granted through Roles. When a subject holds multiple roles, the effective permissions are the union of everything those roles grant.
- Subjects: Three types of identities can be granted permissions -- Users (managed externally), Groups (from your identity provider), and ServiceAccounts (managed by Kubernetes for pod-to-API-server communication).
- Scope: Roles grant permissions within a single namespace. ClusterRoles grant permissions cluster-wide or on non-namespaced resources (nodes, persistent volumes, namespaces themselves).
- Bindings: A RoleBinding connects a subject to a Role within a namespace. A ClusterRoleBinding connects a subject to a ClusterRole across the entire cluster. You can also use a RoleBinding to reference a ClusterRole, scoping it to a single namespace.
- Default ClusterRoles: Kubernetes ships with `cluster-admin`, `admin`, `edit`, and `view` ClusterRoles. Use these as building blocks rather than creating everything from scratch.
- ServiceAccounts: Every pod runs as a ServiceAccount. The default ServiceAccount has minimal permissions, but any additional API access must be explicitly granted via RBAC.
Kubernetes security follows the Principle of Least Privilege: grant only the minimum permissions required, and nothing more.
0. The "Default Deny" Philosophy
In Kubernetes, permissions are additive only. There are no "Deny" rules.
- Everything starts with zero permissions.
- You add "Allow" rules via Roles.
- If a user has two Roles, and one allows "Delete" while the other does not mention it, the user can delete. Permissions are the union of all granted rules.
Rule of Thumb: If you do not explicitly grant it, it is forbidden.
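To make the union behavior concrete, here is a hypothetical pair of Roles (the names and the `demo` namespace are made up for illustration). A subject bound to both can get, list, and delete pods, even though neither Role grants all three verbs by itself:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader            # hypothetical: grants read access only
  namespace: demo
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-deleter           # hypothetical: grants delete only
  namespace: demo
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["delete"]
```

There is no rule anywhere that could subtract the `delete` verb back out; removing a permission means removing or editing the Role (or binding) that grants it.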
1. Namespaces: The Virtual Cluster
A Namespace allows you to partition a single physical cluster into multiple virtual clusters. Namespaces are the primary boundary for RBAC scope.
Why Use Them?
- Isolation: Separate `dev`, `staging`, and `prod` environments within the same cluster.
- Access Control: Give "Team A" full control over `namespace-a` but no visibility into `namespace-b`.
- Resource Quotas: Limit how much CPU and memory a namespace can consume (see Resources & HPA).
- Network Policies: Restrict which pods can communicate across namespaces.
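As a quick sketch of the quota idea (the name and values below are placeholders; Resources & HPA covers this in depth), a ResourceQuota caps the aggregate resources a namespace may request:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-quota            # hypothetical name
  namespace: team-frontend
spec:
  hard:
    requests.cpu: "10"        # total CPU requests across all pods
    requests.memory: 20Gi     # total memory requests across all pods
```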
Common Namespaces
- `default` -- Where resources go if you do not specify a namespace. Avoid using this for production workloads.
- `kube-system` -- System components (API server, CoreDNS, kube-proxy). Do not deploy application workloads here.
- `kube-public` -- Readable by everyone, including unauthenticated users. Contains the cluster-info ConfigMap. Rarely used for anything else.
- `kube-node-lease` -- Contains Lease objects for node heartbeats. Managed by the system.
Creating and Using Namespaces
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: team-frontend
  labels:
    team: frontend
    environment: production
```

```shell
# Create the namespace
kubectl apply -f namespace.yaml

# List all namespaces
kubectl get namespaces

# Set your default namespace for kubectl
kubectl config set-context --current --namespace=team-frontend
```
2. Role-Based Access Control (RBAC)
Security in Kubernetes is deny-by-default. No one -- user or pod -- can do anything unless they are explicitly granted permission.
The relationship chain is: Subject (User) -> RoleBinding -> Role -> Permissions.
Role vs ClusterRole
This distinction is one of the most common sources of confusion.
| Feature | Role | ClusterRole |
|---|---|---|
| Scope | Single namespace | Entire cluster |
| Binding | RoleBinding | ClusterRoleBinding (or RoleBinding for namespace-scoped grant) |
| Manages | Namespaced resources (pods, services, deployments) | Cluster-wide resources (nodes, PVs, namespaces) + namespaced resources |
| Use Case | App developer, team member | Cluster admin, CI/CD pipeline, monitoring system |
Role YAML Example
A Role that allows a developer to manage pods and services within a namespace:
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: developer
  namespace: team-frontend     # This Role only applies in this namespace
rules:
  # Allow full CRUD on pods, including logs and exec
  - apiGroups: [""]            # "" is the core API group
    resources: ["pods", "pods/log", "pods/exec"]
    verbs: ["get", "list", "watch", "create", "update", "delete"]
  # Allow managing services
  - apiGroups: [""]
    resources: ["services"]
    verbs: ["get", "list", "watch", "create", "update", "delete"]
  # Allow managing deployments and replicasets
  - apiGroups: ["apps"]
    resources: ["deployments", "replicasets"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
  # Allow read-only access to configmaps and secrets
  - apiGroups: [""]
    resources: ["configmaps", "secrets"]
    verbs: ["get", "list", "watch"]
```
ClusterRole YAML Example
A ClusterRole that allows a monitoring system to read pods and nodes across all namespaces:
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: monitoring-reader
  # No namespace -- ClusterRoles are not namespaced
rules:
  # Read pods, nodes, services, and endpoints in all namespaces
  - apiGroups: [""]
    resources: ["pods", "nodes", "services", "endpoints"]
    verbs: ["get", "list", "watch"]
  # Read metrics
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list"]
  # Read namespaces (a non-namespaced resource)
  - apiGroups: [""]
    resources: ["namespaces"]
    verbs: ["get", "list"]
```
3. Bindings: Connecting Subjects to Roles
Roles are useless on their own. A Binding connects a subject (User, Group, or ServiceAccount) to a Role.
RoleBinding (Namespace-Scoped)
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: alice-developer
  namespace: team-frontend
subjects:
  - kind: User
    name: alice                # Name from your identity provider
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: developer              # References the Role created above
  apiGroup: rbac.authorization.k8s.io
```
Result: Alice can manage pods, services, and deployments in team-frontend, but has no access to any other namespace.
ClusterRoleBinding (Cluster-Scoped)
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: monitoring-global
subjects:
  - kind: ServiceAccount
    name: prometheus
    namespace: monitoring      # ServiceAccounts are namespaced
roleRef:
  kind: ClusterRole
  name: monitoring-reader
  apiGroup: rbac.authorization.k8s.io
```
Result: The prometheus ServiceAccount in the monitoring namespace can read pods and nodes across every namespace.
RoleBinding Referencing a ClusterRole
A powerful pattern: use a ClusterRole (which defines permissions for common use cases) but scope it to a single namespace using a RoleBinding.
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: bob-view-only
  namespace: team-backend
subjects:
  - kind: User
    name: bob
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: view                   # Built-in ClusterRole
  apiGroup: rbac.authorization.k8s.io
```
Result: Bob has read-only access to everything in team-backend, but nothing else. The view ClusterRole is reused without duplication.
4. ServiceAccounts for Pods
Every pod runs as a ServiceAccount. If you do not specify one, it uses the default ServiceAccount in the pod's namespace.
Why Create Custom ServiceAccounts?
The default ServiceAccount has minimal permissions. If your application needs to interact with the Kubernetes API (e.g., a controller that lists pods, a CI/CD tool that creates deployments), you must:
- Create a ServiceAccount.
- Create a Role or ClusterRole with the necessary permissions.
- Bind them together.
- Assign the ServiceAccount to your pod.
```yaml
# Step 1: Create a ServiceAccount
apiVersion: v1
kind: ServiceAccount
metadata:
  name: deployment-manager
  namespace: ci-cd
---
# Step 2: Create a Role in the target namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: deploy-role
  namespace: team-frontend
rules:
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "list", "update", "patch"]
---
# Step 3: Bind them
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ci-deploy-binding
  namespace: team-frontend
subjects:
  - kind: ServiceAccount
    name: deployment-manager
    namespace: ci-cd           # Cross-namespace reference
roleRef:
  kind: Role
  name: deploy-role
  apiGroup: rbac.authorization.k8s.io
```

```yaml
# Step 4: Use the ServiceAccount in a Pod
apiVersion: v1
kind: Pod
metadata:
  name: deploy-bot
  namespace: ci-cd
spec:
  serviceAccountName: deployment-manager   # Run as this identity
  automountServiceAccountToken: true       # Mount the API token
  containers:
    - name: deployer
      image: bitnami/kubectl:latest
      command: ["kubectl", "rollout", "restart", "deployment/web-app", "-n", "team-frontend"]
```
Disabling Auto-Mounted Tokens
By default, Kubernetes mounts an API token into every pod. If your application does not need Kubernetes API access, disable this to reduce the attack surface:
```yaml
spec:
  automountServiceAccountToken: false
```
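The same field can also be set on the ServiceAccount itself, so every pod using it opts out by default (a pod's own `automountServiceAccountToken` setting takes precedence over the ServiceAccount's). A sketch, with a hypothetical name:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: no-api-access          # hypothetical ServiceAccount name
  namespace: team-frontend
automountServiceAccountToken: false
```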
Bound Service Account Token Volume Projection
Since Kubernetes 1.22, ServiceAccount tokens mounted into Pods are bound tokens by default. Unlike legacy tokens, bound tokens are:
- Time-limited: Expire after 1 hour by default (automatically rotated by the kubelet before expiry).
- Audience-scoped: Issued for a specific audience (the API server), preventing token reuse against other services.
- Bound to the Pod: Invalidated when the Pod is deleted, preventing stolen tokens from being used after the workload is gone.
This is a significant security improvement over the legacy model, where ServiceAccount tokens were stored as Secrets with no expiration.
If your cluster was upgraded from an older version, you may still have legacy ServiceAccount token Secrets that never expire. Audit your cluster for these:
```shell
# Find all legacy SA token secrets (these have no expiration)
kubectl get secrets --all-namespaces --field-selector type=kubernetes.io/service-account-token

# For each one, check whether any pod still references it before removing it
kubectl get pods --all-namespaces -o yaml | grep "<secret-name>"
```
Migration: Replace legacy token Secrets with the TokenRequest API or projected volume mounts. New workloads automatically use bound tokens — no action needed for newly created Pods.
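For workloads that need a token with a custom audience or lifetime (for example, to authenticate to an external service rather than the API server), you can request one explicitly with a projected volume. A sketch -- the pod name, image, audience, and paths below are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: vault-client           # hypothetical pod
  namespace: team-frontend
spec:
  containers:
    - name: app
      image: myapp:latest      # placeholder image
      volumeMounts:
        - name: sa-token
          mountPath: /var/run/secrets/tokens
  volumes:
    - name: sa-token
      projected:
        sources:
          - serviceAccountToken:
              path: token
              audience: vault          # token is valid only for this audience
              expirationSeconds: 600   # 10-minute lifetime, rotated by kubelet
```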
5. Default ClusterRoles
Kubernetes ships with four built-in ClusterRoles that cover most common access patterns:
| ClusterRole | Permissions |
|---|---|
| `cluster-admin` | Full access to everything. Equivalent to root. |
| `admin` | Full access within a namespace, including RBAC management. Cannot modify quotas or the namespace itself. |
| `edit` | Read/write on most resources within a namespace. Cannot view or modify Roles or RoleBindings. |
| `view` | Read-only access to most resources. Cannot view Secrets (to prevent credential exposure). |
Use these as building blocks by referencing them in RoleBindings:
```shell
# Give the "developers" group edit access in the staging namespace
kubectl create rolebinding dev-edit \
  --clusterrole=edit \
  --group=developers \
  --namespace=staging
```
6. Aggregated ClusterRoles
Aggregated ClusterRoles automatically combine rules from multiple ClusterRoles using label selectors. This is the mechanism the built-in admin, edit, and view roles use to include permissions for Custom Resource Definitions (CRDs).
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: my-crd-viewer
  labels:
    # This label causes the rules to be aggregated into the built-in "view" role
    rbac.authorization.k8s.io/aggregate-to-view: "true"
rules:
  - apiGroups: ["mycompany.io"]
    resources: ["widgets"]
    verbs: ["get", "list", "watch"]
```
When you install this ClusterRole, anyone who already has the view ClusterRole automatically gains read access to widgets -- no additional RoleBindings needed.
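The aggregating side of this mechanism is an `aggregationRule` on the parent ClusterRole: the controller manager unions the rules of every ClusterRole matching the label selectors into the parent's `rules` field. A simplified sketch with a hypothetical parent role:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: my-aggregated-viewer   # hypothetical parent role
aggregationRule:
  clusterRoleSelectors:
    - matchLabels:
        rbac.authorization.k8s.io/aggregate-to-view: "true"
rules: []   # filled in automatically by the controller manager; do not edit by hand
```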
7. Testing and Debugging Permissions
kubectl auth can-i
The most important RBAC debugging tool. Test what a user or ServiceAccount can do without logging in as them:
```shell
# Can I create pods in the current namespace?
kubectl auth can-i create pods

# Can Alice delete deployments in team-frontend?
kubectl auth can-i delete deployments --as alice --namespace team-frontend

# Can the CI ServiceAccount update deployments?
kubectl auth can-i update deployments \
  --as system:serviceaccount:ci-cd:deployment-manager \
  --namespace team-frontend

# List all permissions for a user in a namespace
kubectl auth can-i --list --as alice --namespace team-frontend
```
What RBAC Denial Looks Like
Understanding the error message format helps you diagnose RBAC issues quickly. Here are the most common denial messages:
Example 1: User denied listing Pods in a namespace
```
Error from server (Forbidden): pods is forbidden: User "alice" cannot list
resource "pods" in API group "" in the namespace "kube-system"
```
Example 2: ServiceAccount denied a cluster-scope operation
```
Error from server (Forbidden): nodes is forbidden: User
"system:serviceaccount:ci-cd:deployment-manager" cannot list resource
"nodes" in API group "" at the cluster scope
```
Anatomy of an RBAC error message:
- Who: The user or ServiceAccount identity (`User "alice"` or `User "system:serviceaccount:<ns>:<name>"`).
- What action: The verb that was denied (`cannot list`, `cannot create`, `cannot delete`).
- Which resource and API group: The resource type and its API group (`resource "pods" in API group ""` -- the empty string means the core API group; `API group "apps"` means the apps group).
- What scope: Either `in the namespace "X"` (namespaced operation) or `at the cluster scope` (cluster-wide operation).
If you see "at the cluster scope" in the error message, the subject tried to perform a cluster-wide operation but only has a namespaced Role (via RoleBinding). The fix is usually one of:
- Use a ClusterRoleBinding instead of a RoleBinding if cluster-wide access is intended.
- Add a `--namespace` flag to the command if the subject only needs namespace-scoped access.
User Impersonation
Administrators can impersonate users and groups to test their exact experience:
```shell
# Run any kubectl command as another user
kubectl get pods --namespace team-backend --as bob --as-group backend-developers

# Check if a group has specific access
kubectl auth can-i create services --as-group frontend-team --namespace team-frontend
```
Real-World Multi-Team Setup
Here is how a typical organization structures RBAC for three teams sharing one cluster:
- Namespace per team: `team-frontend`, `team-backend`, `team-data`.
- Each team gets `edit` access in their own namespace via a RoleBinding to the built-in `edit` ClusterRole.
- Cross-team visibility: Each team gets `view` access in other team namespaces (optional, depending on trust model).
- CI/CD ServiceAccount: A dedicated ServiceAccount in a `ci-cd` namespace with RoleBindings to `edit` in each team namespace, limited to deployments and configmaps.
- SRE team: Gets `admin` access across all namespaces via ClusterRoleBinding.
- ResourceQuotas: Each team namespace has CPU and memory quotas to prevent one team from starving others.
- Network Policies: Restrict cross-namespace traffic to only approved communication paths.
Bootstrapping a New Team Namespace
Here is a complete YAML manifest that creates a namespace and configures RBAC for a new team:
```yaml
# Namespace
apiVersion: v1
kind: Namespace
metadata:
  name: team-payments
  labels:
    team: payments
---
# Give the payments team edit access
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: payments-edit
  namespace: team-payments
subjects:
  - kind: Group
    name: payments-developers   # Group from your identity provider
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: edit
  apiGroup: rbac.authorization.k8s.io
---
# Give the SRE team admin access in this namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: sre-admin
  namespace: team-payments
subjects:
  - kind: Group
    name: sre-team
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: admin
  apiGroup: rbac.authorization.k8s.io
---
# Give other teams read-only access
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: cross-team-view
  namespace: team-payments
subjects:
  - kind: Group
    name: all-developers
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: view
  apiGroup: rbac.authorization.k8s.io
```
This pattern can be templated with Helm or Kustomize and applied via GitOps whenever a new team is onboarded.
Common Pitfalls
1. Using `cluster-admin` for applications. If your CI/CD pipeline has `cluster-admin`, a compromised pipeline can delete the entire cluster. Grant only the specific verbs and resources needed.
2. Forgetting that ServiceAccounts are namespaced. A ServiceAccount named `my-sa` in namespace A is a completely different identity from `my-sa` in namespace B. When referencing ServiceAccounts in bindings, always include the namespace.
3. Overly broad wildcard rules. Rules like `resources: ["*"]` with `verbs: ["*"]` grant permissions on every current and future resource type. This silently expands as new CRDs are installed.
4. Not auditing RBAC regularly. Permissions accumulate over time. Former team members retain access, old ServiceAccounts remain bound. Use `kubectl auth can-i --list` to audit periodically.
5. Confusing RoleBinding and ClusterRoleBinding. Using a ClusterRoleBinding when you intended namespace-scoped access gives the subject permissions across every namespace. Double-check your binding type.
6. Granting `escalate` or `bind` verbs inadvertently. If a Role includes the `escalate` verb on roles or the `bind` verb on rolebindings, the subject can grant themselves any permission -- effectively becoming admin. These verbs should be reserved for cluster administrators only.
7. Not restricting access to Secrets. The built-in `edit` ClusterRole grants read/write access to Secrets. If a team should not see credentials from other teams, ensure they are isolated in separate namespaces with appropriate RBAC boundaries. Consider using external secrets managers for highly sensitive data.
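If you do need to delegate binding rights (pitfall 6), RBAC lets you constrain the `bind` verb to specific roles via `resourceNames`. A hedged sketch -- the role name and namespace are hypothetical:

```yaml
# Allows creating RoleBindings, but only ones that reference the "view" ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: limited-binder         # hypothetical name
  namespace: team-frontend
rules:
  - apiGroups: ["rbac.authorization.k8s.io"]
    resources: ["rolebindings"]
    verbs: ["create"]
  - apiGroups: ["rbac.authorization.k8s.io"]
    resources: ["clusterroles"]
    verbs: ["bind"]
    resourceNames: ["view"]    # cannot bind anything more privileged
```

The subject can hand out read-only access but cannot bind `edit`, `admin`, or `cluster-admin`, keeping the delegation within safe bounds.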
Best Practices
- Start with built-in ClusterRoles. Use `view`, `edit`, and `admin` before creating custom roles. They cover most use cases and are maintained by the Kubernetes community.
- One ServiceAccount per application. Do not share ServiceAccounts across unrelated workloads. If one is compromised, the blast radius should be limited.
- Disable auto-mounted tokens on pods that do not need Kubernetes API access. This eliminates an unnecessary attack vector.
- Use Groups instead of individual Users in bindings. Manage group membership in your identity provider (OIDC, LDAP, Active Directory) rather than creating individual RoleBindings per user.
- Audit with `kubectl auth can-i --list`. Run this periodically for critical ServiceAccounts to verify they have not accumulated excess permissions.
- Never grant `cluster-admin` to applications. If an app needs cluster-wide read access, create a custom ClusterRole with only `get`, `list`, `watch` on the specific resources it needs.
- Document your RBAC model. Keep a living document or diagram showing which teams, ServiceAccounts, and automation have access to which namespaces. This is essential for incident response.
- Use namespaced Roles by default. Only create ClusterRoles when the use case genuinely requires cluster-wide scope (e.g., monitoring, node management). When in doubt, start with a Role scoped to one namespace and expand later if needed.
- Manage RBAC through GitOps. Store all Role, ClusterRole, and Binding manifests in version control. Apply them through a CI/CD pipeline so that changes are reviewed, auditable, and reversible. Manual `kubectl create rolebinding` commands are convenient but leave no audit trail.
What's Next?
- Ingress (Routing) -- Understand how external traffic reaches your services, and how Ingress controllers use RBAC to watch for resource changes.
- Resources & HPA -- Pair RBAC with ResourceQuotas to enforce both access control and resource limits per namespace.
- Pod Security -- Restrict what pods can do at the kernel level (privileged containers, host networking, etc.) to complement RBAC.
- Security -- Broader security topics including network policies, secrets management, and supply chain security.
- Multi-Tenancy -- Advanced patterns for sharing clusters across teams and organizations.
- Secrets Management -- Secure handling of sensitive data that RBAC alone cannot fully protect.