Granting permissions – RBAC

One of the enormous advantages of Kubernetes is the delegation of permissions to perform tasks that would ordinarily require administrative-level access, like root on Linux.

Now that we know who our users are, we are able to set up access controls.

Like everything in Kubernetes, authorization is pluggable and supports many different possible implementations. In a future post, we will take a look at a non-native implementation of authorization, but for now, let’s look at the Kubernetes native one.

RBAC

Most of the content of this post is a reiteration of the excellent RBAC documentation available on the Kubernetes web site.

Let’s start with what RBAC is: Role Based Access Control. What that means is simply that users (and service accounts) are bound to roles, and roles specify permissions to resources.

Types of role

There are two kinds of role with different scopes: ClusterRole and Role.

A ClusterRole specifies permissions to namespace- or cluster-scoped resources, such as Pods and Namespaces respectively. A Role can only specify permissions to namespace-scoped resources, such as Pods and Services.

Here is an example ClusterRole:

---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  # "namespace" omitted since ClusterRoles are not namespaced
  name: cluster-admin
  labels:
    beyondthekube.com/bootstrapping: rbac-defaults
rules:
- apiGroups:
  - '*'
  resources:
  - '*'
  verbs:
  - '*'
- nonResourceURLs:
  - '*'
  verbs:
  - '*'
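
As an aside, the rules here are identical to those of the built-in cluster-admin ClusterRole that ships with Kubernetes, so on a real cluster you would normally bind to the built-in role rather than redefining it; it just makes a nice self-contained example.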

And an example Role:

---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: beyond
  name: pod-patcher
  labels:
    beyondthekube.com/bootstrapping: rbac-defaults
rules:
- apiGroups:
  - ''
  resources:
  - pods
  verbs:
  - patch

Once roles have been defined, we need to create another resource: a RoleBinding for Roles, or a ClusterRoleBinding for ClusterRoles. This is the mechanism by which we associate, or ‘bind’, users to roles.

Example ClusterRoleBinding:

---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: cluster-admin
subjects:
- kind: User
  name: bob@beyondthekube.com
  apiGroup: rbac.authorization.k8s.io
- kind: User
  name: anna@beyondthekube.com
  apiGroup: rbac.authorization.k8s.io
- kind: Group
  name: those-other-admins.beyondthekube.com
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
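
One design point worth knowing about bindings: the roleRef is immutable. If you need to point an existing binding at a different role, you must delete and recreate the binding; only the subjects list can be changed in place.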

Example RoleBinding:

---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  # RoleBindings are namespaced; this one must live in the same
  # namespace as the Role it references
  namespace: beyond
  name: pod-patcher
subjects:
- kind: User
  name: bob@beyondthekube.com
  apiGroup: rbac.authorization.k8s.io
- kind: User
  name: anna@beyondthekube.com
  apiGroup: rbac.authorization.k8s.io
- kind: Group
  name: those-other-patchers.beyondthekube.com
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-patcher
  apiGroup: rbac.authorization.k8s.io
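
Subjects are not limited to users and groups. As a minimal sketch, assuming a hypothetical service account named ci-deployer exists in the beyond namespace, the same Role can be bound to it like this:

---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: beyond
  name: pod-patcher-sa
subjects:
# ServiceAccount subjects take a namespace rather than an apiGroup,
# since service accounts live in the core API group
- kind: ServiceAccount
  name: ci-deployer  # hypothetical service account
  namespace: beyond
roleRef:
  kind: Role
  name: pod-patcher
  apiGroup: rbac.authorization.k8s.io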

Role aggregations

Role aggregations are a fancy way of including many role definitions in a ‘meta’ role. They allow permissions to be specified across many separate roles, then unioned together in an aggregation. There are two parts to this: one is specifying the aggregation role with a label selector, and the other is adding the matching label(s) to the roles you want to include.

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: beyondthekube-admin
aggregationRule:
  clusterRoleSelectors:
  - matchLabels:
      beyondthekube.com/rbac-aggregations: "admin"
rules: []
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: namespace-admin
  labels:
    beyondthekube.com/rbac-aggregations: "admin"
rules:
- apiGroups:
  - ""
  resources:
  - "*"
  verbs:
  - "get"
  - "list"
  - "watch"
  - "update"
  - "create"
  - "delete"

Conclusion

RBAC provides an extremely rich permission definition framework that can itself be delegated to other administrators, and augmented with resources provided by Helm charts and the like.

There are issues with RBAC in terms of multi-cluster administration, and it does not integrate well with group-providing services like LDAP and Active Directory. In a coming post, I will demonstrate ways to improve that, both with and without standard RBAC.

Identity management Part 2 :: Configuring Keycloak

Back in part 1, we installed Keycloak on top of Kubernetes. Now we want to configure it to generate OIDC tokens based on our (hopefully) existing authentication backend.

In this example, I’m going to use Active Directory, but the setup is similar for any LDAP server, and Keycloak also supports most cloud identity providers, plain SAML and so on.

No user identity system?

If you are following along at home, and do not have an existing identity provider like Active Directory, then simply skip the ‘Configuring AD’ section and add a user directly in Keycloak. You can do this by clicking Manage->Users, then ‘Add user’. Make sure you turn ‘Email Verified’ on.

Once the user is created, you can impersonate them and set a password in the Keycloak admin console. See the Keycloak documentation for more details.

Now skip over to ‘Configuring OIDC’.

Configuring AD

Configuring Keycloak to federate AD users requires logging into the admin console and clicking on ‘User Federation’.

User federation options

Select LDAP as your provider.

You will need to have your AD settings to hand, and these will differ depending on your setup. Most of these parameters are standard to any AD integration, so you should be able to reference them from an existing system or ask an AD administrator for them.
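
To give a rough idea of what to collect, expect the form to ask for values along these lines (the field names are from Keycloak’s LDAP provider; the values are purely illustrative):

Vendor: Active Directory
Connection URL: ldaps://ad.example.com:636
Users DN: CN=Users,DC=example,DC=com
Bind DN: CN=keycloak-svc,CN=Users,DC=example,DC=com
Bind Credential: <password of the bind account>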

When you have entered all of the required values and tested the connection, you can save this setup.

Configuring OIDC

Add an OIDC client configuration

Start by clicking ‘Create’ to add a new client.

Client list

Give your client a name (it can be anything you want). Click ‘Save’.

Force enable verified email

Kubernetes will refuse to work with any identity for which the email is not verified, so we now need to create a mapping to force that. We can do this because we presumably trust that the emails contained within our Active Directory are real.

From the clients list, click the name of your newly-created client. Now click the ‘Mappers’ tab, and finally ‘Create’.

Mappers of the client

Enter the values as below, and click ‘Save’.
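
For reference, one way to force the claim is a mapper of type ‘Hardcoded claim’ with Token Claim Name email_verified, Claim value true, and Claim JSON Type boolean, added to the ID token.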

That’s it! In Part 3 we will tie this all back to the kube-apiserver and test authentication.