Granting permissions – RBAC

One of the enormous advantages of Kubernetes is the delegation of permission to perform tasks that would ordinarily require administrative-level access, like root on Linux.

Now that we know who our users are, we are able to set up access controls.

Like everything in Kubernetes, authorization is pluggable and supports many different possible implementations. In a future post, we will take a look at a non-native implementation of authorization, but for now, let’s look at the Kubernetes native one.

RBAC

Most of the content of this post is a reiteration of the excellent RBAC documentation available on the Kubernetes web site.

Let's start with what RBAC is: Role-Based Access Control. What that means is simply that users (and service accounts) are bound to roles, and roles specify permissions to resources.

Types of role

There are two kinds of role with different scopes: ClusterRole and Role.

A ClusterRole specifies permissions to either namespace- or cluster-scoped resources, such as Pods and Namespaces. A Role can only specify permissions to namespace-scoped resources, such as Pods and Services.

Here is an example ClusterRole:

---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  # "namespace" omitted since ClusterRoles are not namespaced
  name: cluster-admin
  labels:
    beyondthekube.com/bootstrapping: rbac-defaults
rules:
- apiGroups:
  - '*'
  resources:
  - '*'
  verbs:
  - '*'
- nonResourceURLs:
  - '*'
  verbs:
  - '*'

And an example Role:

---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: beyond
  name: pod-patcher
  labels:
    beyondthekube.com/bootstrapping: rbac-defaults
rules:
- apiGroups:
  - ''
  resources:
  - pods
  verbs:
  - patch

Once roles have been defined, we need to create another resource: a RoleBinding for Roles, or a ClusterRoleBinding for ClusterRoles. This is the mechanism by which we associate, or 'bind', users to roles.

Example ClusterRoleBinding:

---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: cluster-admin
subjects:
- kind: User
  name: bob@beyondthekube.com
  apiGroup: rbac.authorization.k8s.io
- kind: User
  name: anna@beyondthekube.com
  apiGroup: rbac.authorization.k8s.io
- kind: Group
  name: those-other-admins.beyondthekube.com
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io

Example RoleBinding:

---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: beyond
  name: pod-patcher
subjects:
- kind: User
  name: bob@beyondthekube.com
  apiGroup: rbac.authorization.k8s.io
- kind: User
  name: anna@beyondthekube.com
  apiGroup: rbac.authorization.k8s.io
- kind: Group
  name: those-other-patchers.beyondthekube.com
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-patcher
  apiGroup: rbac.authorization.k8s.io

Role aggregations

Role aggregations are a way of combining many role definitions into a 'meta' role. They allow many users to specify permissions in various roles, and then union all of those permissions together in an aggregation. There are two parts to this: one is specifying the aggregation role with a label selector, and the other is adding the matching label(s) to the roles you want to include.

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: beyondthekube-admin
aggregationRule:
  clusterRoleSelectors:
  - matchLabels:
      beyondthekube.com/rbac-aggregations: "admin"
rules: []
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: namespace-admin
  labels:
    beyondthekube.com/rbac-aggregations: "admin"
rules:
- apiGroups:
  - ""
  resources:
  - "*"
  verbs:
  - "get"
  - "list"
  - "watch"
  - "update"
  - "create"
  - "delete"
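As a quick check of the aggregation above, you can read the meta role back from the API: the RBAC controller unions the rules of every ClusterRole matching the selector into the aggregated role. A sketch of what you would expect to see (the exact output will vary with your cluster version):

```yaml
# kubectl get clusterrole beyondthekube-admin -o yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: beyondthekube-admin
aggregationRule:
  clusterRoleSelectors:
  - matchLabels:
      beyondthekube.com/rbac-aggregations: "admin"
# The controller manages this section; any manual edits to it are overwritten.
rules:
- apiGroups:
  - ""
  resources:
  - "*"
  verbs:
  - "get"
  - "list"
  - "watch"
  - "update"
  - "create"
  - "delete"
```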

Conclusion

RBAC provides an extremely rich permission-definition framework that can itself be delegated to other administrators, and augmented by resources provided through Helm charts and the like.

There are issues with RBAC in terms of multi-cluster administration, and it does not integrate well with services like LDAP and Active Directory that provide groups. In a coming post, I will demonstrate ways to improve that, both with and without standard RBAC.

Identity management Part 3 :: Setting up OIDC authentication

In part 1 we installed an identity management service; Keycloak. Part 2 showed how to configure Keycloak against AD (or LDAP) with a quickstart option of simply adding a local user.

In this final part, we will configure the kube-apiserver to use our identity management (IDM) service via OIDC.

Setting up Kubernetes

The easiest way to configure the kube-apiserver for any auth is to alter the command-line arguments it is started with. The method used to do this differs depending on how you have installed Kubernetes, and how the service is being started.

It is likely that there is a systemd unit file responsible for starting the kube-apiserver, probably in /usr/lib/systemd/system or possibly under /etc/systemd/system. Note that if your Kubernetes environment is containerized, as is the case with Gravity, then you will first need to enter that environment before hunting for the unit file. In the case of Gravity, run sudo gravity enter.

In the case of minikube, the Kubernetes control plane runs under Kubernetes itself, defined as static pods. This self-referential startup is something we'll cover in more detail in a future post, but for now all we really need to understand is how to edit the startup flags passed to kube-apiserver.

Because we cannot make persistent changes to the configuration files minikube copies into the virtual machine, or add certificates, it is easiest to run the oidckube.sh start script. I noticed a small bug in that script, so you will need to make the following changes for this to work:

diff --git a/oidckube.sh b/oidckube.sh
index 19b487c..ba09d58 100755
--- a/oidckube.sh
+++ b/oidckube.sh
@@ -64,7 +64,7 @@ init_minikube() {

 inject_keycloak_certs() {
   tar -c -C "$PKI_DIR" keycloak-ca.pem | ssh -t -q -o StrictHostKeyChecking=no \
-    -i "$(minikube ssh-key)" "docker@$(minikube ip)" 'sudo tar -x --no-same-owner -C /var/lib/localkube/certs'
+    -i "$(minikube ssh-key)" "docker@$(minikube ip)" 'sudo tar -x --no-same-owner -C /var/lib/minikube/certs'

 }

@@ -85,7 +85,7 @@ start_minikube() {
     --extra-config=apiserver.oidc-username-prefix="oidc:" \
     --extra-config=apiserver.oidc-groups-claim=groups \
     --extra-config=apiserver.oidc-groups-prefix="oidc:" \
-    --extra-config=apiserver.oidc-ca-file=/var/lib/localkube/certs/keycloak-ca.pem
+    --extra-config=apiserver.oidc-ca-file=/var/lib/minikube/certs/keycloak-ca.pem
 }

 main() {

You can see in the inject_keycloak_certs function that the script adds the keycloak-ca.pem file. In a real production setup, this would probably be a real CA, or the root CA of the organization.

The oidc script also runs minikube with a bunch of arguments to ensure that the Keycloak server is used for authentication:

$ minikube start \
--extra-config=apiserver.oidc-client-id=kubernetes \
--extra-config=apiserver.oidc-username-claim=email \
--extra-config=apiserver.oidc-username-prefix=oidc: \
--extra-config=apiserver.oidc-groups-claim=groups \
--extra-config=apiserver.oidc-groups-prefix=oidc: \
--extra-config=apiserver.oidc-issuer-url=https://keycloak.devlocal/auth/realms/master \
--extra-config=apiserver.oidc-ca-file=/var/lib/minikube/certs/keycloak-ca.pem

The values passed to the --extra-config flags mimic the command-line flags that should be added to the kube-apiserver binary. In order to mimic this minikube setup in a larger cluster, you need to edit the systemd unit file, or the static pod definition, depending on how your cluster starts the apiserver.

--oidc-client-id=kubernetes
--oidc-username-claim=email
--oidc-username-prefix=oidc:
--oidc-groups-claim=groups
--oidc-groups-prefix=oidc:
--oidc-issuer-url=https://keycloak.devlocal/auth/realms/master
--oidc-ca-file=/path/to/your/rootca.pem # optional if the CA is public
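For clusters that run the apiserver as a static pod (kubeadm-style), those same flags go into the pod's command list. A sketch, assuming the conventional manifest path /etc/kubernetes/manifests/kube-apiserver.yaml; the kubelet restarts the pod automatically when the file changes:

```yaml
# /etc/kubernetes/manifests/kube-apiserver.yaml (fragment)
spec:
  containers:
  - name: kube-apiserver
    command:
    - kube-apiserver
    # ...existing flags...
    - --oidc-client-id=kubernetes
    - --oidc-username-claim=email
    - --oidc-username-prefix=oidc:
    - --oidc-groups-claim=groups
    - --oidc-groups-prefix=oidc:
    - --oidc-issuer-url=https://keycloak.devlocal/auth/realms/master
    - --oidc-ca-file=/path/to/your/rootca.pem
```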

Once you are satisfied that the new arguments are being used by your kube-apiserver process, it’s time to test out the new auth.

Testing it out

With everything configured on the server side, it’s time to make sure things are working on the client end. There are a few different assets that need to be in place in the configuration for kubectl.

The first thing to consider is how to manage updating the kubeconfig.

Some people want to manage one single kubectl configuration with different contexts driving the parameters needed for different clusters. Others like to use the environment variable KUBECONFIG to maintain many different files in different locations. In this post, we’re going to be using the more self-contained approach of KUBECONFIG, but I recommend using contexts in general and will cover that in a future post.

In addition to the new file, I would recommend using a tool like kubelogin (go get github.com/int128/kubelogin) to assist with the handoff between Keycloak and kubectl. That tool starts a local server and opens a browser against it. The user authenticates to Keycloak and, if successful, the kubelogin server receives the callback and updates the KUBECONFIG file with the JWT tokens.

You probably have a working configuration for the cluster already, either minikube or some other Kubernetes installation. Either way, you will probably already have a kubeconfig in .kube/config. In order to get a quick start, just copy .kube/config to .kube/oidcconfig and remove the client-certificate: and client-key: fields completely.

Once built, kubelogin is available in $GOPATH/bin (by default, ~/go/bin/kubelogin), but you can move it wherever you want.

Now that we have the kubelogin tool available, just set up kubectl to point to the new config and run kubelogin. It will guide you through the process from there:

$ cp ~/.kube/config ~/.kube/oidcconfig
$ kubectl config set-credentials minikube \
   --auth-provider oidc \
   --auth-provider-arg client-id=kubernetes \
   --auth-provider-arg idp-issuer-url=https://keycloak.devlocal/auth/realms/master                       
$ ~/go/bin/kubelogin

When you have logged into keycloak via the web browser, kubelogin will exit and update your kubectl configuration.

---
apiVersion: v1
clusters:
- cluster:
    certificate-authority: ca.crt
    server: https://192.168.99.102:8443
  name: minikube
contexts:
- context:
    cluster: minikube
    user: minikube
  name: minikube
current-context: minikube
kind: Config
preferences: {}
users:
- name: minikube
  user:
    auth-provider:
      config:
        client-id: kubernetes
        id-token: eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA6ICJCbGtiVXFVSkdRV09fWEFNeXhDUm84Q01iVy1GQ1FzeXVSLTBhUUZiaUJ3In0.eyJqdGkiOiI1NzY0NGNiOC1jOWM3LTQ0MTMtYThiZi0wYTRjM2EyZGIxNTQiLCJleHAiOjE1NDk4NDAxMjAsIm5iZiI6MCwiaWF0IjoxNTQ5ODQwMDYwLCJpc3MiOiJodHRwczovL2tleWNsb2FrLmRldmxvY2FsL2F1dGgvcmVhbG1zL21hc3RlciIsImF1ZCI6Imt1YmVybmV0ZXMiLCJzdWIiOiIyY2JkNmIzYi1hMWE4LTQxZGYtYTE3ZC03NTYzYzhiZDE1MGEiLCJ0eXAiOiJJRCIsImF6cCI6Imt1YmVybmV0ZXMiLCJhdXRoX3RpbWUiOjE1NDk4NDAwNTYsInNlc3Npb25fc3RhdGUiOiI2ZjJiM2UwZS1hNzY0LTQxODktYjliNi1kMzBhZTRhMWYzYTQiLCJhY3IiOiIxIiwibmFtZSI6InRlc3QgdXNlciIsInByZWZlcnJlZF91c2VybmFtZSI6InRlc3R1c2VyIiwiZ2l2ZW5fbmFtZSI6InRlc3QiLCJmYW1pbHlfbmFtZSI6InVzZXIiLCJlbWFpbCI6InRlc3R1c2VyQGJleW9uZHRoZWt1YmUuY29tIn0.Y3y0GEPsPHaibRNzp0AQ-pAV-b8K5m9rwGe512QKCHINrMu5jrfe1XCnl5qry6coUPSG_nNwDOB8WxFNqW-lTp_rbZ_auz2xy93L2bs1Pb_3QGwHFRXfAP_AZJagCSp8JC3mpopHRvsnuxi4yR4hqNln85a62jBshK-9QEpgR9mRUcSs2PdOicrPvqP0hMMHzOTYsEcsk-YaGxhPqDQJTHzuCa_8fgx6OG2vycG392Vrr1p5RhUg3lUmTv7nYOHnqkhZQLWCDkcndWt8sBiGVOkyJeDsuy0d8QNLP-9Z-BGip-RiZdmpaA4E2LmKO6CK54eo1i2zRSgN1Odm0316cg
        idp-issuer-url: https://keycloak.devlocal/auth/realms/master
        refresh-token: eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA6ICJCbGtiVXFVSkdRV09fWEFNeXhDUm84Q01iVy1GQ1FzeXVSLTBhUUZiaUJ3In0.eyJqdGkiOiJmMzE5YWFmZC03ZWY5LTQ0OWQtYTljNS1jYjc5YjRhMmRlMzIiLCJleHAiOjE1NDk4NDE4NjAsIm5iZiI6MCwiaWF0IjoxNTQ5ODQwMDYwLCJpc3MiOiJodHRwczovL2tleWNsb2FrLmRldmxvY2FsL2F1dGgvcmVhbG1zL21hc3RlciIsImF1ZCI6Imt1YmVybmV0ZXMiLCJzdWIiOiIyY2JkNmIzYi1hMWE4LTQxZGYtYTE3ZC03NTYzYzhiZDE1MGEiLCJ0eXAiOiJSZWZyZXNoIiwiYXpwIjoia3ViZXJuZXRlcyIsImF1dGhfdGltZSI6MCwic2Vzc2lvbl9zdGF0ZSI6IjZmMmIzZTBlLWE3NjQtNDE4OS1iOWI2LWQzMGFlNGExZjNhNCIsInJlYWxtX2FjY2VzcyI6eyJyb2xlcyI6WyJ1bWFfYXV0aG9yaXphdGlvbiJdfSwicmVzb3VyY2VfYWNjZXNzIjp7ImFjY291bnQiOnsicm9sZXMiOlsibWFuYWdlLWFjY291bnQiLCJtYW5hZ2UtYWNjb3VudC1saW5rcyIsInZpZXctcHJvZmlsZSJdfX19.Fb8_P_UVGDsR9hPYm6pbHbU3AWavS9DLjOxDUdTsEPn2gcLZi0e42GbJCFZMz3sLliTBizxFXGAbaR4c6jS3wO5mZgIIa-ek9OgX1Qo7jsI3w0NegFXdFJHG2HgXr_T8gYcHFkG3FP7j7qi8z52GKfA1T1M_Ki97ovUfLJGu0CxfnCFNpXz3xfIj8tmuV_QZc9_s9jVgXyaQfyq0QYyNbMCtgFG1AZkv70ycUoJ6EtB3R7HOGUPd5MQjtih7GIal8E3U9PS5yp_DWSBig10T-wYKiViEiILxoXO90CF2n-4v8Q3P6YZ6HL-CtjHK4XkHi2jHEJGFqxeeXXBTgVUAUA
      name: oidc

If you take the content of the id-token: field and paste it into https://jwt.io you can decode the token into human-readable form:

{
  "jti": "57644cb8-c9c7-4413-a8bf-0a4c3a2db154",
  "exp": 1549840120,
  "nbf": 0,
  "iat": 1549840060,
  "iss": "https://keycloak.devlocal/auth/realms/master",
  "aud": "kubernetes",
  "sub": "2cbd6b3b-a1a8-41df-a17d-7563c8bd150a",
  "typ": "ID",
  "azp": "kubernetes",
  "auth_time": 1549840056,
  "session_state": "6f2b3e0e-a764-4189-b9b6-d30ae4a1f3a4",
  "acr": "1",
  "name": "test user",
  "preferred_username": "testuser",
  "given_name": "test",
  "family_name": "user",
  "email": "testuser@beyondthekube.com"
}
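If you'd rather not paste a live token into a website, the payload can be decoded locally: a JWT is three base64url segments joined by dots, and the claims are the middle one. A small sketch (the helper name is mine; assumes a POSIX shell with base64(1) available):

```shell
# Print the claims (second dot-separated segment) of a JWT.
decode_jwt_payload() {
  payload=$(printf '%s' "$1" | cut -d. -f2 | tr '_-' '/+')
  # base64url drops '=' padding; restore it to a multiple of 4 chars
  while [ $(( ${#payload} % 4 )) -ne 0 ]; do payload="${payload}="; done
  printf '%s' "$payload" | base64 -d
}

# Toy unsigned token whose payload is {"email":"testuser@beyondthekube.com"}
token='eyJhbGciOiJub25lIn0.eyJlbWFpbCI6InRlc3R1c2VyQGJleW9uZHRoZWt1YmUuY29tIn0.sig'
decode_jwt_payload "$token"
```

Run it against the id-token value from your kubeconfig to see the same claims shown above.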

Finally, now that we have inspected our id-token, we can offer it up to the kube-apiserver to see if it knows who we are:

$  kubectl get pods
Error from server (Forbidden): pods is forbidden: User "oidc:testuser@beyondthekube.com" cannot list resource "pods" in API group "" in the namespace "default"
$

Although the outcome here is unexciting – we’re not actually allowed to interact with the cluster – this does show the user prefixing (oidc:) working, in addition to the kube-apiserver recognizing the username.
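Tying this back to the RBAC section: to let this user actually do something, bind the prefixed username to a role. A sketch granting the test user read-only access via the built-in view ClusterRole; note the subject must include the oidc: prefix exactly as the apiserver reports it:

```yaml
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: oidc-testuser-view
subjects:
- kind: User
  name: oidc:testuser@beyondthekube.com
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: view
  apiGroup: rbac.authorization.k8s.io
```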

Stay tuned for more on how to authorize users to do stuff with the cluster – I’m planning to cover a few different options for that.

Identity management Part 2 :: Configuring Keycloak

Back in part 1, we installed Keycloak on top of Kubernetes. Now we want to configure it to generate OIDC tokens based on our (hopefully) existing authentication backend.

In this example, I’m going to use Active Directory, but the setup is similar for plain LDAP, and Keycloak also supports most cloud identity providers, plain SAML, and so on.

No user identity system?

If you are following along at home, and do not have an existing identity provider like Active Directory, then simply skip the ‘Configuring AD’ section and add a user directly in Keycloak. You can do this by clicking Manage->Users, then ‘Add user’. Make sure you turn ‘Email Verified’ on.

Once the user is created, you can impersonate them and set a password in the Keycloak admin console. See the keycloak documentation for more details.

Now skip ahead to ‘Configuring OIDC’.

Configuring AD

Configuring Keycloak to federate AD users requires logging into the admin console and clicking on ‘User Federation’.

User federation options

Select LDAP as your provider.

You will need to have your AD settings to hand, and these will differ depending on your setup. Most of these parameters are standard to any AD integration, so you should be able to reference them or ask an AD administrator for them.

When you have entered all of the required values and tested the connection, you can save this setup.

Configuring OIDC

Add an OIDC client configuration

Start by clicking ‘Create’ to add a new client

Client list

Give your client a name (it can be anything you want), then click ‘Save’.

Force enable verified email

Kubernetes will refuse to work with any identity for which the email is not verified, so we now need to create a mapping to force that. We can do this because we presumably trust that the emails contained within our Active Directory are real.

From the clients list, click the name of your newly-created client. Now click the ‘Mappers’ tab, and finally ‘Create’

Mappers of the client

Enter the values as below, and click ‘Save’

That’s it! In Part 3 we will tie this all back to the kube-apiserver and test authentication.

Identity management Part 1 :: Installing Keycloak

One of the first challenges associated with owning the control plane of Kubernetes is that you are responsible for authz and authn.

While there are some great resources for getting set up with some form of identity management, ultimately the correct solution will depend on your existing on-premise setup. If you already have something like Active Directory or LDAP configured, then it probably makes sense to integrate with those.

Thankfully, Kubernetes provides a rich set of possibilities for authentication, from standards like OIDC right through to simply sharing a token to figure out who the current user is.

Objective

In this post, I’ll demonstrate how to set up OIDC authentication with the kube-apiserver, which will allow user identity to be established. If you’re lucky, your enterprise will already have an OIDC provider such as Okta or the Google IAM services. Even if you’re starting from scratch, it’s relatively simple to set up a bridge to LDAP or AD.

In order to follow along, you’ll need a Kubernetes cluster on which you are able to run kubectl.

The procedure for getting Keycloak running on top of Kubernetes is relatively well documented elsewhere, so I’ll assume you’re able to get it up and running. If you’re using minikube for this, then I would recommend this project.

Setting up access to the service

Once you have a Keycloak pod running, you’ll need a way to reach the Keycloak endpoint. Given that we have no solution for ingress yet, our best option is to expose the service on a NodePort.

This is achieved by adding some definitions to the existing service definition. The default keycloak service definition will be something like this:

---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: keycloak
    component: keycloak
  name: keycloak
  namespace: default
spec:
  clusterIP: None
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: http
  selector:
    app: keycloak
    component: keycloak
  type: ClusterIP

NodePorts cannot be added to services of type ClusterIP, but we can take advantage of the Kubernetes service abstraction and simply create a new service for our NodePort. We will start by grabbing the existing one:

$ kubectl get svc keycloak -o yaml > keycloak-nodeport.yaml

Now we can simply edit it to look like this (notice how similar these services look)

---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: keycloak
    component: keycloak
  name: keycloak-nodeport
  namespace: default
spec:
  ports:
  - name: http
    port: 80
    nodePort: 32080
    protocol: TCP
    targetPort: http
  selector:
    app: keycloak
    component: keycloak
  sessionAffinity: None
  type: NodePort

In fact, the diff is tiny:

7c7
<   name: keycloak
---
>   name: keycloak-nodeport
10d9
<   clusterIP: None
13a13
>     nodePort: 32080
20c20
<   type: ClusterIP
---
>   type: NodePort

Accessing the Keycloak service

If you used minikube and the oidckube script, then you should have Keycloak available at https://keycloak.devlocal. You should see this page, and be able to click through to the Administration Console.

Keycloak landing page
Keycloak admin console login page

The username is ‘keycloak’ and the password is ‘keycloak’. The password is base64-encoded and available in a secret resource:

$ kubectl get secret keycloak-admin-user -o yaml
apiVersion: v1
data:
  password: THIS_IS_THE_BASE64_ENCODED_PASSWORD
kind: Secret
metadata:
  labels:
    app: keycloak
    component: keycloak
  name: keycloak-admin-user
  namespace: default
type: Opaque
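Kubernetes stores Secret data base64-encoded rather than hashed, so the admin password can be recovered directly. A quick sketch of the round trip with the default value; for the live secret, pipe kubectl get secret keycloak-admin-user -o jsonpath='{.data.password}' into base64 -d:

```shell
# Secret data values are plain base64; encode and decode round-trip locally.
printf 'keycloak' | base64
# a2V5Y2xvYWs=
printf 'a2V5Y2xvYWs=' | base64 -d
# keycloak
```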

So we now have Keycloak installed and running. Stick around for Part 2.