Identity management Part 1 :: Installing Keycloak

One of the first challenges of owning the Kubernetes control plane is that you are responsible for both authentication (authn) and authorization (authz).

While there are some great resources for getting set up with some form of identity management, ultimately the correct solution will depend on your existing on-premises setup. If you already have something like Active Directory or LDAP configured, then it probably makes sense to integrate with that.

Thankfully, Kubernetes provides a rich set of authentication options, from standards like OIDC right through to simply presenting a static token to identify the current user.

Objective

In this post, I’ll demonstrate how to set up OIDC authentication with the kube-apiserver, which will allow user identity to be established. If you’re lucky, your enterprise will already have an OIDC provider such as Okta or Google IAM. Even if you’re starting from scratch, it’s relatively simple to set up a bridge to LDAP or AD.
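For orientation, wiring OIDC into the kube-apiserver ultimately comes down to a handful of flags. The values below are placeholders (the issuer URL and client ID will come from the Keycloak realm we configure later in the series):

```shell
# Sketch of the relevant kube-apiserver OIDC flags. All values here are
# illustrative placeholders, not the final configuration:
kube-apiserver \
  --oidc-issuer-url=https://keycloak.devlocal/auth/realms/my-realm \
  --oidc-client-id=kubernetes \
  --oidc-username-claim=name \
  --oidc-groups-claim=groups
```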

In order to follow along, you’ll need a Kubernetes cluster against which you are able to run kubectl.
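Before going further, it’s worth confirming that kubectl can actually reach your cluster:

```shell
# Quick sanity check that kubectl is pointed at a working cluster:
kubectl cluster-info   # prints the API server address
kubectl get nodes      # lists the cluster nodes; they should be Ready
```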

The procedure for getting Keycloak up and running on top of Kubernetes is relatively well documented elsewhere, so I’ll assume you’re able to reach that point. If you’re using minikube for this, then I would recommend the oidckube project.

Setting up access to the service

Once you have a Keycloak pod running, you’ll want to reach the Keycloak web interface. Given that we have no solution for ingress yet, our best option is to expose the service on a NodePort.
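As an aside, if you only need quick one-off access from your own machine, kubectl port-forward against the service is often enough; a NodePort gives us a persistent endpoint, though, so that’s what we’ll set up here.

```shell
# One-off local access without touching any service definitions
# (forwards localhost:8080 to port 80 of the keycloak service):
kubectl port-forward svc/keycloak 8080:80
```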

At first glance we might simply add a nodePort to the existing service definition, which by default will be something like this:

---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: keycloak
    component: keycloak
  name: keycloak
  namespace: default
spec:
  clusterIP: None
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: http
  selector:
    app: keycloak
    component: keycloak
  sessionAffinity: None
  type: ClusterIP

A nodePort cannot be added to a service of type ClusterIP, but we can take advantage of the Kubernetes service abstraction and simply create a second service for our NodePort. We will start by grabbing the existing one:

$ kubectl get svc keycloak -o yaml > keycloak-nodeport.yaml

Now we can simply edit it to look like this (notice how similar the two services are):

---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: keycloak
    component: keycloak
  name: keycloak-nodeport
  namespace: default
spec:
  ports:
  - name: http
    port: 80
    nodePort: 32080
    protocol: TCP
    targetPort: http
  selector:
    app: keycloak
    component: keycloak
  sessionAffinity: None
  type: NodePort
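With the manifest saved, creating the new service and reaching Keycloak is straightforward. The address below assumes minikube and plain HTTP on the service port; substitute your own node IP otherwise:

```shell
# Create the NodePort service alongside the original one:
kubectl apply -f keycloak-nodeport.yaml

# Keycloak is now reachable on port 32080 of any node, e.g. with minikube:
curl http://$(minikube ip):32080/
```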

In fact, the diff is tiny:

7c7
<   name: keycloak
---
>   name: keycloak-nodeport
10d9
<   clusterIP: None
13a13
>     nodePort: 32080
20c20
<   type: ClusterIP
---
>   type: NodePort

Accessing the Keycloak service

If you used minikube and the oidckube script, then you should have Keycloak available at https://keycloak.devlocal. You should see the landing page below, and be able to click through to the Administration Console.

Keycloak landing page
Keycloak admin console login page

The username is ‘keycloak’ and the password is ‘keycloak’. The password is stored, base64-encoded, in a Secret resource:

$ kubectl get secret keycloak-admin-user -o yaml
apiVersion: v1
data:
  password: BASE64_ENCODED_PASSWORD
kind: Secret
metadata:
  labels:
    app: keycloak
    component: keycloak
  name: keycloak-admin-user
  namespace: default
type: Opaque
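Note that data in a Secret is only base64-encoded, not encrypted or hashed, so any base64 tool will round-trip it. For example, with the default password:

```shell
# Secret values are merely base64-encoded; round-trip the default password:
echo -n 'keycloak' | base64
# a2V5Y2xvYWs=
echo -n 'a2V5Y2xvYWs=' | base64 --decode
# keycloak
```

You can pull the value straight from the cluster the same way, e.g. `kubectl get secret keycloak-admin-user -o jsonpath='{.data.password}' | base64 --decode`.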

So we now have Keycloak installed and running. Stick around for Part 2.