Evaluation :: Rancher

Rancher is a little different from the Gravity and Kubespray projects we have looked at previously. Rancher is a self-contained system that aims to provide an easy way to deploy entire Kubernetes clusters, in addition to a rich and growing set of click-to-install applications.

Start the rancher service

Rancher is most conveniently distributed as a Docker container. Running the service is as simple as:

sudo docker run --name rancher-server -d --restart=unless-stopped -p 8080:8080 rancher/server:stable

After that, you can navigate to your host on port 8080 (or whatever host port you map to) and you’ll be greeted by the Rancher web UI.

Adding nodes

Before Rancher can deploy anything, you need to run a join command on each of the nodes that you want to add to the rancher ecosystem.

  • Navigate to ‘INFRASTRUCTURE->Hosts’
  • Click the ‘Add Host’ button
  • If required, enter the real HTTP endpoint into the web form when prompted.

Run the join command on the nodes

sudo docker run -e CATTLE_AGENT_IP="playground2"  -e CATTLE_HOST_LABELS='environment=playground'  --rm --privileged -v /var/run/docker.sock:/var/run/docker.sock -v /var/lib/rancher:/var/lib/rancher rancher/agent:v1.2.11 http://playground1:8080/v1/scripts/28912A58DD5EEAACB1DF:1546214400000:oX3k2gnmM7kzcfmZdlgB8cwrapU

Once you have joined your nodes to the running Rancher system, they will show up in the Hosts page.
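
If a node does not appear, a quick way to dig deeper is to check the agent container on that node directly. This is a minimal sketch; the exact container name varies between Rancher versions, so substitute the container ID reported by docker ps:

sudo docker ps | grep rancher
# then, using the agent container ID from the output above:
sudo docker logs <agent-container-id>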

Installing stacks

Now that we have nodes added to the cluster, we can pick a stack to deploy (or create our own). Rancher has a pretty rich catalog of pre-built stacks.

As part of the evaluation I selected the Vault stack. It failed to install on my lab setup, but it could well be something particular to my setup that caused the failure.


While stacks are being created, Rancher provides very helpful immediate feedback in the web UI, but what is going on at a deeper level is pretty difficult to get to.

Kubernetes!

Rancher comes with a Kubernetes implementation in the stack catalog. As of this writing it’s at version 1.12.7, so not bleeding-edge, but not ancient either.

I worked through quite a few failures after deploying the Kubernetes stack. Most of them I resolved by adding a fourth node, making three cluster nodes and allowing etcd to reach quorum, followed by some additional manual restarting of services before the stack came up.

The resulting stack will look a little unfamiliar to anyone used to a base Kubernetes installation: the nodes and system services are Rancher-managed and hidden away from the Kubernetes API.

This seems like an unfortunate decision given the push toward static pods and running ‘k8s-on-k8s’; the community is moving toward managing the control plane with the same scheduling and resource management tools as the application plane.

Even with the stack fully running, Rancher provides tantalizing links that do not seem to work. The Kubernetes dashboard link was broken for me throughout the evaluation. I noted that there was no ‘Kubernetes Dashboard’ service in the stack, but I did not try deploying the dashboard via Helm. The link was there, and should have worked out of the box.
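
For reference, a hedged sketch of what deploying the dashboard chart might have looked like, assuming Helm v2 with Tiller already running in the stack’s Kubernetes cluster and the stable repo configured:

helm install stable/kubernetes-dashboard \
  --name kubernetes-dashboard \
  --namespace kube-system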

Conclusion

Rancher is an ambitious tool that aims to remove the complexity of deploying common stacks, and further ‘containerize’ full systems – think docker for docker-compose or Kubernetes deployments. In many ways, Rancher is attempting to achieve the same automation as Gravity. In so many other ways, it is building a completely new and unique ecosystem of turnkey deployments.

That automation and turnkey approach cuts both ways: for much of the evaluation I found myself at a loss when trying to figure out why stacks were not working, and there is very little to lead admins under the hood so they can work through issues themselves.

Adding Kubernetes to Rancher improves the situation somewhat, but I could not help but think, “What is Rancher actually offering me here?” I did not get as far as working out whether it is possible to switch out CNI providers, perhaps add Multus, or simply add something like local-storage-provider or StorageOS.

My Lab Setup

In order to perform some of the evaluations I write about on this blog, I have created a fairly automated setup around VirtualBox. This allows me to work autonomously on my laptop with everything running locally, which is handy for planes and trains, where accessing the cloud reliably can be challenging.

Although this setup is not tied to Kubernetes in any way, it does offer an automated VirtualBox setup for creating linked clones and setting hostnames easily. My particular setup is based around RHEL/CentOS, but it could easily be adapted to any other flavor of Linux.

Overview of the process

There are two main parts to this; the first is a script to drive VirtualBox to create snapshots and linked clones from some base image, and to set properties that can be referenced from the guest OS.

The second part is a script run at system start that simply compares the current guest hostname to the VirtualBox property. If they do not match, then the hostname is updated and the system rebooted.

Creating the base VM

This is as simple as creating a normal guest VM and installing the OS on it. That process is well documented elsewhere, so I’m going to assume it is done already. I named my base VM ‘playground-base’.

In case it’s not obvious, anything you set up in ‘playground-base’ will be available in all of the resulting linked-clone VMs. I help myself out by adding my ssh key to the authorized_keys file of the ‘beyond’ user. I also create a key for that user and add it too. That allows me to ssh from my user on the host machine to beyond@playground*, and to ssh between the playground VMs as the beyond user.

I make sure that beyond can sudo NOPASSWD: as root. For the purpose of evaluating system-level installations like Kubernetes, that makes things much easier.
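
For completeness, this is roughly what that base-VM preparation looks like, run as the beyond user on playground-base (a sketch; adjust key types and paths to taste):

# Key for VM-to-VM ssh as the beyond user
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
# Also paste the host machine's public key into ~/.ssh/authorized_keys
chmod 700 ~/.ssh && chmod 600 ~/.ssh/authorized_keys

# Passwordless sudo for beyond
echo 'beyond ALL=(ALL) NOPASSWD: ALL' | sudo tee /etc/sudoers.d/beyond
sudo chmod 0440 /etc/sudoers.d/beyond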

Lastly, this whole procedure requires the Virtualbox Guest tools to be installed, so it’s a good idea to make sure that is done on the base image as well.
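
On CentOS that typically looks something like the following (a sketch; it assumes the Guest Additions CD has been inserted via the VirtualBox Devices menu):

sudo yum install -y gcc make kernel-devel kernel-headers bzip2
sudo mount /dev/cdrom /mnt
sudo /mnt/VBoxLinuxAdditions.run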

Add framework for the hostname update

For CentOS 7 we are going to add a systemd unit that runs our script. This procedure should work for other flavors of Linux, but the files you need to update may differ.

All of the scripts used here are available on github.

Create a file with the script somewhere you’ll remember. I simply added mine under /root

[beyond@playground-base ~]$ sudo cat /root/set-hostname
#!/bin/bash

required_hostname=$(VBoxControl guestproperty get /Startup/Hostname 2>&1 | grep "^Value" | sed 's/^Value: //')

current_hostname=$(hostname)

if [[ "X$required_hostname" != "X$current_hostname" ]]; then
    if [[ "X$required_hostname" == "X" ]]; then
        echo "No hostname specified in /Startup/Hostname property. Please set it"
        sleep 600
    else
      echo "Hostname is incorrectly set. Setting to $required_hostname"
      echo "$required_hostname" > /etc/hostname
      sync
    fi
    reboot
fi

[beyond@playground-base ~]$ 

Now set up the unit file and enable the service:

[beyond@playground-base ~]$ cat >/etc/systemd/system/set-hostname.service <<EOF 
[Unit]
Description=Set hostname from VirtualBox property
After=network.target

[Service]
Type=simple
ExecStart=/root/set-hostname
TimeoutStartSec=0

[Install]
WantedBy=default.target
EOF
[root@playground-base beyond]# systemctl daemon-reload
[root@playground-base beyond]# systemctl list-unit-files | grep set-hostname
set-hostname.service                          disabled
[root@playground-base beyond]# systemctl enable set-hostname
Created symlink from /etc/systemd/system/default.target.wants/set-hostname.service to /etc/systemd/system/set-hostname.service.
[root@playground-base beyond]# 

Before you reboot the machine, it is important to set the hostname property on the playground-base VM, otherwise it will sit sleeping and rebooting until that is done.

$ vboxmanage guestproperty set playground-base /Startup/Hostname playground-base

Now you can reboot the playground-base host and it should come back as normal.

Orchestrate the lab creation

The following simple bash script is what I use to create and destroy the ‘lab’ VMs. It starts by snapshotting the base and creating linked clones. It auto-starts the clones, which then change their hostnames and reboot.

#!/bin/bash
#
# Automatically create a test VM setup from a base image
# Creates linked clones to save space and anticipates
# that the hosts will set their hostname from the vbox
# property /Startup/Hostname
#
# See https://beyondthekube.com/my-lab-setup/ for details
#
# Known issues:
#    - the sort -r for the delete of snapshots is a little
#      lame. Works for <10 VMs, which suits my use-case
#

ACTION=$1
# Feel free to run as BASE_VM=my-better-base ./playground ..
BASE_VM="${BASE_VM:-playground-base}"

function usage() {
  echo "Usage:"
  echo "$(basename $0) (create|list|delete) [..options..]"
  echo "              create <number> [prefix]"
  echo "              list [prefix]"
  echo "              delete [prefix]"
  exit 1
}

function snap_exists() {
  # Print the matching snapshot line (if any) and return grep's status,
  # so callers can test existence and/or extract the snapshot UUID
  local vm=$1
  local snap=$2
  vboxmanage snapshot ${vm} list 2>&1 | grep "Name: ${snap} ("
  return $?
}

function vm_running() {
  # Return 0/1 if a running vm matching $vm exists.
  # Exact match only unless $prefix is set true
  local vm=$1
  local prefix=$2
  if [[ -z "${prefix}" ]]; then
    vboxmanage list runningvms 2>&1 | grep "^\"${vm}\"" >/dev/null 2>&1
  else
    vboxmanage list runningvms 2>&1 | grep "^\"${vm}" >/dev/null 2>&1
  fi
  return $?
}

function list_vms() {
  # list all VMS with prefix $prefix
  local user_prefix=$1
  local prefix="${user_prefix:-playground}"
  # always exclude the $BASE_VM from lists
  vboxmanage list vms | grep "^\"${prefix}" | grep -v "^\"${BASE_VM}\"" | awk '{print $1}' |  sed 's/"//g' 
}

function create_vms() {
  # Create $num VMs from the $BASE_VM as linked clones
  local num=$1
  local user_prefix=$2
  local prefix="${user_prefix:-playground}"
  local clone
  local snap
  if vm_running $BASE_VM; then
    echo "Cloning the base vm ${BASE_VM} requires it to be stopped. Please do that first"
    return 1
  fi
  for i in $(seq 1 $num); do
    clone="${prefix}${i}"
    snap="${clone}-base"
    if snap_exists ${BASE_VM} ${snap}; then
      echo "Reusing existnig snapshot ${BASE_VM}::${snap}"
    else
      vboxmanage snapshot ${BASE_VM} take ${snap} --description "base snapshot for clone ${clone}"
    fi
    vboxmanage clonevm ${BASE_VM} --name ${clone} --snapshot ${snap} --options link --register
    vboxmanage guestproperty set ${clone} /Startup/Hostname "$clone"
    vboxmanage startvm ${clone}
  done
}

function destroy_vms() {
  # Delete VMs matching $prefix and associated snapshots
  local prefix=$1
  local snap
  local snap_uuid
  local gone
  for vm in $(list_vms $prefix | sort -r); do
    vboxmanage controlvm ${vm} poweroff
    gone=1
    while [[ $gone != 0 ]]; do  
        vboxmanage unregistervm ${vm} --delete >/dev/null 2>&1
        gone=$?
    done
    snap="${vm}-base"
    snap_uuid=$(snap_exists ${BASE_VM} ${snap} | sed 's/^.*UUID: \(.*\)).*/\1/')
    while [[ ! -z ${snap_uuid} ]]; do
      vboxmanage snapshot ${BASE_VM} delete ${snap_uuid}
      sleep 1
      snap_uuid=$(snap_exists ${BASE_VM} ${snap} | sed 's/^.*UUID: \(.*\)).*/\1/')
    done
  done
}

# Poor-man's argparsing
case "${ACTION}" in
  "create")
    shift
    num=$1; shift
    prefix=$1; shift
    if [[ -z ${num} ]]; then
        usage
    fi
    create_vms ${num} ${prefix}
    ;;
  "list")
    shift
    prefix=$1; shift
    list_vms ${prefix}
    ;;
  "delete")
    shift
    prefix=$1; shift
    destroy_vms ${prefix}
    ;;
  *)
    usage
    ;;
esac

An example session

The following shows a simple setup and teardown of a ‘lab.’ Note that in this example, DNS is provided by my pfsense gateway, which learns the hostnames from their boot-time DHCP requests. An alternative setup would be to use sudo in the script to manage the hosts file, or to write a function that looks up the hostname-to-IP mapping using vboxmanage and implements something like playground connect <hostname> (a sketch of such a helper appears after the example session below).

 ~  $  playground create 3
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Snapshot taken. UUID: 7a641e8a-4458-4bb9-915d-02ec9ec4c20b
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Machine has been successfully cloned as "playground1"
Waiting for VM "playground1" to power on...
VM "playground1" has been successfully started.
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Snapshot taken. UUID: 491d5021-49c2-4fe3-a0c9-c0926b8a2cc6
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Machine has been successfully cloned as "playground2"
Waiting for VM "playground2" to power on...
VM "playground2" has been successfully started.
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Snapshot taken. UUID: 7592de93-1193-4426-a425-aa592736ad60
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Machine has been successfully cloned as "playground3"
Waiting for VM "playground3" to power on...
VM "playground3" has been successfully started.
 ~  $  

I wait for those hosts to boot. I tend to run the VMs normally, as opposed to headless, so I can see this happen. This also allows me to recover the output of kernel panics, or otherwise debug more easily.

Now the lab is up and running and I can interact with any of the hosts as normal:

 ~  $  for i in 1 2 3; do ssh -o StrictHostKeyChecking=no beyond@playground$i uptime; done
Warning: Permanently added 'playground1,192.168.1.166' (ECDSA) to the list of known hosts.
 16:13:20 up 2 min,  0 users,  load average: 0.10, 0.14, 0.06
Warning: Permanently added 'playground2,192.168.1.167' (ECDSA) to the list of known hosts.
 16:13:20 up 2 min,  0 users,  load average: 0.01, 0.01, 0.01
Warning: Permanently added 'playground3,192.168.1.168' (ECDSA) to the list of known hosts.
 16:13:20 up 2 min,  0 users,  load average: 0.10, 0.14, 0.06
 ~  $  
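
As mentioned above, a DNS-free alternative is a small connect helper that asks VirtualBox for the guest IP (published by the Guest Additions) and then sshes to it. A sketch, assuming the first NIC is the one you want:

function playground_connect() {
  # Look up the guest IP that the Guest Additions publish as a property
  local vm=$1
  local ip
  ip=$(vboxmanage guestproperty get "${vm}" \
         "/VirtualBox/GuestInfo/Net/0/V4/IP" | sed 's/^Value: //')
  ssh "beyond@${ip}"
}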

Tearing it down

When done, I can easily tear down the whole experiment:

 ~  $  playground delete
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Deleting snapshot 'playground3-base' (7592de93-1193-4426-a425-aa592736ad60)
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Deleting snapshot 'playground2-base' (491d5021-49c2-4fe3-a0c9-c0926b8a2cc6)
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Deleting snapshot 'playground1-base' (7a641e8a-4458-4bb9-915d-02ec9ec4c20b)
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
 ~  $  

Additional features

The playground script supports an optional prefix argument, so it’s possible to run a different set of VMs with other prefixes. This can be useful if you are running a number of different labs on the same host.

You can also list the hosts associated with the lab by issuing playground list <optional prefix>

Conclusion

For the short time it took to figure this out, I believe this is a pretty useful bit of automation. Doing these steps manually via the GUI takes about 5-10 minutes and involves a lot of easily typo’d steps.

Using these scripts it’s possible to build a fully working lab in < 1m, and tear it down even faster. I look forward to seeing if this actually changes my workflow in terms of evaluating different setups concurrently.

Granting permissions – RBAC

One of the enormous advantages of Kubernetes is the delegation of permission to perform tasks that would ordinarily require administrative-level access, like root on Linux.

Now that we know who our users are, we are able to set up access controls.

Like everything in Kubernetes, authorization is pluggable and supports many different possible implementations. In a future post, we will take a look at a non-native implementation of authorization, but for now, let’s look at the Kubernetes native one.

RBAC

Most of the content of this post is a reiteration of the excellent RBAC documentation available on the Kubernetes web site.

Let’s start with what RBAC is: Role Based Access Control. What that means is simply that users (and service accounts) are bound to roles, and roles specify permissions to resources.

Types of role

There are two kinds of role with different scopes: ClusterRole and Role.

A ClusterRole can specify permissions to both namespace-scoped and cluster-scoped resources, such as Pods and Namespaces. A Role can only specify permissions to namespace-scoped resources, such as Pods and Services.

Here is an example ClusterRole:

---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  # "namespace" omitted since ClusterRoles are not namespaced
  name: cluster-admin
  labels:
    beyondthekube.com/bootstrapping: rbac-defaults
rules:
- apiGroups:
  - '*'
  resources:
  - '*'
  verbs:
  - '*'
- nonResourceURLs:
  - '*'
  verbs:
  - '*'

And an example Role:

---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: beyond
  name: pod-patcher
  labels:
    beyondthekube.com/bootstrapping: rbac-defaults
rules:
- apiGroups:
  - ''
  resources:
  - pods
  verbs:
  - patch

Once roles have been defined, we need to create another resource: a RoleBinding for Roles, or a ClusterRoleBinding for ClusterRoles. This is the mechanism by which we bind users (and groups, and service accounts) to the roles.

Example ClusterRoleBinding:

---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: cluster-admin
subjects:
- kind: User
  name: bob@beyondthekube.com
  apiGroup: rbac.authorization.k8s.io
- kind: User
  name: anna@beyondthekube.com
  apiGroup: rbac.authorization.k8s.io
- kind: Group
  name: those-other-admins.beyondthekube.com
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io

Example RoleBinding:

---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: beyond
  name: pod-patcher
subjects:
- kind: User
  name: bob@beyondthekube.com
  apiGroup: rbac.authorization.k8s.io
- kind: User
  name: anna@beyondthekube.com
  apiGroup: rbac.authorization.k8s.io
- kind: Group
  name: those-other-patchers.beyondthekube.com
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-patcher
  apiGroup: rbac.authorization.k8s.io
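
With the bindings applied, kubectl auth can-i plus impersonation is a quick way to check that they behave as intended (impersonation itself requires sufficient privileges, e.g. cluster-admin):

kubectl auth can-i patch pods -n beyond --as bob@beyondthekube.com    # yes
kubectl auth can-i delete pods -n beyond --as bob@beyondthekube.com   # no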

Role aggregations

Role aggregations are a way of combining many role definitions into a ‘meta’ role. They allow permissions to be specified in various roles and then unioned together in an aggregate role. There are two parts to this: one is specifying the aggregation role with a selector, and the other is adding the matching label(s) to the roles you want to include.

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: beyondthekube-admin
aggregationRule:
  clusterRoleSelectors:
  - matchLabels:
      beyondthekube.com/rbac-aggregations: "admin"
rules: []
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: namespace-admin
  labels:
    beyondthekube.com/rbac-aggregations: "admin"
rules:
- apiGroups:
  - ""
  resources:
  - "*"
  verbs:
  - "get"
  - "list"
  - "watch"
  - "update"
  - "create"
  - "delete"

Conclusion

RBAC provides an extremely rich permission-definition framework that can itself be delegated to other administrators and augmented with resources provided by Helm charts and the like.

There are issues with RBAC in terms of multi-cluster administration, and it does not integrate well with services like LDAP and Active Directory that provide groups. In a coming post, I will demonstrate ways to improve that, both with and without standard RBAC.

Evaluation :: kubespray

Introduction

Kubespray is different from the recently evaluated Gravity deployment. It uses Ansible to drive deployment on the base system, rather than attempting to package all of the Kubernetes components into a container.

If it is not obvious, Kubespray makes extensive use of connectivity to the internet in order to download and configure a lot of the required assets and command-line tools. This is not always workable in an on-premises environment.

Installation

In order to get started, clone the official kubespray repo:

$ git clone https://github.com/kubernetes-sigs/kubespray.git
Cloning into 'kubespray'...
remote: Enumerating objects: 5, done.
remote: Counting objects: 100% (5/5), done.
remote: Compressing objects: 100% (4/4), done.
remote: Total 33233 (delta 1), reused 2 (delta 1), pack-reused 33228
Receiving objects: 100% (33233/33233), 9.67 MiB | 14.10 MiB/s, done.
Resolving deltas: 100% (18561/18561), done.
$

You will need to edit the inventory file to reflect the hosts that you wish to install the Kubernetes cluster on. If this is a large production installation, you may want to take a look at dynamic inventories for Ansible.

In my case, I’m planning to install the cluster on three nodes named playground[1-3] so I’ve edited the inventory file to reflect that:

Creating the inventory

$ cat inventory/local/hosts.ini 
playground1 ansible_connection=local local_release_dir={{ansible_env.HOME}}/releases

[kube-master]
playground1

[etcd]
playground1

[kube-node]
playground1
playground2
playground3

[k8s-cluster:children]
kube-node
kube-master
$

Installing the kubespray requirements

Next, make sure that you have installed all of the Python requirements:

$ sudo pip install -r requirements.txt

Customization

You may want to review and possibly change some of the variables in the group_vars directories.

$ find inventory/local/group_vars/
inventory/local/group_vars/
inventory/local/group_vars/all
inventory/local/group_vars/all/all.yml
inventory/local/group_vars/all/azure.yml
inventory/local/group_vars/all/coreos.yml
inventory/local/group_vars/all/docker.yml
inventory/local/group_vars/all/oci.yml
inventory/local/group_vars/all/openstack.yml
inventory/local/group_vars/etcd.yml
inventory/local/group_vars/k8s-cluster
inventory/local/group_vars/k8s-cluster/addons.yml
inventory/local/group_vars/k8s-cluster/k8s-cluster.yml
inventory/local/group_vars/k8s-cluster/k8s-net-calico.yml
inventory/local/group_vars/k8s-cluster/k8s-net-canal.yml
inventory/local/group_vars/k8s-cluster/k8s-net-cilium.yml
inventory/local/group_vars/k8s-cluster/k8s-net-contiv.yml
inventory/local/group_vars/k8s-cluster/k8s-net-flannel.yml
inventory/local/group_vars/k8s-cluster/k8s-net-kube-router.yml
inventory/local/group_vars/k8s-cluster/k8s-net-weave.yml
$ 
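
For example, the cluster-wide settings live in k8s-cluster.yml; variable names can shift between Kubespray releases, but checking and overriding something like the CNI plugin looks roughly like this:

grep -E 'kube_network_plugin:|kube_version:' \
  inventory/local/group_vars/k8s-cluster/k8s-cluster.yml

# e.g. switch the CNI plugin from calico to flannel
sed -i 's/^kube_network_plugin:.*/kube_network_plugin: flannel/' \
  inventory/local/group_vars/k8s-cluster/k8s-cluster.yml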

If you have not used Ansible before: it works over ssh and requires the user running it to have passwordless ssh access to all of the nodes. In the case of Kubespray, this user is root, so you need a strategy to make that work, at least at installation time.
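
One simple approach (assuming root password logins are temporarily allowed on the nodes) is to push your key and then confirm connectivity with Ansible’s ping module:

for host in playground1 playground2 playground3; do
  ssh-copy-id root@${host}
done
ansible -i inventory/local/hosts.ini -u root -m ping all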

Running the install

$ sudo bash
# ansible-playbook -i inventory/local/hosts.ini cluster.yml 

After running the installation, a slow 10 minutes on my test setup, we get to the end of the Ansible run:


PLAY RECAP *********************************************************************************************************************************************
localhost                  : ok=1    changed=0    unreachable=0    failed=0   
playground1                : ok=404  changed=126  unreachable=0    failed=0   
playground2                : ok=253  changed=76   unreachable=0    failed=0   
playground3                : ok=254  changed=76   unreachable=0    failed=0   

Monday 25 February 2019  18:36:02 -0600 (0:00:00.027)       0:10:49.135 ******* 
=============================================================================== 
download : container_download | download images for kubeadm config images ---------------------------------------------------------------------- 39.74s
download : file_download | Download item ------------------------------------------------------------------------------------------------------- 38.43s
kubernetes/master : kubeadm | Initialize first master ------------------------------------------------------------------------------------------ 28.25s
kubernetes/node : install | Copy hyperkube binary from download dir ---------------------------------------------------------------------------- 27.08s
container-engine/docker : ensure docker packages are installed --------------------------------------------------------------------------------- 25.38s
download : file_download | Download item ------------------------------------------------------------------------------------------------------- 19.69s
kubernetes/kubeadm : Join to cluster ----------------------------------------------------------------------------------------------------------- 16.65s
kubernetes/preinstall : Update package management cache (YUM) ---------------------------------------------------------------------------------- 16.56s
download : container_download | Download containers if pull is required or told to always pull (all nodes) ------------------------------------- 11.67s
download : container_download | Download containers if pull is required or told to always pull (all nodes) ------------------------------------- 11.57s
container-engine/docker : Docker | pause while Docker restarts --------------------------------------------------------------------------------- 10.08s
kubernetes/kubeadm : Restart all kube-proxy pods to ensure that they load the new configmap ----------------------------------------------------- 9.44s
kubernetes/node : install | Copy kubelet binary from download dir ------------------------------------------------------------------------------- 9.20s
etcd : Configure | Check if etcd cluster is healthy --------------------------------------------------------------------------------------------- 7.34s
container-engine/docker : Ensure old versions of Docker are not installed. | RedHat ------------------------------------------------------------- 7.20s
download : container_download | Download containers if pull is required or told to always pull (all nodes) -------------------------------------- 7.18s
download : container_download | Download containers if pull is required or told to always pull (all nodes) -------------------------------------- 6.75s
kubernetes/preinstall : Install packages requirements ------------------------------------------------------------------------------------------- 6.58s
download : container_download | Download containers if pull is required or told to always pull (all nodes) -------------------------------------- 6.47s
kubernetes/node : install | Write kubelet systemd init file ------------------------------------------------------------------------------------- 6.44s

Playing with the cluster

# kubectl get nodes
NAME          STATUS   ROLES         AGE     VERSION
playground1   Ready    master,node   4m9s    v1.13.3
playground2   Ready    node          3m43s   v1.13.3
playground3   Ready    node          3m43s   v1.13.3
# kubectl get pods -n kube-system
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-8594b5df7b-26gn4   1/1     Running   0          3m18s
calico-node-424g9                          1/1     Running   0          3m25s
calico-node-8wpfc                          1/1     Running   0          3m25s
calico-node-f4jbh                          1/1     Running   0          3m25s
coredns-6fd7dbf94c-9wtjj                   1/1     Running   0          2m40s
coredns-6fd7dbf94c-ldvcj                   1/1     Running   0          2m44s
dns-autoscaler-5b4847c446-m2df7            1/1     Running   0          2m41s
kube-apiserver-playground1                 1/1     Running   0          4m40s
kube-controller-manager-playground1        1/1     Running   0          4m40s
kube-proxy-ldntn                           1/1     Running   0          3m10s
kube-proxy-skvcq                           1/1     Running   0          3m1s
kube-proxy-tj2bk                           1/1     Running   0          3m21s
kube-scheduler-playground1                 1/1     Running   0          4m40s
kubernetes-dashboard-8457c55f89-pjn9g      1/1     Running   0          2m39s
nginx-proxy-playground2                    1/1     Running   0          4m27s
nginx-proxy-playground3                    1/1     Running   0          4m27s
# 

Kubespray helpfully creates and populates a kubeconfig file with embedded certs for client identity. This is useful, but it should probably be thought of as ‘the root user’ of the Kubernetes cluster.
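
Kubespray drives kubeadm under the hood, so the admin kubeconfig typically ends up in the kubeadm default location on the first master; a sketch of putting it to use (paths may vary by version and configuration):

sudo cp /etc/kubernetes/admin.conf ~/kubespray-admin.conf
sudo chown $(id -u):$(id -g) ~/kubespray-admin.conf
export KUBECONFIG=~/kubespray-admin.conf
kubectl get nodes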

In order to grant additional access to the cluster, now would be a good time to look into identity management and role-based access control.

Adding additional nodes

Adding a node is done by adding it to the inventory and running ansible-playbook with the -l option to limit the run to the new nodes. All of the certificate handling is performed under the hood by Kubespray.
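
At the time of writing Kubespray ships a scale.yml playbook for exactly this; a sketch, assuming a new node named playground4 has been added to the [kube-node] section:

ansible-playbook -i inventory/local/hosts.ini scale.yml -l playground4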

Conclusion

If you are familiar with Ansible, then Kubespray is a great way to go; it encapsulates all of the complexity of orchestrating a cluster into a single command to run.

There are downsides: Kubespray hides a lot of complexity, and ultimately it is useful to understand in detail what is going on. As stated earlier in the post, relatively wide-open access to the internet is also an absolute necessity for Kubespray to work. Of course, this is plain old Ansible, so it is possible to alter Kubespray to work with in-house repositories if needed.

I found Kubespray surprisingly easy to get along with, and would recommend it to anyone wanting to get up and running on bare metal, provided internet access is not an issue.

Evaluation :: Gravity

Gravity is a product from Gravitational that is designed to solve the problem of packaging and deploying a multi-node Kubernetes application. It has native support for the big cloud providers.

One of the advantages of Gravity is the opscenter, a registry to which you can publish your applications. In theory, if hardware or cloud resources are available, you can single-click instantiate a whole cluster running the intended application.

One of the huge advantages of Gravity is that it bundles everything together in one tarball, so you do not need any internet connectivity in order to deploy your cluster and application. This makes it well suited to environments like a DMZ that have little to no internet access.

In this post, I’m going to slightly abuse the use-case for Gravity and use it as a pre-packaged system to deploy a base Kubernetes cluster.

Creating the ‘application’

There is some basic documentation on how to build Gravity applications on the Gravitational site. The basic idea is that you define a Gravity application ‘bundle’, which produces a tarball of Kubernetes in addition to embedding any Docker container images and your installation resources.

The first step is to create a set of manifests that define our application. When done, our directory will look like this:

 $ find .
.
./app.yaml
./install-hook.yaml
./update-hook.yaml
./uninst-hook.yaml

Let’s take a look at each file, one by one.

app.yaml

---
apiVersion: bundle.gravitational.io/v2
kind: Bundle
metadata:
  name: beyondthekube
  resourceVersion: 0.0.1
  description: This is a test app for gravity
  author: beyondthekube.com

installer:
  flavors:
    default: one
    items:
    - name: one
      description: Single node installation
      nodes:
      - profile: node
        count: 1
    - name: three
      description: Three node cluster
      nodes:
      - profile: node
        count: 3

nodeProfiles:
- name: node
  description: worker node

hooks:
  install:
    job: file://install-hook.yaml
  update:
    job: file://update-hook.yaml
  uninstall:
    job: file://uninst-hook.yaml

systemOptions:
  docker:
    storageDriver: overlay2
    args:
    - --log-level=DEBUG

install-hook.yaml

apiVersion: batch/v1
kind: Job
metadata:
  name: install-hook
spec:
  template:
    metadata:
      name: install-hook
    spec:
      restartPolicy: OnFailure
      containers:
      - name: debian-tall
        image: quay.io/gravitational/debian-tall:0.0.1
        command:
        - /usr/local/bin/kubectl
        - apply
        - -f
        - /var/lib/gravity/resources/myapp.yaml

Performing the build

 $ ../tele build app.yaml 
* [1/6] Selecting application runtime
	Will use latest runtime version 5.4.6
* [2/6] Downloading dependencies from s3://hub.gravitational.io
	Still downloading dependencies from s3://hub.gravitational.io (10 seconds elapsed)
	Still downloading dependencies from s3://hub.gravitational.io (20 seconds elapsed)
	Still downloading dependencies from s3://hub.gravitational.io (30 seconds elapsed)
	Still downloading dependencies from s3://hub.gravitational.io (40 seconds elapsed)
	Still downloading dependencies from s3://hub.gravitational.io (50 seconds elapsed)
	Still downloading dependencies from s3://hub.gravitational.io (1 minute elapsed)
	Still downloading dependencies from s3://hub.gravitational.io (1 minute elapsed)
	Still downloading dependencies from s3://hub.gravitational.io (1 minute elapsed)
	Still downloading dependencies from s3://hub.gravitational.io (1 minute elapsed)
	Still downloading dependencies from s3://hub.gravitational.io (1 minute elapsed)
	Still downloading dependencies from s3://hub.gravitational.io (1 minute elapsed)
	Still downloading dependencies from s3://hub.gravitational.io (2 minutes elapsed)
	Still downloading dependencies from s3://hub.gravitational.io (2 minutes elapsed)
* [3/6] Embedding application container images
	Detected application manifest app.yaml
	Detected resource file install-hook.yaml
	Detected resource file uninst-hook.yaml
	Detected resource file update-hook.yaml
	Using local image quay.io/gravitational/debian-tall:0.0.1
	Using local image quay.io/gravitational/debian-tall:0.0.1
	Vendored image gravitational/debian-tall:0.0.1
* [4/6] Using runtime version 5.4.6
* [5/6] Generating the cluster snapshot
	Still generating the cluster snapshot (10 seconds elapsed)
	Still generating the cluster snapshot (20 seconds elapsed)
* [6/6] Saving the snapshot as beyondthekube-0.0.1.tar
	Still saving the snapshot as beyondthekube-0.0.1.tar (10 seconds elapsed)
	Still saving the snapshot as beyondthekube-0.0.1.tar (20 seconds elapsed)
* [6/6] Build completed in 2 minutes 
 $ 

The build produces a tarball as indicated in the output. If we look inside, we can see that the tar includes bundled scripts, binaries and packaged blobs – these are the container images.

 $ tar tvf beyondthekube-0.0.1.tar 
-rwxr-xr-x 1000/1000  64053744 2019-02-14 21:36 gravity
-rw-r--r-- 1000/1000      5364 2019-02-14 21:36 app.yaml
-rwxr-xr-x 1000/1000       907 2019-02-14 21:36 install
-rwxr-xr-x 1000/1000       411 2019-02-14 21:36 upload
-rwxr-xr-x 1000/1000       344 2019-02-14 21:36 upgrade
-rw-r--r-- 1000/1000      1086 2019-02-14 21:36 README
-rw------- beyond/beyond 262144 2019-02-14 21:36 gravity.db
drwxr-xr-x beyond/beyond      0 2019-02-14 21:35 packages
drwxr-xr-x beyond/beyond      0 2019-02-14 21:36 packages/blobs
drwxr-xr-x beyond/beyond      0 2019-02-14 21:35 packages/blobs/128
-rw------- beyond/beyond 443247654 2019-02-14 21:35 packages/blobs/128/128cb957bf304b8ac62f7404dd80b2d9353b7a8b8b94c3d1aefce54d0b749752
drwxr-xr-x beyond/beyond         0 2019-02-14 21:36 packages/blobs/19d
-rw------- beyond/beyond 151150973 2019-02-14 21:36 packages/blobs/19d/19df5d94b336fd5d59d5957122ad3b75f6bb550281593dd30a9ffc2cd4a51984
drwxr-xr-x beyond/beyond         0 2019-02-14 21:35 packages/blobs/1eb
-rw------- beyond/beyond  64053744 2019-02-14 21:35 packages/blobs/1eb/1eb29eaf77d0cf883b9636e7a92c4466bb476ade645821eb7df6a6aff7f62dac
drwxr-xr-x beyond/beyond         0 2019-02-14 21:35 packages/blobs/21a
-rw------- beyond/beyond   5131031 2019-02-14 21:35 packages/blobs/21a/21a2a700c454ed032ddca4c581480e384d172b6c3ad8e592be2a7abb6a07ba69
drwxr-xr-x beyond/beyond         0 2019-02-14 21:36 packages/blobs/2f8
-rw------- beyond/beyond  67647165 2019-02-14 21:36 packages/blobs/2f8/2f8cb1d2724d9d68f684b9a3d552119005e745e343f12b5528996557da9af8a9
drwxr-xr-x beyond/beyond         0 2019-02-14 21:35 packages/blobs/6ad
-rw------- beyond/beyond  56612218 2019-02-14 21:35 packages/blobs/6ad/6adb28689c4c87153c09526123991aa7a241b3aa4ee0677b14c95ff1f5853d9b
drwxr-xr-x beyond/beyond         0 2019-02-14 21:36 packages/blobs/7fa
-rw------- beyond/beyond  23333959 2019-02-14 21:36 packages/blobs/7fa/7fa881d2c9d847e44638354ce3bed1585ac4cc14da0f0a831a8160f35b40e98a
drwxr-xr-x beyond/beyond         0 2019-02-14 21:36 packages/blobs/874
-rw------- beyond/beyond 305392980 2019-02-14 21:36 packages/blobs/874/8740c910a5040fe9f018c88d6c773e5e3eaf7041be495a52fc2aaa19eeb2ba79
drwxr-xr-x beyond/beyond         0 2019-02-14 21:36 packages/blobs/9bc
-rw------- beyond/beyond   5131049 2019-02-14 21:36 packages/blobs/9bc/9bcd1569b28fab6e7e4faa3e1add67e10ed152583de537de3ca6c0e397e380fe
drwxr-xr-x beyond/beyond         0 2019-02-14 21:36 packages/blobs/a6a
-rw------- beyond/beyond   5130375 2019-02-14 21:36 packages/blobs/a6a/a6a499fec0dcee59da5e50595605087b37ab39ced9888d91344da63ea77927cc
drwxr-xr-x beyond/beyond         0 2019-02-14 21:35 packages/blobs/beb
-rw------- beyond/beyond  16082373 2019-02-14 21:35 packages/blobs/beb/beb9f38e50c05cfb45db81f898b927554a3f3aa46df22c5d0134c3bbef414bf7
drwxr-xr-x beyond/beyond         0 2019-02-14 21:36 packages/blobs/d40
-rw------- beyond/beyond  74265843 2019-02-14 21:36 packages/blobs/d40/d401e2bd53fc7d08bd22745d45ba8004199150c1af0ed38c5b53cd0cd0cb3289
drwxr-xr-x beyond/beyond         0 2019-02-14 21:35 packages/blobs/f42
-rw------- beyond/beyond   1258660 2019-02-14 21:35 packages/blobs/f42/f4262bdd8f893444ce43321e753dd8c198ba25974a82e4ed3b722cc2ce08a666
drwxr-xr-x beyond/beyond         0 2019-02-14 21:36 packages/tmp
drwxr-xr-x beyond/beyond         0 2019-02-14 21:35 packages/unpacked
 $ 

Installation

Potential Errors

If you get errors running the install, your underlying OS installation is probably failing the preflight checks. The installer is pretty good about letting you know what is required in the error message:

$ sudo ./gravity install --flavor=one
Sat Feb 16 23:17:23 UTC	Starting installer
Sat Feb 16 23:17:23 UTC	Preparing for installation...
Sat Feb 16 23:18:08 UTC	Installing application beyondthekube:0.0.1
Sat Feb 16 23:18:08 UTC	Starting non-interactive install
Sat Feb 16 17:18:08 UTC	Auto-loaded kernel module: overlay
Sat Feb 16 17:18:08 UTC	Auto-set kernel parameter: net.ipv4.ip_forward=1
[ERROR]: The following pre-flight checks failed:
	[×] open /proc/sys/net/bridge/bridge-nf-call-iptables: no such file or directory (br_netfilter module is either not loaded, or sysctl net.bridge.bridge-nf-call-iptables is not set, see https://www.gravitational.com/docs/faq/#bridge-driver)

One issue with the gravity installer arises in environments where a directory service is the authoritative source of user logins. By default, the gravity installer wants to perform a useradd if the ‘planet’ user and group are not found in their respective local files. The easiest way to work around this (if you have pam_extrausers) is to add the user and group to the respective extrausers files:

$ sudo mkdir -p /var/lib/extrausers
$ sudo getent passwd planet > /var/lib/extrausers/passwd
$ sudo getent group planet > /var/lib/extrausers/group

This is the case even if you change the user and group used to run gravity by passing the --service-uid and --service-gid flags.

Types of installation

When we created the application using the app.yaml manifest above, we defined two flavors, “one” and “three”, which specify single-node and three-node installations respectively. Let’s start with the simple single-node installation:

Single node

 $ mkdir ../inst
 $ cd ../inst
 $ tar xf ../app/beyondthekube-0.0.1.tar 
 $ sudo ./gravity install --flavor=one
Mon Feb 18 04:14:16 UTC	Starting installer
Mon Feb 18 04:14:16 UTC	Preparing for installation...
Mon Feb 18 04:14:46 UTC	Installing application beyondthekube:0.0.1
Mon Feb 18 04:14:46 UTC	Starting non-interactive install
Mon Feb 18 04:14:46 UTC	Still waiting for 1 nodes of role "node"
Mon Feb 18 04:14:47 UTC	All agents have connected!
Mon Feb 18 04:14:48 UTC	Starting the installation
Mon Feb 18 04:14:48 UTC	Operation has been created
Mon Feb 18 04:14:49 UTC	Execute preflight checks
Mon Feb 18 04:14:56 UTC	Configure packages for all nodes
Mon Feb 18 04:15:03 UTC	Bootstrap all nodes
Mon Feb 18 04:15:04 UTC	Bootstrap master node playground1
Mon Feb 18 04:15:11 UTC	Pull packages on master node playground1
Mon Feb 18 04:16:15 UTC	Install system software on master node playground1
Mon Feb 18 04:16:16 UTC	Install system package teleport:2.4.7 on master node playground1
Mon Feb 18 04:16:17 UTC	Install system package planet:5.4.7-11302 on master node playground1
Mon Feb 18 04:16:51 UTC	Wait for system services to start on all nodes
Mon Feb 18 04:17:32 UTC	Bootstrap Kubernetes roles and PSPs
Mon Feb 18 04:17:34 UTC	Populate Docker registry on master node playground1
Mon Feb 18 04:18:19 UTC	Install system application dns-app:0.2.0
Mon Feb 18 04:18:20 UTC	Install system application logging-app:5.0.2
Mon Feb 18 04:18:29 UTC	Install system application monitoring-app:5.2.2
Mon Feb 18 04:18:53 UTC	Install system application tiller-app:5.2.1
Mon Feb 18 04:19:18 UTC	Install system application site:5.4.6
Mon Feb 18 04:20:55 UTC	Install system application kubernetes:5.4.6
Mon Feb 18 04:20:56 UTC	Install application beyondthekube:0.0.1
Mon Feb 18 04:20:59 UTC	Enable elections
Mon Feb 18 04:21:01 UTC	Operation has completed
Mon Feb 18 04:21:01 UTC	Installation succeeded in 6m15.298851759s
 $ 

Multi node

The installation is slightly more complicated to orchestrate with three nodes. In this case, a node called ‘playground1’ will run the installation and the others (playground2 and playground3) will join that installer. A token is used to permit the nodes to join.

[playground1] $ mkdir ../inst
[playground1] $ cd ../inst
[playground1] $ scp ../app/beyondthekube-0.0.1.tar playground2:
[playground1] $ scp ../app/beyondthekube-0.0.1.tar playground3:
[playground1] $ tar xf ../app/beyondthekube-0.0.1.tar 
[playground1] $ sudo ./gravity install --flavor=three --token=multinode
Mon Feb 18 05:09:47 UTC	Starting installer
Mon Feb 18 05:09:47 UTC	Preparing for installation...
Mon Feb 18 05:10:21 UTC	Installing application beyondthekube:0.0.1
Mon Feb 18 05:10:21 UTC	Starting non-interactive install
Sun Feb 17 23:10:21 UTC	Auto-loaded kernel module: br_netfilter
Sun Feb 17 23:10:21 UTC	Auto-loaded kernel module: iptable_nat
Sun Feb 17 23:10:21 UTC	Auto-loaded kernel module: iptable_filter
Sun Feb 17 23:10:21 UTC	Auto-loaded kernel module: ebtables
Sun Feb 17 23:10:21 UTC	Auto-loaded kernel module: overlay
Sun Feb 17 23:10:21 UTC	Auto-set kernel parameter: net.ipv4.ip_forward=1
Sun Feb 17 23:10:21 UTC	Auto-set kernel parameter: net.bridge.bridge-nf-call-iptables=1
Mon Feb 18 05:10:21 UTC	Still waiting for 3 nodes of role "node"
Mon Feb 18 05:10:22 UTC	Still waiting for 1 nodes of role "node"
Mon Feb 18 05:10:23 UTC	Still waiting for 1 nodes of role "node"
Mon Feb 18 05:10:24 UTC	All agents have connected!
Mon Feb 18 05:10:25 UTC	Starting the installation
Mon Feb 18 05:10:25 UTC	Operation has been created
Mon Feb 18 05:10:27 UTC	Execute preflight checks
Mon Feb 18 05:10:57 UTC	Configure packages for all nodes
Mon Feb 18 05:11:15 UTC	Bootstrap all nodes
Mon Feb 18 05:11:17 UTC	Bootstrap master node playground1
Mon Feb 18 05:11:25 UTC	Pull configured packages
Mon Feb 18 05:11:26 UTC	Pull packages on master node playground1
Mon Feb 18 05:15:50 UTC	Install system software on master nodes
Mon Feb 18 05:15:51 UTC	Install system package teleport:2.4.7 on master node playground2
Mon Feb 18 05:15:52 UTC	Install system package teleport:2.4.7 on master node playground3
Mon Feb 18 05:16:00 UTC	Install system package planet:5.4.7-11302 on master node playground3
Mon Feb 18 05:17:58 UTC	Wait for system services to start on all nodes
Mon Feb 18 05:19:25 UTC	Bootstrap Kubernetes roles and PSPs
Mon Feb 18 05:19:28 UTC	Export applications layers to Docker registries
Mon Feb 18 05:19:29 UTC	Populate Docker registry on master node playground1
Mon Feb 18 05:22:20 UTC	Install system applications
Mon Feb 18 05:22:21 UTC	Install system application dns-app:0.2.0
Mon Feb 18 05:22:22 UTC	Install system application logging-app:5.0.2
Mon Feb 18 05:22:43 UTC	Install system application monitoring-app:5.2.2
Mon Feb 18 05:23:31 UTC	Install system application tiller-app:5.2.1
Mon Feb 18 05:24:43 UTC	Install system application site:5.4.6
Mon Feb 18 05:29:56 UTC	Install system application kubernetes:5.4.6
Mon Feb 18 05:29:58 UTC	Install user application
Mon Feb 18 05:29:59 UTC	Install application beyondthekube:0.0.1
Mon Feb 18 05:30:39 UTC	Enable elections
Mon Feb 18 05:30:43 UTC	Operation has completed
Mon Feb 18 05:30:44 UTC	Installation succeeded in 20m22.974575584s
[playground1]$ 

Once the main installer is running, you can join the installation from the other two nodes (I have omitted playground3, as the output is almost identical to playground2).

[playground2] $ mkdir ~/inst
[playground2] $ cd ~/inst
[playground2] $ tar xf ../app/beyondthekube-0.0.1.tar 
[playground2] $ sudo ./gravity join playground1 --token=multinode
Mon Feb 18 05:10:11 UTC	Connecting to cluster
Mon Feb 18 05:10:12 UTC	Connecting to cluster
Mon Feb 18 05:10:12 UTC	Connecting to cluster
Mon Feb 18 05:10:13 UTC	Connecting to cluster
Mon Feb 18 05:10:16 UTC	Connecting to cluster
Mon Feb 18 05:10:18 UTC	Connecting to cluster
Mon Feb 18 05:10:22 UTC	Connecting to cluster
Sun Feb 17 23:10:23 UTC	Auto-loaded kernel module: br_netfilter
Sun Feb 17 23:10:23 UTC	Auto-loaded kernel module: iptable_nat
Sun Feb 17 23:10:23 UTC	Auto-loaded kernel module: iptable_filter
Sun Feb 17 23:10:23 UTC	Auto-loaded kernel module: ebtables
Sun Feb 17 23:10:23 UTC	Auto-loaded kernel module: overlay
Sun Feb 17 23:10:23 UTC	Auto-set kernel parameter: net.ipv4.ip_forward=1
Sun Feb 17 23:10:23 UTC	Auto-set kernel parameter: net.bridge.bridge-nf-call-iptables=1
Mon Feb 18 05:10:23 UTC	Connected to installer at playground1
Mon Feb 18 05:10:24 UTC	Operation has been created
Mon Feb 18 05:10:27 UTC	Execute preflight checks
Mon Feb 18 05:10:57 UTC	Configure packages for all nodes
Mon Feb 18 05:11:15 UTC	Bootstrap all nodes
Mon Feb 18 05:11:17 UTC	Bootstrap master node playground1
Mon Feb 18 05:11:26 UTC	Pull packages on master node playground1
Mon Feb 18 05:15:50 UTC	Install system software on master nodes
Mon Feb 18 05:15:51 UTC	Install system package teleport:2.4.7 on master node playground2
Mon Feb 18 05:15:52 UTC	Install system package teleport:2.4.7 on master node playground3
Mon Feb 18 05:16:01 UTC	Install system package planet:5.4.7-11302 on master node playground3
Mon Feb 18 05:18:03 UTC	Wait for system services to start on all nodes
Mon Feb 18 05:19:25 UTC	Bootstrap Kubernetes roles and PSPs
Mon Feb 18 05:19:28 UTC	Export applications layers to Docker registries
Mon Feb 18 05:19:29 UTC	Populate Docker registry on master node playground1
Mon Feb 18 05:22:20 UTC	Install system applications
Mon Feb 18 05:22:21 UTC	Install system application dns-app:0.2.0
Mon Feb 18 05:22:22 UTC	Install system application logging-app:5.0.2
Mon Feb 18 05:22:43 UTC	Install system application monitoring-app:5.2.2
Mon Feb 18 05:23:31 UTC	Install system application tiller-app:5.2.1
Mon Feb 18 05:24:43 UTC	Install system application site:5.4.6
Mon Feb 18 05:29:57 UTC	Install system application kubernetes:5.4.6
Mon Feb 18 05:29:58 UTC	Install user application
Mon Feb 18 05:29:59 UTC	Install application beyondthekube:0.0.1
Mon Feb 18 05:30:38 UTC	Enable elections
Mon Feb 18 05:30:43 UTC	Operation has completed
Mon Feb 18 05:30:44 UTC	Joined cluster in 20m31.733348076s
[playground2]$ 

What is actually happening above is that playground1 launches an installer and listens for other nodes to join. When we issue the gravity join command on playground2 and playground3, those nodes connect to playground1 using the token we passed in.

Post installation goodness

When the installation is done, you can inspect the gravity environment by running sudo gravity status, or enter the planet environment and inspect Kubernetes directly:

 $ sudo gravity status
Cluster status:	active
Application:	beyondthekube, version 0.0.1
Join token:	c47d9d1329eab6f624116591315b0841aca60f3ed2c6f2fd1a7a02d65111e2b3
Last completed operation:
    * operation_install (33072172-7852-47cc-80cf-e6d2dd10916c)
      started:		Mon Feb 18 04:14 UTC (10 minutes ago)
      completed:	Mon Feb 18 04:14 UTC (10 minutes ago)
Cluster:		awesomedarwin4747
    Masters:
        * playground1 (192.168.1.177, node)
            Status:	healthy
 $ sudo gravity enter
                                                     ___
                                                  ,o88888
                                               ,o8888888'
                         ,:o:o:oooo.        ,8O88Pd8888"
                     ,.::.::o:ooooOoOoO. ,oO8O8Pd888'"
                   ,.:.::o:ooOoOoOO8O8OOo.8OOPd8O8O"
                  , ..:.::o:ooOoOOOO8OOOOo.FdO8O8"
                 , ..:.::o:ooOoOO8O888O8O,COCOO"
                , . ..:.::o:ooOoOOOO8OOOOCOCO"
                 . ..:.::o:ooOoOoOO8O8OCCCC"o
                    . ..:.::o:ooooOoCoCCC"o:o
                    . ..:.::o:o:,cooooCo"oo:o:
                 `   . . ..:.:cocoooo"'o:o:::'
                 .`   . ..::ccccoc"'o:o:o:::'
                :.:.    ,c:cccc"':.:.:.:.:.'
              ..:.:"'`::::c:"'..:.:.:.:.:.'
            ...:.'.:.::::"'    . . . . .'
           .. . ....:."' `   .  . . ''
         . . . ...."'
         .. . ."'     -hrr-
        .
playground1:/$ kubectl get nodes
NAME            STATUS   ROLES    AGE     VERSION
192.168.1.177   Ready    <none>   9m18s   v1.13.2
playground1:/$ kubectl get all
NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.100.0.1   <none>        443/TCP   5m41s

In addition to looking at the default namespace, gravity provides an alias, kctl, which is a quick way to run commands against the kube-system namespace. With that, we can see all of the services that gravity installs by default: local Docker registries for the cluster, monitoring, and tiller, the helm server process.

playground1:/$ type kctl
kctl is aliased to `kubectl -nkube-system'
playground1:/$ kctl get all
NAME                                      READY   STATUS      RESTARTS   AGE
pod/gravity-site-79wc6                    1/1     Running     0          3m
pod/install-hook-8fef86-2w9lq             0/1     Completed   0          2m9s
pod/install-telekube-55c2a4-sp4sz         0/1     Completed   0          3m47s
pod/log-collector-697d94486-7glht         1/1     Running     0          4m38s
pod/log-forwarder-kc222                   1/1     Running     0          4m38s
pod/logging-app-bootstrap-c07491-wdsb2    0/1     Completed   0          4m45s
pod/monitoring-app-install-1d8e1d-7q6wk   0/1     Completed   0          4m37s
pod/site-app-post-install-4359d0-vjn2d    0/1     Completed   2          2m57s
pod/tiller-app-bootstrap-b443e8-7dx9s     0/1     Completed   0          4m12s
pod/tiller-deploy-69c5787759-g9td4        1/1     Running     0          3m49s

NAME                    TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                    AGE
service/gravity-site    LoadBalancer   10.100.197.83   <pending>     3009:32009/TCP             3m
service/log-collector   ClusterIP      10.100.186.31   <none>        514/UDP,514/TCP,8083/TCP   4m38s

NAME                           DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR                      AGE
daemonset.apps/gravity-site    1         1         1       1            1           gravitational.io/k8s-role=master   3m
daemonset.apps/log-forwarder   1         1         1       1            1           <none>                             4m38s

NAME                            READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/log-collector   1/1     1            1           4m38s
deployment.apps/tiller-deploy   1/1     1            1           3m49s

NAME                                       DESIRED   CURRENT   READY   AGE
replicaset.apps/log-collector-697d94486    1         1         1       4m38s
replicaset.apps/tiller-deploy-69c5787759   1         1         1       3m49s

NAME                                      COMPLETIONS   DURATION   AGE
job.batch/install-hook-8fef86             1/1           2s         2m9s
job.batch/install-telekube-55c2a4         1/1           50s        3m47s
job.batch/logging-app-bootstrap-c07491    1/1           7s         4m46s
job.batch/monitoring-app-install-1d8e1d   1/1           23s        4m37s
job.batch/site-app-post-install-4359d0    1/1           45s        2m57s
job.batch/tiller-app-bootstrap-b443e8     1/1           24s        4m13s
playground1:/$ 

Note that in multi-node clusters, there will be three instances of each system service:

playground1:/$ kctl get pods
NAME                                  READY   STATUS      RESTARTS   AGE
gravity-site-c25rk                    1/1     Running     2          17m
gravity-site-gjfvp                    0/1     Running     1          17m
gravity-site-vd8b7                    0/1     Running     2          17m
log-collector-697d94486-l9cx7         1/1     Running     0          20m
log-forwarder-8nqj9                   1/1     Running     0          20m
log-forwarder-cggjs                   1/1     Running     0          20m
log-forwarder-vpd4d                   1/1     Running     0          20m
tiller-deploy-69c5787759-v9vz5        1/1     Running     0          18m

Things to watch out for

Verbose logging

During the installation, the installer writes /var/log/telekube-install.log and /var/log/telekube-system.log. If you choose to watch those logs during installation, it is easy to get the impression that the installation is not working:

2019-02-17T17:52:10-06:00 DEBU             Unsuccessful attempt 1/100: failed to query cluster status from agent, retry in 5s. install/hook.go:56
2019-02-17T17:52:15-06:00 DEBU             Unsuccessful attempt 2/100: failed to query cluster status from agent, retry in 5s. install/hook.go:56
2019-02-17T17:52:20-06:00 DEBU             Unsuccessful attempt 3/100: not all planets have come up yet: &{unknown []}, retry in 5s. install/hook.go:56
2019-02-17T17:52:25-06:00 DEBU             Unsuccessful attempt 4/100: not all planets have come up yet: &{unknown []}, retry in 5s. install/hook.go:56
2019-02-17T17:52:31-06:00 DEBU             Unsuccessful attempt 5/100: not all planets have come up yet: &{unknown []}, retry in 5s. install/hook.go:56
2019-02-17T17:52:36-06:00 DEBU             Unsuccessful attempt 6/100: not all planets have come up yet: &{unknown []}, retry in 5s. install/hook.go:56
2019-02-17T17:52:41-06:00 DEBU [KEYGEN]    generated user key for [root] with expiry on (1550483561) 2019-02-18 03:52:41.2294291 -0600 CST m=+36251.083685033 install/hook.go:56
2019-02-17T17:52:41-06:00 DEBU [AUDITLOG]  EmitAuditEvent(user.login: map[user:opscenter@gravitational.io method:local]) install/hook.go:56
2019-02-17T17:52:41-06:00 DEBU             [TELEPORT] generated certificate for opscenter@gravitational.io install/hook.go:56
2019-02-17T17:52:41-06:00 DEBU             Unsuccessful attempt 7/100: not all planets have come up yet: &{unknown []}, retry in 5s. install/hook.go:56
2019-02-17T17:52:56-06:00 DEBU             Unsuccessful attempt 8/100: not all planets have come up yet: &{unknown []}, retry in 5s. install/hook.go:56
2019-02-17T17:53:02-06:00 DEBU             Unsuccessful attempt 9/100: not all planets have come up yet: &{unknown []}, retry in 5s. install/hook.go:56

The gravity installation logging is very verbose, and these messages are simply the installation waiting for kubernetes services to come up.

Firewall issues

If the installation is not working for you, attempt it without the firewall enabled. I’m not recommending running without a firewall in production, but at least rule it out before considering other issues. In a cloud environment, be sure to check rules built into the cloud provider. On premises, check your network firewalls in addition to the host firewall.

When you have achieved a working installation of gravity without the firewall, add it back in and make sure the recommended ports have been opened.
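
As an example with firewalld on CentOS, re-enabling the firewall and opening the gravity-site port seen in the service listing earlier might look like this; the complete list of required ports is in the Gravity documentation:

sudo systemctl start firewalld
sudo firewall-cmd --permanent --add-port=3009/tcp --add-port=32009/tcp
sudo firewall-cmd --reload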

Conclusion

The Gravity system is a great option for admins that want to entrust the configuration and setup to a third party.

It is a very good option if your objective is to package up your application for a DMZ or to distribute to customers who may not have a good understanding of Kubernetes.

On the less positive side, the installer makes a lot of assumptions about your infrastructure that may be difficult to work around. Further to this, gravity works most seamlessly when you buy into the complete gravitational ecosystem. If you want to run with more esoteric settings or plugins, or you want to run enormous clusters, then gravity may not be the best fit for you.

In the coming weeks, I will be evaluating other methods of on-premises Kubernetes installation, and will eventually post a comparison of all of them.

Stay tuned for that. If you just got your gravity cluster up and running, the identity management series that follows is a good next step.

Identity management Part 3 :: Setting up OIDC authentication

In Part 1 we installed an identity management service: Keycloak. Part 2 showed how to configure Keycloak against AD (or LDAP), with a quickstart option of simply adding a local user.

In this final part we will configure the kube-apiserver to use our identity management (IDM) service as an OIDC provider for Kubernetes.

Setting up Kubernetes

The easiest way to configure the kube-apiserver for any auth is to alter the command line arguments it is started with. How you do this differs depending on how you have installed Kubernetes and how the service is started.

It is likely that there is a systemd unit file responsible for starting the kube-apiserver, probably in /usr/lib/systemd/system or possibly under /etc/systemd/system. Note that if your Kubernetes environment is containerized, as is the case with gravity, then you will first need to enter that environment before hunting for the unit file. In the case of gravity, run sudo gravity enter.
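
As a rough sketch, locating and editing the unit might look like the following. The unit name and paths are assumptions and will vary between distributions and installers:

# for gravity, enter the containerized environment first
sudo gravity enter

# find the unit file that launches the apiserver
grep -rl kube-apiserver /usr/lib/systemd/system /etc/systemd/system 2>/dev/null

# add the --oidc-* flags to the ExecStart line, then reload and restart
sudo systemctl daemon-reload
sudo systemctl restart kube-apiserver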

In the case of minikube, the Kubernetes control plane itself runs on Kubernetes, defined as static pods. This self-referential startup is something we’ll cover in more detail in a future post, but for now all we really need to understand is how to edit the startup flags passed to kube-apiserver.

Because we cannot make persistent changes to the configuration files minikube copies into the virtual machine, or add certificates, it is easiest to run the oidckube.sh start script. I noticed a small bug in that script, so you will need to make the following changes for it to work:

diff --git a/oidckube.sh b/oidckube.sh
index 19b487c..ba09d58 100755
--- a/oidckube.sh
+++ b/oidckube.sh
@@ -64,7 +64,7 @@ init_minikube() {

 inject_keycloak_certs() {
   tar -c -C "$PKI_DIR" keycloak-ca.pem | ssh -t -q -o StrictHostKeyChecking=no \
-    -i "$(minikube ssh-key)" "docker@$(minikube ip)" 'sudo tar -x --no-same-owner -C /var/lib/localkube/certs'
+    -i "$(minikube ssh-key)" "docker@$(minikube ip)" 'sudo tar -x --no-same-owner -C /var/lib/minikube/certs'

 }

@@ -85,7 +85,7 @@ start_minikube() {
     --extra-config=apiserver.oidc-username-prefix="oidc:" \
     --extra-config=apiserver.oidc-groups-claim=groups \
     --extra-config=apiserver.oidc-groups-prefix="oidc:" \
-    --extra-config=apiserver.oidc-ca-file=/var/lib/localkube/certs/keycloak-ca.pem
+    --extra-config=apiserver.oidc-ca-file=/var/lib/minikube/certs/keycloak-ca.pem
 }

 main() {

You can see that in the inject_keycloak_certs function the script injects the keycloak-ca.pem file. In a real production setup, this would probably be a certificate from a real CA, or the root CA of the organization.
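
If you are not sure which CA signed the certificate your Keycloak instance is serving, a quick way to check is to inspect the presented chain. This assumes the keycloak.devlocal hostname used throughout this series and HTTPS on the default port:

$ openssl s_client -connect keycloak.devlocal:443 -showcerts </dev/null 2>/dev/null \
    | openssl x509 -noout -subject -issuer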

The oidckube script (with the fix above applied) also runs minikube with a bunch of arguments to ensure that the Keycloak server is used for authentication:

$ minikube start \
--extra-config=apiserver.oidc-client-id=kubernetes \
--extra-config=apiserver.oidc-username-claim=email \
--extra-config=apiserver.oidc-username-prefix=oidc: \
--extra-config=apiserver.oidc-groups-claim=groups \
--extra-config=apiserver.oidc-groups-prefix=oidc: \
--extra-config=apiserver.oidc-issuer-url=https://keycloak.devlocal/auth/realms/master \
--extra-config=apiserver.oidc-ca-file=/var/lib/minikube/certs/keycloak-ca.pem

The options to the --extra-config flags mimic the command line flags that should be added to the kube-apiserver binary. In order to mimic this minikube setup in a larger cluster, you need to edit the systemd unit file or the static pod definition, depending on how your cluster starts the apiserver.

--oidc-client-id=kubernetes
--oidc-username-claim=email
--oidc-username-prefix=oidc:
--oidc-groups-claim=groups
--oidc-groups-prefix=oidc:
--oidc-issuer-url=https://keycloak.devlocal/auth/realms/master
--oidc-ca-file=/path/to/your/rootca.pem # optional if the CA is public
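
For a cluster that runs the apiserver as a static pod, the same flags simply become extra entries in the container command in the manifest. The path and surrounding fields below are illustrative; kubeadm-style clusters typically keep it at /etc/kubernetes/manifests/kube-apiserver.yaml:

spec:
  containers:
  - name: kube-apiserver
    command:
    - kube-apiserver
    # ...existing flags...
    - --oidc-client-id=kubernetes
    - --oidc-username-claim=email
    - "--oidc-username-prefix=oidc:"
    - --oidc-groups-claim=groups
    - "--oidc-groups-prefix=oidc:"
    - --oidc-issuer-url=https://keycloak.devlocal/auth/realms/master
    - --oidc-ca-file=/path/to/your/rootca.pem   # optional if the CA is public

The kubelet watches the manifest directory, so saving the file is enough to restart the apiserver with the new flags.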

Once you are satisfied that the new arguments are being used by your kube-apiserver process, it’s time to test out the new auth.
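
A quick way to confirm the flags actually made it onto the running process (run on the node, or inside the gravity environment, that hosts the apiserver):

$ ps -ef | grep '[k]ube-apiserver' | tr ' ' '\n' | grep -- '--oidc'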

Testing it out

With everything configured on the server side, it’s time to make sure things are working on the client end. There are a few different assets that need to be in place in the configuration for kubectl.

The first thing to consider is how to manage updating the kubeconfig.

Some people want to manage one single kubectl configuration, with different contexts driving the parameters needed for different clusters. Others like to use the KUBECONFIG environment variable to maintain separate files in different locations. In this post we’re going to use the more self-contained KUBECONFIG approach, but I recommend using contexts in general and will cover that in a future post.
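
For reference, the two approaches look like this. The file name matches the one used later in this post; the context name is just an example:

# one file per cluster
export KUBECONFIG=~/.kube/oidcconfig
kubectl get nodes

# or one file, many contexts
kubectl config use-context minikube-oidc
kubectl get nodes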

In addition to the new file, I would recommend the use of a tool like kubelogin (go get github.com/int128/kubelogin) to assist with the handoff between Keycloak and kubectl. The tool starts a local server and opens a browser against it. The user authenticates to Keycloak and, if successful, the kubelogin server receives the callback and updates the KUBECONFIG file with the JWT tokens.

You probably have a working configuration for the cluster already, either minikube or some other Kubernetes installation. Either way, you will probably already have a kubeconfig in ~/.kube/config. To get a quick start, just copy ~/.kube/config to ~/.kube/oidcconfig and remove the client-certificate: and client-key: fields completely.

Once built, kubelogin is available in $GOPATH/bin (typically ~/go/bin/kubelogin), but you can move it wherever you want.

Now that we have the kubelogin tool available, just point kubectl at the new config and run kubelogin. It will guide you through the process from there:

$ cp ~/.kube/config ~/.kube/oidcconfig
$ export KUBECONFIG=~/.kube/oidcconfig
$ kubectl config set-credentials minikube \
   --auth-provider oidc \
   --auth-provider-arg client-id=kubernetes \
   --auth-provider-arg idp-issuer-url=https://keycloak.devlocal/auth/realms/master
$ ~/go/bin/kubelogin

When you have logged into keycloak via the web browser, kubelogin will exit and update your kubectl configuration.

---
apiVersion: v1
clusters:
- cluster:
    certificate-authority: ca.crt
    server: https://192.168.99.102:8443
  name: minikube
contexts:
- context:
    cluster: minikube
    user: minikube
  name: minikube
current-context: minikube
kind: Config
preferences: {}
users:
- name: minikube
  user:
    auth-provider:
      config:
        client-id: kubernetes
        id-token: eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA6ICJCbGtiVXFVSkdRV09fWEFNeXhDUm84Q01iVy1GQ1FzeXVSLTBhUUZiaUJ3In0.eyJqdGkiOiI1NzY0NGNiOC1jOWM3LTQ0MTMtYThiZi0wYTRjM2EyZGIxNTQiLCJleHAiOjE1NDk4NDAxMjAsIm5iZiI6MCwiaWF0IjoxNTQ5ODQwMDYwLCJpc3MiOiJodHRwczovL2tleWNsb2FrLmRldmxvY2FsL2F1dGgvcmVhbG1zL21hc3RlciIsImF1ZCI6Imt1YmVybmV0ZXMiLCJzdWIiOiIyY2JkNmIzYi1hMWE4LTQxZGYtYTE3ZC03NTYzYzhiZDE1MGEiLCJ0eXAiOiJJRCIsImF6cCI6Imt1YmVybmV0ZXMiLCJhdXRoX3RpbWUiOjE1NDk4NDAwNTYsInNlc3Npb25fc3RhdGUiOiI2ZjJiM2UwZS1hNzY0LTQxODktYjliNi1kMzBhZTRhMWYzYTQiLCJhY3IiOiIxIiwibmFtZSI6InRlc3QgdXNlciIsInByZWZlcnJlZF91c2VybmFtZSI6InRlc3R1c2VyIiwiZ2l2ZW5fbmFtZSI6InRlc3QiLCJmYW1pbHlfbmFtZSI6InVzZXIiLCJlbWFpbCI6InRlc3R1c2VyQGJleW9uZHRoZWt1YmUuY29tIn0.Y3y0GEPsPHaibRNzp0AQ-pAV-b8K5m9rwGe512QKCHINrMu5jrfe1XCnl5qry6coUPSG_nNwDOB8WxFNqW-lTp_rbZ_auz2xy93L2bs1Pb_3QGwHFRXfAP_AZJagCSp8JC3mpopHRvsnuxi4yR4hqNln85a62jBshK-9QEpgR9mRUcSs2PdOicrPvqP0hMMHzOTYsEcsk-YaGxhPqDQJTHzuCa_8fgx6OG2vycG392Vrr1p5RhUg3lUmTv7nYOHnqkhZQLWCDkcndWt8sBiGVOkyJeDsuy0d8QNLP-9Z-BGip-RiZdmpaA4E2LmKO6CK54eo1i2zRSgN1Odm0316cg
        idp-issuer-url: https://keycloak.devlocal/auth/realms/master
        refresh-token: eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA6ICJCbGtiVXFVSkdRV09fWEFNeXhDUm84Q01iVy1GQ1FzeXVSLTBhUUZiaUJ3In0.eyJqdGkiOiJmMzE5YWFmZC03ZWY5LTQ0OWQtYTljNS1jYjc5YjRhMmRlMzIiLCJleHAiOjE1NDk4NDE4NjAsIm5iZiI6MCwiaWF0IjoxNTQ5ODQwMDYwLCJpc3MiOiJodHRwczovL2tleWNsb2FrLmRldmxvY2FsL2F1dGgvcmVhbG1zL21hc3RlciIsImF1ZCI6Imt1YmVybmV0ZXMiLCJzdWIiOiIyY2JkNmIzYi1hMWE4LTQxZGYtYTE3ZC03NTYzYzhiZDE1MGEiLCJ0eXAiOiJSZWZyZXNoIiwiYXpwIjoia3ViZXJuZXRlcyIsImF1dGhfdGltZSI6MCwic2Vzc2lvbl9zdGF0ZSI6IjZmMmIzZTBlLWE3NjQtNDE4OS1iOWI2LWQzMGFlNGExZjNhNCIsInJlYWxtX2FjY2VzcyI6eyJyb2xlcyI6WyJ1bWFfYXV0aG9yaXphdGlvbiJdfSwicmVzb3VyY2VfYWNjZXNzIjp7ImFjY291bnQiOnsicm9sZXMiOlsibWFuYWdlLWFjY291bnQiLCJtYW5hZ2UtYWNjb3VudC1saW5rcyIsInZpZXctcHJvZmlsZSJdfX19.Fb8_P_UVGDsR9hPYm6pbHbU3AWavS9DLjOxDUdTsEPn2gcLZi0e42GbJCFZMz3sLliTBizxFXGAbaR4c6jS3wO5mZgIIa-ek9OgX1Qo7jsI3w0NegFXdFJHG2HgXr_T8gYcHFkG3FP7j7qi8z52GKfA1T1M_Ki97ovUfLJGu0CxfnCFNpXz3xfIj8tmuV_QZc9_s9jVgXyaQfyq0QYyNbMCtgFG1AZkv70ycUoJ6EtB3R7HOGUPd5MQjtih7GIal8E3U9PS5yp_DWSBig10T-wYKiViEiILxoXO90CF2n-4v8Q3P6YZ6HL-CtjHK4XkHi2jHEJGFqxeeXXBTgVUAUA
      name: oidc

If you take the content of the id-token field and paste it into https://jwt.io you can decode the token into human-readable form:

{
  "jti": "57644cb8-c9c7-4413-a8bf-0a4c3a2db154",
  "exp": 1549840120,
  "nbf": 0,
  "iat": 1549840060,
  "iss": "https://keycloak.devlocal/auth/realms/master",
  "aud": "kubernetes",
  "sub": "2cbd6b3b-a1a8-41df-a17d-7563c8bd150a",
  "typ": "ID",
  "azp": "kubernetes",
  "auth_time": 1549840056,
  "session_state": "6f2b3e0e-a764-4189-b9b6-d30ae4a1f3a4",
  "acr": "1",
  "name": "test user",
  "preferred_username": "testuser",
  "given_name": "test",
  "family_name": "user",
  "email": "testuser@beyondthekube.com"
}
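
If you would rather stay on the command line, the payload is just base64url-encoded JSON. Here is a rough equivalent, assuming the user entry is named minikube as in the config above (jq is optional, and you may need to fix up base64 padding by hand):

$ TOKEN=$(kubectl config view --raw -o jsonpath='{.users[?(@.name=="minikube")].user.auth-provider.config.id-token}')
$ echo "$TOKEN" | cut -d. -f2 | tr '_-' '/+' | base64 -d 2>/dev/null | jq .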

Finally, now that we have inspected our id_token, we can offer it up to the kube-apiserver to see if it knows who we are:

$  kubectl get pods
Error from server (Forbidden): pods is forbidden: User "oidc:testuser@beyondthekube.com" cannot list resource "pods" in API group "" in the namespace "default"
$

Although the outcome here is unexciting – we’re not actually allowed to interact with the cluster – this does show the user prefixing (oidc:) working, in addition to the kube-apiserver recognizing the username.
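
As a small taste of what comes next, authorization is handled with RBAC. A minimal, hypothetical binding that would let this exact user list pods in the default namespace might look like this:

---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: oidc-testuser-view
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: "oidc:testuser@beyondthekube.com"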

Stay tuned for more on how to authorize users to do stuff with the cluster; I’m planning to cover a few different options for that.

Identity management Part 2 :: Configuring Keycloak

Back in part 1, we installed Keycloak on top of Kubernetes. Now we want to configure it to generate OIDC tokens based on our (hopefully) existing authentication backend.

In this example, I’m going to use Active Directory, but the setup is similar for plain LDAP, and Keycloak also supports most cloud identity providers, plain SAML, and so on.

No user identity system?

If you are following along at home, and do not have an existing identity provider like Active Directory, then simply skip the ‘Configuring AD’ section and add a user directly in Keycloak. You can do this by clicking Manage->Users, then ‘Add user’. Make sure you turn ‘Email Verified’ on.

Once the user is created, you can impersonate them and set a password in the Keycloak admin console. See the keycloak documentation for more details.
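
If you prefer the command line, Keycloak also ships an admin CLI (kcadm.sh) inside the container that can do the same thing. The flags below are a sketch and may vary slightly between Keycloak versions; the password value is just an example:

# run from inside the keycloak pod/container
kcadm.sh config credentials --server http://localhost:8080/auth --realm master \
  --user keycloak --password keycloak
kcadm.sh create users -r master -s username=testuser \
  -s email=testuser@beyondthekube.com -s emailVerified=true -s enabled=true
kcadm.sh set-password -r master --username testuser --new-password changeme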

Now skip ahead to ‘Configuring OIDC’.

Configuring AD

Configuring Keycloak to federate AD users requires logging into the admin console and clicking on ‘User Federation’.

User federation options

Select LDAP as your provider.

You will need to have your AD settings to hand, and these will differ depending on your setup. Most of these parameters are standard to any AD integration, so you should be able to look them up or ask an AD administrator for them.

When you have entered all of the required values and tested the connection, you can save this setup.

Configuring OIDC

Add an OIDC client configuration

Start by clicking ‘Create’ to add a new client

Client list

Give your client a name (it can be anything you want), then click ‘Save’.

Force enable verified email

Kubernetes will refuse to work with any identity for which the email is not verified, so we now need to create a mapping to force that. We can do this because we presumably trust that the emails contained within our Active Directory are real.

From the clients list, click the name of your newly-created client. Now click the ‘Mappers’ tab, and finally ‘Create’.

Mappers of the client

Enter the values as below, and click ‘Save’

That’s it! In Part 3 we will tie this all back to the kube-apiserver and test authentication.

Identity management Part 1 :: Installing Keycloak

One of the first challenges associated with owning the control plane of Kubernetes is that you are responsible for authz and authn.

While there are some great resources for getting set up with some form of identity management, ultimately the correct solution will depend on your existing on-premises setup. If you already have something like Active Directory or LDAP configured, then it probably makes sense to integrate with those.

Thankfully, Kubernetes provides a rich set of possibilities for authentication; from standards like OIDC right through to just sharing a token to figure out who the current user is.

Objective

In this post, I’ll demonstrate how to set up OIDC authentication with the kube-apiserver, which will allow user identity to be established. If you’re lucky, your enterprise will already have an OIDC provider such as Okta or the Google IAM services. Even if you’re starting from scratch, it’s relatively simple to set up a bridge to LDAP or AD.

In order to follow along, you’ll need a Kubernetes cluster on which you are able to run kubectl.

The procedure for getting keycloak up and running on top of kubernetes is relatively well documented elsewhere, so I’ll assume you’re able to get that up and running. If you’re using minikube for this, then I would recommend this project.

Setting up access to the service

Once you have a Keycloak pod running, you’ll want to navigate to the Keycloak web UI. Given that we have no solution for ingress yet, our best option is to expose the service on a NodePort.

This is achieved by creating a second service definition alongside the existing one. The default keycloak service definition will be something like this:

---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: keycloak
    component: keycloak
  name: keycloak
  namespace: default
spec:
  clusterIP: None
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: http
  selector:
    app: keycloak
    component: keycloak
  type: ClusterIP

A NodePort cannot be added to this headless ClusterIP service, but we can take advantage of the Kubernetes service abstraction and simply create a new service for our NodePort. We will start by grabbing the existing one:

$ kubectl get svc keycloak -o yaml > keycloak-nodeport.yaml

Now we can simply edit it to look like this (notice how similar the two services look):

---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: keycloak
    component: keycloak
  name: keycloak-nodeport
  namespace: default
spec:
  ports:
  - name: http
    port: 80
    nodePort: 32080
    protocol: TCP
    targetPort: http
  selector:
    app: keycloak
    component: keycloak
  sessionAffinity: None
  type: NodePort

In fact, the diff is tiny:

7c7
<   name: keycloak
---
>   name: keycloak-nodeport
10d9
<   clusterIP: None
13a13
>     nodePort: 32080
20c20
<   type: ClusterIP
---
>   type: NodePort
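
Apply the new service and check that the NodePort has been allocated:

$ kubectl apply -f keycloak-nodeport.yaml
$ kubectl get svc keycloak-nodeport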

Accessing the Keycloak service

If you used the minikube and oidckube script, then you should have keycloak available at https://keycloak.devlocal. You should see this page, and be able to click through to the Administration Console.

Keycloak landing page
Keycloak admin console login page

The username is ‘keycloak’ and the password is ‘keycloak.’ The password is also stored, base64-encoded like any Kubernetes secret, in a secret resource:

$  kubectl get secret  keycloak-admin-user -o yaml
 apiVersion: v1
 data:
   password: THIS_IS_THE_PASSWORD_HASH
 kind: Secret
 metadata:
   labels:
     app: keycloak
     component: keycloak
   name: keycloak-admin-user
   namespace: default
 type: Opaque
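
To recover the plain-text value from the secret (a handy trick for any Kubernetes secret):

$ kubectl get secret keycloak-admin-user -o jsonpath='{.data.password}' | base64 -d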

So we now have Keycloak installed and running. Stick around for Part 2.

Installing Kubernetes on-prem

There are several methods of getting Kubernetes installed on physical tin, or just on top of your favorite OS. Many developers will probably have some familiarity with minikube. That is a great self-contained quick start configuration for hacking on your laptop.

That’s not really what we want here. We want a fully featured multi-node cluster installed on physical hosts. There are a number of ways to achieve that, and they vary in terms of flexibility versus convenience, vendor tie-in, and so on.

Over the coming weeks, I’ll be providing more in-depth evaluations of some of these implementations. For now here are the resources:

Manual and semi-manual

Kubernetes the hard way is a really complete introduction to getting Kubernetes up and running entirely manually. It’s a great starting point if you want to understand the dirty details of how the components hang together.

kubespray is an ansible repo that does the heavy lifting for you. You simply need to provide your ansible inventory and away you go.
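
As a minimal sketch of what a kubespray run looks like (the inventory file names have changed between releases, so adjust to match your checkout):

git clone https://github.com/kubernetes-sigs/kubespray.git && cd kubespray
cp -r inventory/sample inventory/mycluster
# edit inventory/mycluster/hosts.yaml (or hosts.ini) to list your nodes
ansible-playbook -i inventory/mycluster/hosts.yaml --become cluster.yml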

kops is a neat little CLI for setting up clusters and managing them throughout their lifecycle.

Packaged

Gravity is an installer and containerized Kubernetes offering from Gravitational. The stack includes helpful built-in components like Tiller. In addition, it provides a full cluster management service (as part of the paid-for enterprise version).

Rancher is also a pre-packaged Kubernetes offering from the company of the same name.

OpenShift is Red Hat’s Kubernetes offering.

On-prem Kubernetes is a fast-growing area, and I’m sure I have not captured all of the available offerings. If you feel I’ve missed an important one that you’d like to see evaluated, leave a comment and I’ll add it to the mix.