Introduction
Kubespray is different from the recently evaluated Gravity deploy: it uses Ansible to drive deployment on the base system, rather than attempting to package all of the Kubernetes components into a container.
As will quickly become obvious, Kubespray makes extensive use of connectivity to the internet in order to download and configure many of the required assets and command line tools. This is not always workable in an on-premises environment.
Installation
In order to get started, clone the official kubespray repo:
$ git clone https://github.com/kubernetes-sigs/kubespray.git
Cloning into 'kubespray'...
remote: Enumerating objects: 5, done.
remote: Counting objects: 100% (5/5), done.
remote: Compressing objects: 100% (4/4), done.
remote: Total 33233 (delta 1), reused 2 (delta 1), pack-reused 33228
Receiving objects: 100% (33233/33233), 9.67 MiB | 14.10 MiB/s, done.
Resolving deltas: 100% (18561/18561), done.
$
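The master branch moves quickly, so you may prefer to pin to a release tag rather than build from whatever happens to be on master. The tag below is purely illustrative; list what is available and pick a recent release:

$ cd kubespray
$ git tag --list 'v*' | tail -3
$ git checkout v2.8.2    # example tag only; pick a current release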
You will need to edit the inventory file to reflect the hosts that you wish to install the Kubernetes cluster on. If this is a large production installation, you may want to take a look at dynamic inventories for Ansible.
In my case, I’m planning to install the cluster on three nodes named playground[1-3] so I’ve edited the inventory file to reflect that:
Creating the inventory
$ cat inventory/local/hosts.ini
playground1 ansible_connection=local local_release_dir={{ansible_env.HOME}}/releases

[kube-master]
playground1

[etcd]
playground1

[kube-node]
playground1
playground2
playground3

[k8s-cluster:children]
kube-node
kube-master
$
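If you would rather not hand-edit the file, the repository also ships a small inventory builder under contrib/ that generates a hosts file from a list of IP addresses. A minimal sketch, assuming the script still lives at this path in the version you cloned (the IPs and directory name here are made up):

$ cp -rfp inventory/sample inventory/mycluster
$ declare -a IPS=(192.168.1.21 192.168.1.22 192.168.1.23)
$ CONFIG_FILE=inventory/mycluster/hosts.ini python3 contrib/inventory_builder/inventory.py ${IPS[@]}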
Installing the kubespray requirements
Next, make sure that you have all of the required Python dependencies installed:
$ sudo pip install -r requirements.txt
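If you would rather keep these packages out of the system Python, a virtualenv works just as well; a quick sketch:

$ python3 -m venv ~/.venvs/kubespray
$ source ~/.venvs/kubespray/bin/activate
$ pip install -r requirements.txt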
Customization
You may want to review and possibly change some of the variables in the group_vars directories.
$ find inventory/local/group_vars/
inventory/local/group_vars/
inventory/local/group_vars/all
inventory/local/group_vars/all/all.yml
inventory/local/group_vars/all/azure.yml
inventory/local/group_vars/all/coreos.yml
inventory/local/group_vars/all/docker.yml
inventory/local/group_vars/all/oci.yml
inventory/local/group_vars/all/openstack.yml
inventory/local/group_vars/etcd.yml
inventory/local/group_vars/k8s-cluster
inventory/local/group_vars/k8s-cluster/addons.yml
inventory/local/group_vars/k8s-cluster/k8s-cluster.yml
inventory/local/group_vars/k8s-cluster/k8s-net-calico.yml
inventory/local/group_vars/k8s-cluster/k8s-net-canal.yml
inventory/local/group_vars/k8s-cluster/k8s-net-cilium.yml
inventory/local/group_vars/k8s-cluster/k8s-net-contiv.yml
inventory/local/group_vars/k8s-cluster/k8s-net-flannel.yml
inventory/local/group_vars/k8s-cluster/k8s-net-kube-router.yml
inventory/local/group_vars/k8s-cluster/k8s-net-weave.yml
$
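For example, the choice of network plugin and the cluster CIDRs live in k8s-cluster.yml. The excerpt below is only a sketch of the kind of override you might make (switching from the default Calico to flannel); verify the variable names and defaults against your own checkout:

# inventory/local/group_vars/k8s-cluster/k8s-cluster.yml (excerpt)
kube_network_plugin: flannel        # default is calico
kube_service_addresses: 10.233.0.0/18
kube_pods_subnet: 10.233.64.0/18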
If you have not used Ansible before: it works over SSH, and requires the user running it to have passwordless SSH access to all of the nodes. In the case of Kubespray, this user is root, so you need a strategy to make that work, at least at installation time.
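A minimal sketch of one way to arrange that, assuming you are willing to distribute an SSH key to root on each node and then verify connectivity with Ansible's ping module:

$ ssh-keygen -t ed25519
$ for h in playground1 playground2 playground3; do ssh-copy-id root@$h; done
$ ansible -i inventory/local/hosts.ini all -m ping -u root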
Running the install
$ sudo bash
# ansible-playbook -i inventory/local/hosts.ini cluster.yml
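If you prefer not to work from a root shell, the same run can be driven with Ansible's privilege escalation instead, assuming the connecting user is able to sudo on the nodes:

$ ansible-playbook -i inventory/local/hosts.ini cluster.yml -b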
After running the installation (a slow 10 minutes on my test setup), we get to the end of the Ansible run:
PLAY RECAP *********************************************************************
localhost                  : ok=1    changed=0    unreachable=0    failed=0
playground1                : ok=404  changed=126  unreachable=0    failed=0
playground2                : ok=253  changed=76   unreachable=0    failed=0
playground3                : ok=254  changed=76   unreachable=0    failed=0

Monday 25 February 2019  18:36:02 -0600 (0:00:00.027)       0:10:49.135 *******
===============================================================================
download : container_download | download images for kubeadm config images ---- 39.74s
download : file_download | Download item -------------------------------------- 38.43s
kubernetes/master : kubeadm | Initialize first master ------------------------- 28.25s
kubernetes/node : install | Copy hyperkube binary from download dir ----------- 27.08s
container-engine/docker : ensure docker packages are installed ---------------- 25.38s
download : file_download | Download item -------------------------------------- 19.69s
kubernetes/kubeadm : Join to cluster ------------------------------------------ 16.65s
kubernetes/preinstall : Update package management cache (YUM) ----------------- 16.56s
download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 11.67s
download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 11.57s
container-engine/docker : Docker | pause while Docker restarts ---------------- 10.08s
kubernetes/kubeadm : Restart all kube-proxy pods to ensure that they load the new configmap -- 9.44s
kubernetes/node : install | Copy kubelet binary from download dir -------------- 9.20s
etcd : Configure | Check if etcd cluster is healthy ---------------------------- 7.34s
container-engine/docker : Ensure old versions of Docker are not installed. | RedHat -- 7.20s
download : container_download | Download containers if pull is required or told to always pull (all nodes) --- 7.18s
download : container_download | Download containers if pull is required or told to always pull (all nodes) --- 6.75s
kubernetes/preinstall : Install packages requirements -------------------------- 6.58s
download : container_download | Download containers if pull is required or told to always pull (all nodes) --- 6.47s
kubernetes/node : install | Write kubelet systemd init file -------------------- 6.44s
Playing with the cluster
# kubectl get nodes
NAME          STATUS   ROLES         AGE     VERSION
playground1   Ready    master,node   4m9s    v1.13.3
playground2   Ready    node          3m43s   v1.13.3
playground3   Ready    node          3m43s   v1.13.3
# kubectl get pods -n kube-system
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-8594b5df7b-26gn4   1/1     Running   0          3m18s
calico-node-424g9                          1/1     Running   0          3m25s
calico-node-8wpfc                          1/1     Running   0          3m25s
calico-node-f4jbh                          1/1     Running   0          3m25s
coredns-6fd7dbf94c-9wtjj                   1/1     Running   0          2m40s
coredns-6fd7dbf94c-ldvcj                   1/1     Running   0          2m44s
dns-autoscaler-5b4847c446-m2df7            1/1     Running   0          2m41s
kube-apiserver-playground1                 1/1     Running   0          4m40s
kube-controller-manager-playground1        1/1     Running   0          4m40s
kube-proxy-ldntn                           1/1     Running   0          3m10s
kube-proxy-skvcq                           1/1     Running   0          3m1s
kube-proxy-tj2bk                           1/1     Running   0          3m21s
kube-scheduler-playground1                 1/1     Running   0          4m40s
kubernetes-dashboard-8457c55f89-pjn9g      1/1     Running   0          2m39s
nginx-proxy-playground2                    1/1     Running   0          4m27s
nginx-proxy-playground3                    1/1     Running   0          4m27s
#
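As a quick smoke test that scheduling, pod networking and DNS are all wired up, a throwaway deployment is enough; the names here are arbitrary:

# kubectl create deployment hello --image=nginx
# kubectl expose deployment hello --port=80
# kubectl run -it --rm test --image=busybox --restart=Never -- wget -qO- http://hello
# kubectl delete deployment,service hello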
Kubespray helpfully creates and populates a Kubeconfig file with embedded certs for client identity. This is useful, but should probably be thought of as ‘the root user’ of the Kubernetes cluster.
In order to grant additional access to the cluster, now would be a good time to look into identity management and role-based access control (RBAC).
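As a taste of what that looks like, binding a user to the built-in read-only 'view' ClusterRole takes only a few lines of YAML. The user name here is invented for illustration, and how that user actually authenticates (certificates, OIDC, etc.) is a separate problem:

# view-binding.yml – grant cluster-wide read-only access to user "alice"
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: alice-view
subjects:
- kind: User
  name: alice
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: view
  apiGroup: rbac.authorization.k8s.io

Apply it with kubectl apply -f view-binding.yml using the admin kubeconfig.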
Adding additional nodes
Adding a node is done by adding it to the inventory and running ansible-playbook with the -l option to limit the run to the new nodes; a sketch follows below. All of the certificate handling is performed under the hood by Kubespray.
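Here is that flow with a hypothetical fourth node. Recent Kubespray versions also ship a scale.yml playbook specifically for adding worker nodes; check the docs for the version you are running before relying on it:

# inventory/local/hosts.ini – add the new node under [kube-node]
[kube-node]
playground1
playground2
playground3
playground4

$ ansible-playbook -i inventory/local/hosts.ini scale.yml -l playground4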
Conclusion
If you are familiar with Ansible, then Kubespray is a great way to go; it encapsulates all of the complexity of orchestrating a cluster into a single command to run.
There are downsides: Kubespray is hiding a lot of complexity here, and ultimately it is useful to understand in detail what is going on. As stated earlier in the post, it’s also an absolute necessity to have relatively wide-open access to the internet for Kubespray to work. Of course, this is plain old Ansible, so it is possible to alter Kubespray to work with in-house repos if needed.
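As a flavour of what that involves, most of the download locations are ordinary variables that can be overridden in group_vars. The variable names below are what I would expect to find in roles/download/defaults, and the mirror hosts are invented; treat the whole thing as a sketch and verify against your checkout:

# inventory/local/group_vars/all/all.yml (illustrative overrides only)
kube_image_repo: "registry.example.internal/google-containers"
kubeadm_download_url: "https://mirror.example.internal/kubernetes-release/release/{{ kube_version }}/bin/linux/amd64/kubeadm"
hyperkube_download_url: "https://mirror.example.internal/kubernetes-release/release/{{ kube_version }}/bin/linux/amd64/hyperkube"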
I found Kubespray surprisingly easy to get along with, and would recommend it to anyone wanting to get up and running on bare metal, provided internet access is not an issue.