In this guide we will install and configure a multi-node Kubernetes environment for deploying microservices on your own infrastructure.
Kubernetes is a highly efficient system for orchestrating containers. This stack is pre-loaded with a standard Kubernetes installation, installed directly from Google’s latest .debs.
From the management interface of the stack you can make IPv4 addresses available for your Kubernetes services – these are provided using the excellent bare-metal load balancer MetalLB.
The stack also makes persistent storage available to your pods. Two storage classes are provided – NFS storage using the storage pools configured in your Origo OS installation, and local storage using a dedicated volume (backed by a qcow2 image) on each of the Kubernetes nodes (which are of course Origo OS servers). In the management interface you can specify which of the two is the default storage class.
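If you prefer the command line, you can also list the storage classes and see which one is currently the default from the control plane server, and – assuming standard Kubernetes behaviour – change the default with the usual annotation (“nfs-storage” below is only a placeholder; use the class names your cluster actually reports):

sudo kubectl get storageclass
# mark a class as default (the management interface does the equivalent for you)
sudo kubectl patch storageclass nfs-storage -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'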
The stack installs one worker node in addition to the control plane node. From the management interface you can add more nodes by adding servers to your stack – they join the cluster automatically. You can also delete servers/nodes, but remember to drain them first, and then, after removing the virtual server, remove them from your cluster.
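A minimal sketch of the drain-and-remove sequence, run from the control plane server (“worker2” is just a placeholder for the node name as reported by kubectl get nodes):

sudo kubectl drain worker2 --ignore-daemonsets --delete-emptydir-data
# on older kubectl versions the second flag was called --delete-local-data
sudo kubectl delete node worker2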
The above are pretty much the only custom modifications made to the Kubernetes installation. You can see exactly how the cluster is built by looking through the stack source code.
Go to the dashboard and click “Install Stack”. This will open the install dialog. From the dropdown select “Kubernetes”, give your Kubernetes cluster a name, and then simply click “Install”. The dialog should now show a progress bar while the server is being installed, and the server’s built-in UI is loaded.
Optionally you can uncheck “Hide preconfigured settings” and choose how your nodes should be configured – you can e.g. add more RAM or vCPUs if you have these available.
While you wait for the Kubernetes cluster to be configured (you can follow the progress in the Kubernetes tab), set your Linux password in the Security tab.
Once your Kubernetes cluster is configured, the Kubernetes tab will load a basic UI for managing your cluster. Set a password for the Dashboard UI, then click the link to the dashboard below the password field, and log into the Kubernetes Dashboard UI. Hopefully everything is green.
Click “go to the online terminal” in the upper right corner of the stack UI and log in to the control plane server. Then type:
sudo kubectl apply -f /root/metalllb-test.yaml
Now type:
sudo kubectl get services
You should see a service of type “LoadBalancer” with its “EXTERNAL-IP” shown as “pending”. Now go to the Kubernetes tab in the stack UI, click “IPv4 addresses” and then click “add ip address mapping” to assign a mapped IP address to your LoadBalancer.
The IP address should be picked up immediately. Verify that things are working by clicking the link to the external IP address.
A web page with a Kubernetes logo should load. Please note that this assumes the external IP address is reachable from your IP address. You may have to go back to the dashboard in the web UI, reconfigure your mapped IP address range, and repeat the above. Remember that, in general, you can remove things you have added to Kubernetes with a yaml file by reversing the operation, e.g.:
sudo kubectl delete -f /root/metalllb-test.yaml
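For reference, the MetalLB test boils down to the standard pattern of a workload plus a Service of type LoadBalancer. The Service part could look roughly like the sketch below – an illustrative example only, with assumed names, labels and ports, not the actual contents of /root/metalllb-test.yaml:

apiVersion: v1
kind: Service
metadata:
  name: nginx-test          # example name, not taken from the stack
spec:
  type: LoadBalancer        # MetalLB assigns the EXTERNAL-IP once an IP address mapping exists
  selector:
    app: nginx-test         # must match the labels on the pods you want to expose
  ports:
    - port: 80              # port served on the external IP
      targetPort: 80        # port the container listens on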
To start a pod, create a persistent volume claim using NFS storage, and mount the storage inside a container, type:
sudo kubectl apply -f /root/nfs-storage-test.yaml
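The bundled manifest follows the usual pattern of a PersistentVolumeClaim plus a pod that mounts the claimed volume and writes a file to it. Again a hedged sketch only – names, size and image are assumptions, not the actual contents of /root/nfs-storage-test.yaml:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-test-claim            # example name
spec:
  accessModes:
    - ReadWriteMany               # NFS allows the volume to be shared between pods
  resources:
    requests:
      storage: 1Gi
  # omitting storageClassName means the cluster's default storage class is used
---
apiVersion: v1
kind: Pod
metadata:
  name: nfs-test-pod              # example name
spec:
  containers:
    - name: web
      image: nginx
      volumeMounts:
        - name: nfs-volume
          mountPath: /usr/share/nginx/html   # whatever is written here lands on the NFS share
  volumes:
    - name: nfs-volume
      persistentVolumeClaim:
        claimName: nfs-test-claim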
To mount the available Origo OS storage pools to the control plane virtual server, type:
sudo stabile-helper mountpools
This is the same storage that is made available to your containers.
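The pools should now be visible under /mnt/fuel/ (the path used in the verification step below), which you can confirm with e.g.:

ls /mnt/fuel/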
To verify that the pod did in fact mount and write to the NFS storage, type:
cat /mnt/fuel/pool0/*/index.html
The text “<h1>The remote NFS volume is mounted!</h1>”, which was written from inside the container, should be displayed if things are working.
To start a pod, create a persistent volume claim using local node storage, and mount the storage inside a container, type:
sudo kubectl apply -f /root/local-storage-test.yaml
This works much like above – a small text file is written to the mounted storage from inside the container. We could of course figure out which node the pod was started on (see the command-line note after this step), but let’s instead head over to the stack UI and go to the “Commands” tab. In the commands field, type:
cat /mnt/local/*/test.txt
Click the green buttons displaying “x.x.x.x ran command successfully” to see the results for each node. One should hopefully display the text “The local volume is mounted!”.
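If you prefer the command line, you could also have looked up which node the pod was scheduled on directly from the control plane server:

sudo kubectl get pods -o wide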
That’s it for now. We expect to update the Kubernetes stack regularly, and will update this guide accordingly.