
007: Packaging Kubernetes applications with Helm charts

Helm is a template-based spec format for configuring, packaging and deploying applications that consist of multiple resource objects to Kubernetes.

The daily mood

My last posts were about setting up a Kubernetes environment. We'll now learn the tooling. In general I am not yet very familiar with common developer productivity tools like Z-Shell, Gists and lints, but I assume this will come with time. Now we are going to look into the packaging of Kubernetes applications.
What is Helm

Helm is a commonly used package manager for Kubernetes. It is not the only option for managing your Kubernetes applications, but the one that happened to reach critical mass at enterprise grade, just like Docker did for containerization and Kubernetes for orchestration. One of the reasons for its great popularity is the Helm Hub, the official public Helm repository, which hosts tons of standard packages.
At its core, Helm generates Kubernetes manifests out of its own DSL. A Helm chart is actually a collection of configurable Kubernetes resources called templates, along with their configuration values. A value can be a standard resource parameter (e.g. namespace), a label or an image environment variable (e.g. MYSQL_ROOT_PASSWORD). Once built, the chart becomes a versioned archive called a package. Such a package is typically stored in a Helm repository. Once deployed, it becomes a release.
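To make the template/values relationship concrete, here is a minimal hypothetical sketch (file contents are illustrative, not taken from a real chart): the template references values through the `{{ .Values.* }}` syntax, and values.yaml supplies the defaults.

```yaml
# templates/deployment.yaml (hypothetical excerpt)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-mysql
spec:
  replicas: {{ .Values.replicaCount }}
  template:
    spec:
      containers:
        - name: mysql
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: {{ .Values.mysqlRootPassword | quote }}

# values.yaml (matching defaults)
replicaCount: 1
image:
  repository: mysql
  tag: "5.7"
mysqlRootPassword: changeme
```

At install time, Helm merges values.yaml with any user-supplied overrides and renders the templates into plain Kubernetes manifests.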
In V2, the Helm client communicates with the cluster via a server-side component called Tiller, which is not 100% secure. In V3, Helm gets rid of Tiller (i.e. it is "tillerless").

Helm client

Set up Helm according to our requirement of using version 2.14.3.

You may use microk8s embedded client (no config required):
sudo snap alias microk8s.helm helm
The recommended alternative is to download and install the official release:
wget https://get.helm.sh/helm-v2.14.3-linux-amd64.tar.gz
tar xf helm-v2.14.3-linux-amd64.tar.gz && rm helm-v2.14.3-linux-amd64.tar.gz
chmod +x linux-amd64/helm && sudo mv linux-amd64/helm /usr/local/bin/
Extend bash_completion to enable command line assistance.
sudo su -
helm completion bash > /etc/bash_completion.d/helm
In case you are using GitHub Actions, the hrval-action can be very useful for Helm chart developers to validate charts before building.

Helm server

Tiller is the server-side component of Helm, running as a pod which gets installed on microk8s by running
microk8s.enable helm
Tiller is created by the init command; it is common practice to run it under a dedicated service account with a cluster-admin role binding:
kubectl -n kube-system create sa tiller
kubectl create clusterrolebinding tiller-cluster-rule \
    --clusterrole=cluster-admin \
    --serviceaccount=kube-system:tiller
helm init --skip-refresh --upgrade --service-account tiller --history-max 10
In case of a version upgrade, we'll need to replace our client and run the last command again, which will upgrade Tiller to the same API version as the client.
Note that this configuration is not recommended for production. Instead of sharing a single Tiller in kube-system, it is recommended to use one Tiller per namespace.

Helm in action

When specifying a chart to install, Helm looks for it within available public/private repositories. The following example pulls and deploys a public Kafka chart to your cluster.
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
helm repo list
helm install --name my-release bitnami/kafka
helm list
helm status my-release
Once done, you may want to delete all Kubernetes resources bound to the release at once like this:
helm delete my-release
A Kafka release is usually based on two distinct container images (Kafka broker and ZooKeeper) and is therefore a good example for comparing its composition using Docker Compose vs. using a Helm chart.

Own chart

Creating your own chart is very straightforward, provided that you are familiar with Kubernetes.
helm create nginx-chart
tree nginx-chart
.
├── charts
├── Chart.yaml
├── templates
│   ├── deployment.yaml
│   ├── _helpers.tpl
│   ├── ingress.yaml
│   ├── NOTES.txt
│   ├── service.yaml
│   └── tests
│       └── test-connection.yaml
└── values.yaml
As we can see, a Helm chart was automatically created and is ready to use. It consists of a comprehensive folder structure with Kubernetes resource definitions under templates. You can add, replace or change definitions there. Note that the current one is almost the nginx application that we created from scratch in a previous post, with the difference that some variable parameters have been extracted into values.yaml.
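For reference, the generated values.yaml of such a scaffolded chart typically looks like the following (abridged; the exact content depends on the Helm version used to scaffold it):

```yaml
# values.yaml as typically generated by "helm create" in Helm v2
replicaCount: 1
image:
  repository: nginx
  tag: stable
  pullPolicy: IfNotPresent
service:
  type: ClusterIP
  port: 80
ingress:
  enabled: false
```

Every key here can be overridden per environment, which is exactly what makes the scaffold reusable.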
cd nginx-chart
# install chart
helm install . --name nginx-release
# wait for application to deploy
kubectl rollout status deployment/nginx-release-nginx-chart
Since those two concepts may overlap, it may not be clear whether a values.yaml should replace a configmap.yaml. A configmap.yaml is owned by the application developer and focused on functional settings, while a values.yaml is owned by the application operator and focused on non-functional settings. A configmap.yaml is a resource object that lives inside Kubernetes and can be shared by multiple pods within a given namespace. A values.yaml is just a configuration asset living inside a repository, used only at the time you want to change resources. This is typically the case in a multi-stage environment, which Helm supports in a very convenient way through per-stage values.yaml files.
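The two concepts can also be combined: a ConfigMap template can be fed from chart values, so the developer owns the resource shape while the operator overrides the defaults. A hypothetical sketch (key names are illustrative):

```yaml
# templates/configmap.yaml (hypothetical sketch)
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-config
data:
  log_level: {{ .Values.logLevel | default "info" | quote }}
```

Here the operator could set `--set logLevel=debug` on one stage without touching the template at all.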

Note that release tests are included in our chart. You can run them any time after deployment:
helm test nginx-release
The values in values.yaml can be overridden via command-line flags when triggering an install or upgrade, before they are injected into a temporary aggregated copy of the template files. In the example below, replicaCount is changed from 1 to 2.
helm ls                                                                  # revision 1
kubectl describe deployment/nginx-release-nginx-chart | grep Replicas:   # 1
helm upgrade nginx-release nginx-chart --set replicaCount=2
helm ls                                                                  # revision 2
kubectl describe deployment/nginx-release-nginx-chart | grep Replicas:   # 2
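Conceptually, what `--set` does is merge an override into the values and substitute it into the templates before they are sent to the API server. A toy shell sketch of that idea (no real Helm involved; the variable names are made up for illustration):

```shell
# Toy illustration of Helm-style value substitution (not real Helm).
template='replicas: {{ .Values.replicaCount }}'

# Default, as it would come from values.yaml
replicaCount=1
# Override, as "--set replicaCount=2" would do
replicaCount=2

# "Render" the template by replacing the placeholder with the value
rendered=$(printf '%s' "$template" | sed "s/{{ \.Values\.replicaCount }}/$replicaCount/")
echo "$rendered"
```

The real rendering engine is of course Go templating with functions and pipelines, but the mental model of "values flow into placeholders" holds.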
You can look at the revision history and rollback from 2 to 1.
helm history nginx-release
helm rollback nginx-release 1
You may want to remove all resources installed from the chart.
helm delete --purge nginx-release
Note: You can use helm upgrade --install if you are not sure whether the release is already installed. As far as I know, it is not possible to extend a Kubernetes resource, e.g. with a new label, at build time.

Helm plugin

Finally, I have set up our custom Helm plugin, basically a wrapper making the engineer's life easier when having to deploy multiple charts at once. The plugin also offers init and provisioning commands that take care of verifying configuration, installing custom repositories, and creating user accounts.

Next

This topic is a big one and will accompany me for a while, so I'll probably go further into the details of standard Helm templating and of our internal application configuration.
