Skaffold Helm supports the automated deployment of composed and configured Helm charts to a local or remote Kubernetes cluster.
The daily mood
I had the opportunity to attend a status meeting of colleagues working on a new Data Lake project on AWS. The team basically consists of an engineering manager, a software engineer, two data scientists and an SRE. Their first use case is a REST endpoint able to predict the duration of a given process. They struggled a bit with inter-VPC security, as the data to analyse was actually ingested on a different account than the data access and analysis layer.
In parallel I am still focused on our software delivery process for Kubernetes, with the goal of ramping up on related tools and, potentially, giving an improvement proposal back to our team and organization. In a previous post on Skaffold, we saw how easy it is to set up a project and automatically build and deploy applications to Kubernetes. That example was based on a kubectl pipeline; today we are going to implement a Helm chart pipeline.
Use case
In terms of atomicity, our immutable artifact is not a Docker image but a Helm chart. This actually works in our favour: the way Skaffold builds and tags images does not let us specify a particular image tag in a Helm chart spec, which would otherwise force us to rewrite every chart accordingly.
The composition of multiple Helm charts (a stack) should be managed in a flexible way: their configuration mainly inherited from an environment (e.g. dev), while still allowing custom settings. This can be achieved with Skaffold profiles and value overriding (supported as key-value pairs, files and templates), as sketched below.
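As an illustration, here is a minimal, hypothetical profile sketch; the profile name dev, the file values-dev.yaml and the web.replicas key are assumptions for illustration only, not part of the actual project:

# skaffold.yaml (excerpt) - hypothetical per-environment profile
profiles:
- name: dev
  deploy:
    helm:
      releases:
      - name: airflow-release
        chartPath: ./airflow
        valuesFiles:
        - values-dev.yaml        # file-based overrides for the dev environment
        setValues:
          web.replicas: 1        # inline key-value override

Such a profile would then be selected with skaffold run -p dev.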
Implementation
We'll use Apache Airflow as the base Helm chart to deploy. I already mentioned this data flow orchestration tool in a previous post, so I wanted to play with it.
# we'll test a local revision of the chart
$ helm fetch --version=8.6.4 --untar=true stable/postgresql
$ helm fetch --version=7.1.5 --untar=true stable/airflow
$ vi skaffold.yaml
# skaffold.yaml
apiVersion: skaffold/v2beta4
kind: Config
metadata:
  name: --skaffold
deploy:
  helm:
    releases:
    - name: postgresql-release
      namespace: test
      chartPath: ./postgresql
      version: 8.6.4
      setValues:
        image.tag: 11-debian-10
        #postgresqlDatabase: airflow
        pgHbaConfiguration: |-
          local all all trust
          host all all all trust
    - name: airflow-release
      namespace: test
      chartPath: ./airflow
      version: 7.1.5
      setValues:
        postgresql.enabled: false
        externalDatabase.host: postgresql-release
        externalDatabase.database: postgres
        externalDatabase.user: postgres
        externalDatabase.passwordSecret: postgresql-release
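With the configuration in place, both releases can be deployed in one go. A minimal sketch, assuming the test namespace needs to be created beforehand (Helm will not necessarily create it):

# create the target namespace, then deploy both releases
$ kubectl create namespace test
$ skaffold run
# or, for a continuous develop/redeploy loop:
$ skaffold dev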
Test
# open the Airflow web UI by resolving the web pod's IP
sensible-browser http://`kubectl describe -n test pod $(kubectl get -n test pods | grep airflow-release-web | cut -d' ' -f1) | grep IP: | awk '{print $2}'`:8080
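A simpler alternative, assuming the chart exposes the web UI through a service named <release>-web (an assumption based on the pod names above), is to port-forward to it:

# hypothetical alternative: port-forward the web service instead of resolving the pod IP
$ kubectl port-forward -n test svc/airflow-release-web 8080:8080
$ sensible-browser http://localhost:8080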
According to the Apache Airflow quickstart, we'll need to connect to the server container in order to import example DAGs.
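A sketch of that connection step, reusing the pod lookup from above (the commands to run inside the container are the ones from the Airflow quickstart and are not repeated here):

# open a shell in the Airflow web server container
$ kubectl exec -n test -it $(kubectl get -n test pods | grep airflow-release-web | cut -d' ' -f1) -- /bin/bash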
Conclusion
We were able to easily compose, configure and deploy two different Helm charts using Skaffold. The configuration allows profiling by environment and can be externalized to value files. Also, the new templated fields functionality sounds promising for solving more complex problems.