The FluxCD GitOps operator supports Kustomize for dynamically customizing Kubernetes resources, including HelmReleases. This post is about parametrization.
The daily mood
I got in touch with another team mate who is also well aware of Helm deployments and willing to help. He gave me some precious pointers on our current solution (based on Ansible), which gave me an additional level of understanding. He also pointed out that, following the past PoC on Flux, HelmReleases were introduced as the future solution, but had the negative effect of making life even more complicated for developers, and configurations even more replicated by SRE. He suggested that I study the possibility of using Kustomize to configure HelmReleases, so that replication can be avoided. In fact I had not yet considered Helm and Kustomize as a potential fit so far, except if we wanted to render Kubernetes resources first and then configure them as described in this article. I realised how many different tools we were talking about, and then designed the following summary chart.
Disclaimer: I might have missed some tools which could be very relevant for our purpose (see the article 15+ useful Helm Charts tools from Caylent), but as a matter of fact, there are also external constraints I must live with, like past evaluations and decisions, as well as limited time and experience.
Requirement
Based on an application called "portal", check whether we can reach the last mile of our past PoC on Flux. I already deployed that application as part of my previous post about our current deployment solution, and I already mentioned its high-level outcome in a previous post about GitOps implementations. Our SRE team is already on its way to production, while our dev teams feel a bit left out with their requirements. They actually want to be able to deploy some application stacks either locally or remotely while having to make as few configuration changes as possible.
Task definition
- Development
- Write Flux HelmRelease file(s). There are configured examples available from the past PoC.
- Write Kustomize file(s) allowing for HelmRelease resource registration and configuration.
- Operations
- Analyse if some of the above can be automated, i.e. generated automatically.
- Setup FluxCD deployment from Git via Kustomize.
- Use-cases
- Run different scenarios i.e. install, update, delete, back-sync.
- Check how the process may work for both developers and SRE.
Approach
I chose to work on Development and Operations in parallel by writing some Bash scripts (see the link to the sources at the end of this post), so that the configuration toil gets lower, productivity increases and best practices are enforced. I also assumed the following conditions:
- File system access to the chart sources (for the example a project/charts folder, in my organisation the clone of a dedicated Git repo called helm-charts) for HelmRelease generation
- URL access from the cluster to the chart package (for the example a ChartMuseum pod, in my organisation a dedicated Artifactory repo)
Setup
To begin with, we'll use our local cluster with Flux Helm Operator, no FluxCD or Git.
$ kubectl create namespace flux
$ kubens flux
$ kubectl apply -f \
https://raw.githubusercontent.com/fluxcd/helm-operator/master/deploy/crds.yaml
$ kubectl create secret generic flux-git-deploy --from-literal=git-key=whatever
$ helm repo add fluxcd https://charts.fluxcd.io # skip if the repo is already configured
$ helm upgrade -i helm-operator fluxcd/helm-operator \
--set helm.versions=v2 \
--set git.ssh.secretName=flux-git-deploy \
--namespace flux
$ kubectl rollout status deployment/helm-operator
Now, if we submit a HelmRelease file to the cluster, it is processed by the Flux Helm Operator.
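For illustration, a minimal my_helmrelease.yaml could look like the following sketch; the chart name, repository URL, version and values below are placeholders rather than files from the actual project:

---
apiVersion: helm.fluxcd.io/v1
kind: HelmRelease
metadata:
  name: my-helmrelease
  namespace: default
spec:
  releaseName: my-helmrelease
  chart:
    repository: http://chartmuseum.kube-system:8080   # placeholder chart repository URL
    name: chart-a                                      # placeholder chart name
    version: 0.1.0                                     # placeholder chart version
  values:
    replicaCount: 1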
$ kubectl apply -f my_helmrelease.yaml # or delete
$ kubectl get helmreleases
$ kubectl describe helmrelease my-helmrelease
$ helm ls
Project structure
$ tree application-flux-config
.
└─ [cluster_name]            # k8s context, ex. aws_dev
   ├─ infra
   │  └─ [namespace]         # ex. kube-system
   │     └─ [object name]    # ex. aws-node.yaml
   └─ releases
      └─ [namespace]         # ex. app
         └─ [object name]    # ex. portal-component-xy.yaml
The goal is to keep a similar structure, while centralizing the HelmReleases instead of replicating them. For the demo, we will use the following project structure.
$ tree project
.
├─ base                          # all generated from source charts
│  ├─ [object name]              # ex. portal-component-xy.yaml
│  └─ kustomization.yaml
├─ charts                        # source charts (embedded for the demo only)
│  └─ [chart name]
└─ config
   └─ [cluster_name]             # k8s context, ex. aws_dev
      ├─ infra
      │  └─ [namespace]          # ex. kube-system
      │     └─ [object name]     # ex. aws-node.yaml
      ├─ releases
      │  └─ [namespace]          # ex. app
      │     ├─ [stack name]      # ex. portal
      │     │  ├─ [component patch]   # ex. portal-component-xy-patch.yaml
      │     │  └─ kustomization.yaml
      │     ├─ [namespace patch]
      │     └─ kustomization.yaml
      ├─ [cluster patch]
      └─ kustomization.yaml
HelmRelease files
Flux Helm Operator requires one HelmRelease file per managed/automated Helm chart release.
Let us start with HelmRelease generation.
$ ls -d project/charts/* | ./genHelmRel.sh -o project/base/
INFO: Output folder set to project/base/
Please specify a CHART_PATH:
INFO: project/charts/chart-a
...
INFO: Output folder set to project/base/
INFO: Writing project/base//chart-a-generated.yaml
INFO: Parsing project/charts/chart-a/Chart.yaml
INFO: Looking for existing configurations to import...
WARN: No custom config repo found next to helm-charts, therefore no values will be imported.
INFO: project/base//chart-a-generated.yaml written successfully.
...

The script automatically generates the HelmRelease files. It takes a single chart path as input parameter, or a list of chart paths if piped to stdin like in the example call above.
$ ls project/base/
chart-a-generated.yaml  chart-b-generated.yaml  chart-c-generated.yaml
Note that the script also implements some additional custom rules:
- Automatically back up previous YAML files before writing new ones
- Value import from a configuration repository (already used by another solution). The user is warned in case that repository is not found.
- Handling of some known Helm template expressions. The user is warned in case an unknown/unhandled expression is found.
<<TODO: Use external value files instead of values embedded to the HelmRelease. And merge Ansible helm-values spread across multiple directory levels.>>
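One possible direction for that TODO (not implemented here): the helm.fluxcd.io/v1 HelmRelease supports pulling values from external sources via spec.valuesFrom. A sketch, where the ConfigMap name and key are assumptions:

spec:
  valuesFrom:
    - configMapKeyRef:
        name: portal-values      # assumed ConfigMap holding the chart values
        namespace: app           # assumed namespace of that ConfigMap
        key: values.yaml
        optional: false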
Base Kustomize file
$ ./genBaseKust.sh project/base
INFO: Found 3 HelmRelease(s).
INFO: project/base/kustomization.yaml written successfully.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - chart-a-generated.yaml
  - chart-b-generated.yaml
  - chart-c-generated.yaml
Next, the Kustomize overlay/patch feature will allow us to add resource objects and create HelmRelease configuration variants.
Overlay Kustomize file
We want to override the chart repository (and later the ingress domain) configuration at cluster level. In this case we will use a JSON patch, which is very handy when something applies to every HelmRelease.

$ cat <<EOF >project/config/local/cluster-patch.json
[
  {"op": "replace", "path": "/spec/chart/repository", "value": "http://chartmuseum.kube-system:8080"}
]
EOF

We may now create a new kind of kustomization.yaml.

$ ./genOverlayKust.sh project/config/local/
INFO: OVERLAY_DIR set to project/config/local/
INFO: Found 1 resource patch(es).
INFO: Found ../../base/kustomization.yaml
INFO: project/config/local/kustomization.yaml written successfully.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
  - ../../base
patches:
  - path: cluster-patch.json
    target:
      group: helm.fluxcd.io
      version: v1
      kind: HelmRelease

As we can see, the script searches for a base kustomization.yaml in the parent hierarchy and "hooks" itself there. Also, our JSON patch has been automatically found and referenced.
Note that JSON patches are not yet supported by kubectl kustomize, so you have to use the standalone version of the tool to render the results. This is also the version integrated by FluxCD.
$ kustomize build project/config/local/ | grep "releaseName\|repository"
    repository: http://chartmuseum.kube-system:8080
  releaseName: chart-a
    repository: http://chartmuseum.kube-system:8080
  releaseName: chart-b
    repository: http://chartmuseum.kube-system:8080
  releaseName: chart-c

We want to deploy our portal into different namespaces and, if necessary, automatically create the target namespace as part of the deployment. We will create a new resource object for that.

$ cat > project/config/local/releases/arch/arch-ns.yaml <<- "EOF"
---
apiVersion: v1
kind: Namespace
metadata:
  name: arch
  labels:
    name: arch
EOF

While generating the corresponding kustomization, we will take the opportunity to update our HelmReleases with this namespace, using the -n parameter.
$ ./genOverlayKust.sh -n arch project/config/local/releases/arch
INFO: Target namespace set to arch
INFO: OVERLAY_DIR set to project/config/local/releases/arch
INFO: Found ../../../../base/kustomization.yaml
INFO: Found 1 YAML resource(s).
INFO: Found 1 JSON patch(es).
INFO: project/config/local/releases/arch/kustomization.yaml written successfully.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: arch
bases:
  - ../../../../base
resources:
  - arch-ns.yaml
patches:
  - path: cluster-patch-inherited.json
    target:
      group: helm.fluxcd.io
      version: v1
      kind: HelmRelease

$ kustomize build project/config/local/releases/arch | grep namespace | sort -u
  namespace: arch
We now have a more specific configuration inheriting the patch configurations found on the way up to the base. The reason for that behaviour lies in the following limitations of Kustomize:
- Multiple overlays are not supported, i.e. bases cannot reference another kustomization.yaml that itself contains bases (error: cycle detected) --> an overlay must directly point to a base of HelmRelease(s)
- Referencing a resource or patch from a parent structure is not supported (security: file is not in or below) --> parent resources and patches have to be inherited via file copy (see the sketch after this list)
- Symbolic links are not supported (error: file not found) --> they cannot help working around the above limitations
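To illustrate the file-copy workaround from the second point: conceptually, the overlay script simply copies the parent patch down so it can be referenced locally (the actual logic lives in genOverlayKust.sh):

$ cp project/config/local/cluster-patch.json \
     project/config/local/releases/arch/cluster-patch-inherited.json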
Finally, the leaf of our project structure is a custom stack with a custom configuration, that is to say, in our case, the portal application to be deployed on a local cluster.
<<TODO: Here we want to select only 2 HelmReleases from our base of 3. Since we cannot patch-remove a complete object, is the right way to apply a HelmRelease with a sort of Flux "deactivation flag", so that Flux Helm Operator does not deploy the chart, even though FluxCD would deploy all HelmReleases anyway?>>
Plus, we want to override the configuration values of one of them, let's say the replicaCount to keep it simple. In this case it is easier to write a YAML rather than a JSON patch for the HelmRelease, since:
- we may select the chart to modify from within the YAML itself, whereas for JSON it has to be done as part of kustomization.yaml
- there is no values node defined in the base object, so it would need to be created first for the JSON path to be valid.
$ cat > project/config/local/releases/arch/portal/chart-c-replica-patch.yaml <<- "EOF"
---
apiVersion: helm.fluxcd.io/v1
kind: HelmRelease
metadata:
  name: chart-c
spec:
  values:
    replicaCount: 2
EOF
Finally, we may want to deploy different stacks in the same namespace, so it is recommended to specify a prefix for our HelmRelease(s), using our script's -p flag.
$ ./genOverlayKust.sh -p true project/config/local/releases/arch/portal/
INFO: Name prefix activated
INFO: OVERLAY_DIR set to project/config/local/releases/arch/portal/
INFO: Removing previously inherited patches
rm: cannot remove '/mnt/HDD2TB/Repository/tncad/k8s-app-cpd/026-fluxhelm-kustomize/project/config/local/releases/arch/portal/*-inherited.*': No such file or directory
INFO: Found ../../../../../base/kustomization.yaml
INFO: Found 1 YAML resource(s).
INFO: Found 2 YAML or JSON patch(es).
INFO: project/config/local/releases/arch/portal/kustomization.yaml written successfully.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: arch
namePrefix: portal-
bases:
  - ../../../../../base
resources:
  - arch-ns-inherited.yaml
patches:
  - path: chart-c-replica-patch.yaml
  - path: cluster-patch-inherited.json
    target:
      group: helm.fluxcd.io
      version: v1
      kind: HelmRelease
$ kustomize build project/config/local/releases/arch/portal | grep "portal-\|replicaCount"
  name: portal-arch
  name: portal-chart-a
  name: portal-chart-b
  name: portal-chart-c
    replicaCount: 2
As we can see, resources, patches and configurations are inherited from the levels above. A namePrefix is applied to all Kustomize resources.
Note about namePrefix and namespace
Be aware that Flux Helm Operator will apply the release naming convention
<namespace>-[<targetNamespace>-]<name>, where <name> includes the Kustomize namePrefix, <namespace> is taken from the HelmRelease metadata.namespace, and <targetNamespace> is taken from the HelmRelease spec.targetNamespace.
This can result in very long Kubernetes resource names, while those should not exceed 63 characters in total. So the recommendation is to not over-use the Kustomize namePrefix, especially if you already have long resource names.
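For example, applying this convention to the overlay built above (metadata.namespace arch, namePrefix portal-, no spec.targetNamespace set), the chart-c release would be named:

arch-portal-chart-c    # <namespace>-<name>, 19 characters, well below the 63-character limit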
Manual deployment
Now we may easily submit our configuration to the cluster, update it, or withdraw it.
$ kubens default
$ kubectl apply -k project/config/local/releases/arch/portal # or delete
$ kubectl get helmreleases -n arch
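To double-check what the operator actually did with those HelmReleases, we can also list the resulting Helm releases; this assumes the local Helm CLI can reach the cluster, as in the setup section above, and that release names follow the convention described earlier:

$ helm ls | grep "arch-portal-"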
In a later post, we'll take a look at how FluxCD is able to automatically handle Kustomize deployments.
Take-away
I enjoy the topic I am currently working on and spent quite some time writing scripts, because I assumed this project had the potential to grow and get attention. Unfortunately, I have to admit that the complexity of using Flux, Helm and Kustomize together is much higher than expected, mainly because of some limitations of Kustomize. We had to replicate objects (base, patches) that we actually didn't want to replicate, even if the replication happens automatically via scripts. Still, I am looking forward to completing this study with the deployment automation part.