Flux Helm Operator supports the automated deployment of Helm charts to Kubernetes while maintaining compliance with the GitOps operational model.
The daily mood
I watched a bit of DockerCon 2020. Although most talks are aimed at a C-level or beginner audience, I found some interesting pointers around application packaging, integration and delivery. Among others, I will definitely have a look at KubeStack, which seems able to solve some problems around GitOps.
My goal today is to have a look at Flux for automating Helm chart deployments.
Source: Revelry.co / Bitnami
The setup is the same as in this previous post about Flux without Helm.
Installation steps
- Delete previous objects
kubectl delete namespace flux
kubectl delete clusterrole flux
- Add new Helm repository
helm repo add fluxcd https://charts.fluxcd.io
- Apply HelmRelease CRD to the cluster
kubectl apply -f https://raw.githubusercontent.com/fluxcd/helm-operator/master/deploy/crds.yaml
- Create Flux namespace
kubectl create namespace flux
- Deploy Flux and Helm operators
export GHUSER="<your_github_username>"
export GHREPO="<your_github_reponame>"

# install flux chart from repo and wait for start
helm upgrade -i flux fluxcd/flux \
  --set git.url=git@github.com:$GHUSER/$GHREPO \
  --namespace flux
kubectl -n flux rollout status deployment/flux

# install helm-operator chart from repo and wait for start
helm upgrade -i helm-operator fluxcd/helm-operator \
  --set helm.versions=v2 \
  --set git.ssh.secretName=flux-git-deploy \
  --namespace flux
kubectl -n flux rollout status deployment/helm-operator

# print the SSH public key generated by Flux
fluxctl identity --k8s-fwd-ns flux
- Manually add the SSH deploy key under your GitHub repository settings and grant it write access
Note: git.ssh.secretName is just the name of the Kubernetes secret created in the flux namespace that holds the SSH private key (see also the Helm operator configuration).
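For illustration, the secret referenced by git.ssh.secretName looks roughly like this (the key pair itself is generated by Flux on first start; the contents below are placeholders):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: flux-git-deploy
  namespace: flux
type: Opaque
data:
  identity: <base64-encoded SSH private key>
```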
We'll use the standard Helm chart generated by default by the helm create command, as explained in this previous post about Helm. Now let us clone the GitHub repository and create the structure required by Flux.
git clone https://www.github.com/${GHUSER}/${GHREPO}
cd $GHREPO
mkdir charts && cd charts
helm create nginx-chart
cd ..
mkdir releases && cd releases
cat > nginx-chart.yaml <<- "EOF"
apiVersion: helm.fluxcd.io/v1
kind: HelmRelease
metadata:
  name: nginx-application
  namespace: default
  annotations:
    fluxcd.io/automated: "false" # intentional for the demo
spec:
  releaseName: nginx-release
  chart:
    git: ssh://git@github.com/jclarysse/flux-get-started
    ref: master
    path: charts/nginx-chart
EOF
cd ..
Execution steps
Let us push those changes to the repository
git add * && git commit -m "add nginx helm release to flux" && git push

We can now see an object of kind HelmRelease appearing in the Flux workloads. Note the status "deployed" and the empty policy.
fluxctl list-workloads --k8s-fwd-ns flux
WORKLOAD                                 CONTAINER  IMAGE  RELEASE   POLICY
default:helmrelease/nginx-application                      deployed
Now if you set the automation flag to true and commit the change to Git, the Flux automation policy becomes "automated" and a new object of kind Deployment is added (with an empty policy).
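The change amounts to flipping the annotation in releases/nginx-chart.yaml and committing it:

```yaml
metadata:
  name: nginx-application
  namespace: default
  annotations:
    fluxcd.io/automated: "true" # was "false"
```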
WORKLOAD                                       CONTAINER    IMAGE         RELEASE   POLICY
default:deployment/nginx-release-nginx-chart   nginx-chart  nginx:stable  ready
default:helmrelease/nginx-application                                     deployed  automated

You can remove the release by deautomating it, in a similar way as for non-Helm workloads:
fluxctl deautomate --k8s-fwd-ns flux --workload=default:helmrelease/nginx-application
This change is replicated back to Git, as expected.
Flux Helm Operator Diagram
Take-away
Flux is able to manage simple Helm charts, even if their sources and binaries are stored in a different repository than the Flux release. Unfortunately, I also found out that despite regular polling on the Flux side, it can take quite a long time before objects are automatically re-created, and manually deleting things on the cluster may lead to an unstable state. Since fluxctl only talks to the Flux operator, some tweaking might be possible to optimize validation rules and trigger frequency.
What I understood from the past PoC is that the Flux Helm operator was evaluated without our custom Helm plugin (and Ansible). It turned out that a solution purely based on Flux was too static as a replacement for our use-case: it was not possible to achieve the same level of value inheritance (depending on the environment) and host naming (depending on the team namespace).
Looking at the technical documentation of the Flux Helm Operator, it should be possible to set up our plugin (incl. Ansible) and the access credentials (to our private repo) on the Helm operator pod. I am not sure how reasonable that would be.