This article paints a simplified picture and history of Continuous Delivery (CD), while introducing GitOps as the best operational model for Kubernetes.
The daily mood
My manager introduced me to the different tracks we are working on. Fortunately I am free to organise my own time and direction while ramping up. For now I have enough backlog with Kubernetes, Helm and Continuous Delivery. Still, I hope he can support my integration within the team, its initiatives and accountabilities.
CD: A retrospective
Software delivery has always been a tedious and time-consuming process. In the old days, developers would send an application archive by e-mail to the administrator of an application server, along with database creation statements and application parameters. Developers were potentially using version control (ex. SVN, Git) and build/dependency management tools (ex. Maven, Gradle), but they had no idea how the production environment was configured. Admins were potentially using procedural scripts for configuring and provisioning the infrastructure (ex. Chef, Ansible), but they had no idea about the applications and their actual resource requirements. Problem #1.
Developers and administrators then tried to find ways of standardizing software development and delivery processes (DevOps). They agreed to share deliveries and their metadata (ex. version, status...) in a central place like Web storage (ex. NAS, S3) or a registry service (ex. Nexus, Artifactory). CI pipelines were introduced to automate builds and tests. They could run asynchronously, triggered by an external event like a repository push or by a periodic/scheduled task (ex. "nightly build"). On success, and after an optional release approval, CD pipelines were able to automatically deploy binaries to the next stage, provided that the automation server (ex. Jenkins, Spinnaker) had write access to the target environment(s). Problem #2.
In a world of containers and microservices, the number of objects and versions involved in the delivery of a single application multiplied on average by 10, while infrastructure became self-describing, highly resilient and customisable via declarative syntax (ex. Kubernetes manifests; a spreadsheet maintained by Google actually lists over 60 different tools for managing such configurations). Infrastructure-as-Code ("IaC") became even more crucial and potentially heavier than the delivery itself, which no longer has to be pushed since the environment is able to pull it by itself. Still, the Infra-code needs to be edited each time image versions and their parameters become outdated, resulting in lots of direct access, potential new errors, troubleshooting effort and delays. Problem #3.
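To make "declarative" concrete, here is a minimal sketch in Python of what such Infra-code boils down to: a desired state (a Kubernetes Deployment reduced to the fields that matter here; all names and versions are illustrative) plus the kind of edit, modelled by the hypothetical bump_image helper, that Problem #3 complains about each time a new image version ships.

```python
import copy

# A Kubernetes Deployment reduced to its essential fields: pure declarative
# data. In a real repo this would live as a YAML manifest under version control.
desired_state = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "shop-backend"},
    "spec": {
        "replicas": 3,
        "template": {"spec": {"containers": [
            {"name": "backend", "image": "registry.example.com/shop-backend:1.4.2"},
        ]}},
    },
}

def bump_image(manifest: dict, container: str, new_tag: str) -> dict:
    """Return a copy of the manifest with one container's image tag updated.

    This is the recurring edit behind Problem #3: every new image version
    means touching the Infra-code, with all the access and risk that implies.
    """
    updated = copy.deepcopy(manifest)
    for c in updated["spec"]["template"]["spec"]["containers"]:
        if c["name"] == container:
            repo, _, _ = c["image"].rpartition(":")
            c["image"] = f"{repo}:{new_tag}"
    return updated

bumped = bump_image(desired_state, "backend", "1.4.3")
print(bumped["spec"]["template"]["spec"]["containers"][0]["image"])
# -> registry.example.com/shop-backend:1.4.3
```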
What is GitOps
GitOps is an operational model initiated and promoted by Weaveworks a few years ago to solve the problems mentioned above. Unlike traditional CIOps, which is considered test-centric, the GitOps methodology is release-centric. In fact GitOps does not really replace CIOps, but rather adapts the CD part of the software delivery lifecycle to the Cloud-native world.
At the origin of a delivery, a developer usually works on a dedicated branch. In the case of a production bug, for example, they branch off at the same revision as the production build to fix it, then port the fix commit to other branches, a practice known as "cherry-picking". The developer's change is then submitted for approval via a Pull Request (PR) along with a comment, a process step available out-of-the-box at well-known Git providers (ex. GitHub, GitLab). The reviewer then rejects or approves the request until the change is merged to the mainline (ex. SVN trunk or Git master). A similar process applies for the release candidate and general availability stages.
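A rough sketch of that hotfix flow, driving plain git commands from Python (the tag, branch and commit names are illustrative, and the PR review happens between the two halves):

```python
import subprocess

def git(*args: str) -> None:
    """Run a git command and fail loudly, as a pipeline step would."""
    subprocess.run(["git", *args], check=True)

def hotfix(prod_tag: str, fix_branch: str, fix_commit: str) -> None:
    # Branch off at the exact revision that is running in production...
    git("checkout", "-b", fix_branch, prod_tag)   # ex. prod_tag = "v1.4.2"
    # ...the fix is committed here and goes through PR review; once merged,
    # the fix commit is replayed on the mainline: the "cherry-picking" above.
    git("checkout", "master")
    git("cherry-pick", fix_commit)
```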
At the core of GitOps, PR-merge events are used to trigger further automation pipelines, which may themselves create other PRs, a practice known as Operations by pull requests.
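A toy illustration of that trigger, assuming GitHub-style webhook payloads (the fields read below exist in GitHub's pull_request event; trigger_sync is a hypothetical stand-in for the next automation stage):

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def trigger_sync(repo: str, sha: str) -> None:
    # Hypothetical: kick off the next pipeline, which may in turn
    # open further pull requests (operations by pull requests).
    print(f"syncing {repo} at {sha}")

class WebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers["Content-Length"]))
        event = json.loads(body)
        pr = event.get("pull_request", {})
        # Only a *merged* close counts: a rejected PR also fires "closed".
        if event.get("action") == "closed" and pr.get("merged"):
            trigger_sync(event["repository"]["full_name"], pr["merge_commit_sha"])
        self.send_response(204)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("", 8080), WebhookHandler).serve_forever()
```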
A Git repository is used for versioning the Infra-code and is considered the single source of truth, which means that it not only provides all the information required for an autonomous deployment, but also reflects exactly what is deployed. And yes, there are subtleties to operating multiple versions of the same application in production.
Moreover, target environments are never accessed directly but only through an operator. With this pattern, three basic steps of the OODA loop (Observe, Orient, Decide, Act; originally a military strategy, now applied to decision-making processes) can be automated: Observe, Decide and Act. These steps are also commonly known as the reconciliation loop: at its core, a diff between the current state (via Kubernetes resources' ability to self-describe) and the desired state (via the Infra-code stored in Git).
Source: Weave release cycle model
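A minimal sketch of such a reconciliation loop, with toy stand-ins for the Git and cluster queries (all names and versions are illustrative):

```python
import time

# Placeholders: a real controller would read manifests from Git and query the
# Kubernetes API; here both just return toy name -> version mappings.
def desired_state() -> dict:
    return {"shop-backend": "1.4.3", "shop-frontend": "2.0.0"}  # from Git

def current_state() -> dict:
    return {"shop-backend": "1.4.2", "shop-frontend": "2.0.0"}  # from cluster

def apply_changes(drift: dict) -> None:
    for name, version in drift.items():
        print(f"rolling {name} to {version}")  # Act: converge the cluster

def reconcile(interval: float = 30.0) -> None:
    while True:
        desired, current = desired_state(), current_state()
        # Decide: the diff between what Git declares and what actually runs.
        drift = {k: v for k, v in desired.items() if current.get(k) != v}
        if drift:
            apply_changes(drift)
        time.sleep(interval)
```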
In our case, a controller service lives (ideally as a native instance) inside the target environment and replaces a traditional CD pipeline. On the one hand, it is able to detect the presence of new artifacts/images and automatically reflect their versions in the Infra-code. On the other hand, it makes sure that the target environment stays in sync with the Infra-code. In case an engineer needs to change a configuration, they might be allowed to do so via a push to the Infra-code repo. And yes, that might break the delivery process. But production isn't at much risk when deploying across multiple stages.
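The first of those two directions could be sketched as follows (both lookup functions are hypothetical stubs; the point is that the controller edits Git, and the reconciliation loop above does the rest):

```python
import subprocess

def latest_tag(image_repo: str) -> str:
    """Hypothetical registry lookup; a real controller would query the
    registry's HTTP API or subscribe to push notifications."""
    return "1.4.3"

def tag_in_infra_code(manifest_path: str) -> str:
    """Hypothetical: read the image tag currently committed in the manifest."""
    return "1.4.2"

def promote(manifest_path: str, image_repo: str) -> None:
    new, old = latest_tag(image_repo), tag_in_infra_code(manifest_path)
    if new != old:
        # A real controller would rewrite the manifest here (ex. with
        # bump_image from earlier); the change is then recorded where it
        # counts: in Git, never directly in the cluster.
        subprocess.run(
            ["git", "commit", "-am", f"Promote {image_repo}: {old} -> {new}"],
            check=True,
        )
```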
This radical change in direction eventually addresses two major concerns:
- Time-to-market: developers can be on-boarded more easily, and features are promoted to QA more often (velocity).
- Security: no more direct access to the target environment(s) for deployment (the environment pulls changes itself, so communication is outbound-only and operations are standardized).
Conclusion
The GitOps methodology is especially helpful for cloud-native applications to be continuously delivered and released in a lightweight way. If you are not yet targeting Kubernetes and fine-grained deliveries, you might still prefer a traditional CD approach.