Kubernetes is a container orchestrator able to run in distributed mode. It has become the default platform for operating scalable applications.
The daily mood
My primary focus is to become familiar with, and improve, our current Continuous Delivery & Deployment (CD) process, a software development and operations discipline that requires a high level of automation in order to improve the pace of innovation and the quality of production. Our organisation already uses Kubernetes intensively. In terms of CD, we are slowly transitioning from a traditional operational model to the GitOps philosophy.
In my previous post I set up a local Kubernetes cluster and some tooling. Now we are going to talk about Kubernetes concepts and create an example application from scratch.
What is Kubernetes
Kubernetes (K8s) is a distributed system written in Go and used for container orchestration, i.e. for the configuration, deployment and operation of containerized applications at scale. K8s was originally developed by Google, building on its internal cluster manager Borg, then open-sourced and donated to the Cloud Native Computing Foundation (CNCF). In a nutshell, Kubernetes has a master-worker architecture, as follows.
The master or "control plane" holds:
- a controller or "kube-controller-manager"
- a scheduler or "kube-scheduler"
- an API server or "kube-apiserver"
- a key-value store (etcd)
The worker or "node" holds:
- a node agent or "kubelet"
- a proxy or "kube-proxy"
- a container runtime (containerd)
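On a running cluster you can observe most of these components yourself. A minimal sketch, assuming a setup like the local cluster from the previous post, where the control plane components run as pods in the kube-system namespace:
$ kubectl get nodes -o wide
$ kubectl get pods -n kube-system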
Kubernetes runs containers in a Pod, the smallest deployable unit: a logical resource of the internal network with its own unique IP address. A Pod can contain one or more containers, but usually just one. An application with a web server and a database could technically fit into a single Pod, but the two parts could then no longer be changed or scaled individually. Instead, consider a Pod as one instance of a single application component, e.g. 1 database node.
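To make this concrete, here is a minimal sketch of a Pod manifest (the name and labels are illustrative):
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
In practice you rarely create bare Pods; you let a higher-level resource such as a Deployment manage them for you, as we'll do below.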
Since there are tons of references and articles on K8s, I'll just quote two interesting flavors for beginners:
Well-known alternatives to K8s are:
- Docker Swarm by Docker
- Apache Mesos (commercialized by Mesosphere)
- Nomad by HashiCorp
Since K8s has reached critical mass, those providers have specialized in edge cases and integrated K8s into their portfolios. Major infrastructure providers also offer certified Kubernetes distributions:
- Unmanaged, e.g. Red Hat OpenShift or Rancher
- Managed, e.g. Google GKE, Amazon EKS or Azure AKS
The K8s client supports both imperative and declarative styles for CRUD operations on cluster resources. You may use either one; nevertheless, the declarative style is recognized as the best practice, as it supports project structure and revision control.
Imperative style
$ kubectl create deployment nginx --image nginx
$ kubectl rollout status deployment nginx
# direct pod access
$ sensible-browser http://`kubectl describe pod $(kubectl get pods | grep nginx | cut -d' ' -f1) | grep IP: | head -1 | awk '{print $2}'`
$ kubectl delete deployment nginx
Declarative style
$ kubectl apply -f deployment.yaml
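For reference, deployment.yaml could look like the following minimal sketch (two replicas of the same nginx image we created imperatively above; all names are illustrative):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
Running kubectl apply again after editing the file reconciles the cluster towards the new desired state, which is what makes revision control so natural here.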
Going further
Besides deployments, there are a few more kinds of resource objects available; here is a non-exhaustive list of commonly used ones (the full catalogue can be listed with the command shown after this list):
- Entities
- namespace
- deployment, replicationcontroller
- pod, service
- job, cronjob
- Configurations
- configmap
- secret
- Security
- serviceaccount
- role, rolebinding
- clusterrole, clusterrolebinding
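Your cluster can enumerate every resource kind it supports, which is handy when writing manifests. A quick sketch:
$ kubectl api-resources
$ kubectl explain deployment.spec
kubectl explain in particular documents each field of a resource directly from the API schema.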
Note that a manifest file may actually consist of multiple resource declarations separated by ---, for example both a deployment and a configmap. Let's extend our deployment to deployment-1.yaml as follows:
...
        volumeMounts:
        - mountPath: /usr/share/nginx/html/index.html
          name: nginx-volume
          subPath: index
      volumes:
      - name: nginx-volume
        configMap:
          name: volume-configmap
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: volume-configmap
data:
  index: |-
    <html>
    <h1>Hello from ConfigMap</h1>
    </html>
This configuration overrides Nginx's default welcome page by mounting the ConfigMap entry as index.html when the volume is created. Note that you may now delete your resources the declarative way, all at once, instead of imperatively one by one:
$ kubectl delete -f deployment-1.yaml
In deployment-2.yaml, we inject configuration into the container as environment variables, sourced from a ConfigMap:
...
        envFrom:
        - configMapRef:
            name: container-configmap
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: container-configmap
data:
  map-hash-bucket-size: "128"
  ssl-protocols: SSLv2
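You can check what actually arrived in the container's environment (assuming the deployment is still named nginx as above):
$ kubectl exec deploy/nginx -- env
Be aware that keys containing dashes, such as map-hash-bucket-size, are not valid environment variable names, so Kubernetes will skip them during injection; uppercase underscore-style keys (e.g. MAP_HASH_BUCKET_SIZE) are the safer convention.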
Now let us extend to deployment-3.yaml by adding a simple service:
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: nginx
With this, we have created a load balancer in addition to the pods, dispatching endpoint requests to one of the available web server instances (i.e. application pods). Assuming at least two replicas, we can easily observe this behaviour using three different terminals, as follows:
# Terminal 1
$ watch -n 1 kubectl logs `kubectl get pods | grep nginx | head -1 | cut -d' ' -f1`
# Terminal 2
$ watch -n 1 kubectl logs `kubectl get pods | grep nginx | tail -1 | cut -d' ' -f1`
# Terminal 3
$ watch -n 3 curl -X GET http://`kubectl get services | grep nginx | awk '{print $3}'`
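If your deployment still runs a single replica, you can scale it out first so that there are two sets of logs to compare; a quick imperative sketch:
$ kubectl scale deployment nginx --replicas=2
Requests issued from Terminal 3 should then show up alternately in the logs of Terminals 1 and 2, depending on the service's load-balancing decisions.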
Now that we are familiar with Kubernetes concepts, as well as the Twelve-Factor App principle of separating configuration from code, let us assume that we'll have to manage multiple applications, each one having huge and complex manifests, some depending on each other, etc. In order to handle those challenges, we'll need a well-defined project structure and some tooling to reduce imperative commands to the bare minimum. This is exactly what Kubernetes configuration and/or packaging tools like Kustomize or Helm are good at.
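As a teaser, a minimal Kustomize setup is just one extra file next to your manifests; a sketch, with illustrative file names:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namePrefix: demo-
resources:
- deployment-3.yaml
$ kubectl apply -k .
kubectl's built-in -k flag renders and applies the whole directory at once, which already removes most of the remaining imperative typing.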