032: Service Mesh with Istio

A Service Mesh is a managed network of Microservices. The model is arguably one of the most powerful enablers of a dynamic Cloud Native architecture.


The daily mood

After 2 months of part-time on-ramping, I am now set as a full-time architect (as originally agreed). I just had the opportunity to present my work on "Flux featuring kustomize" (see my previous post) to the team, and heard no negative feedback even though I had not prepared a single PowerPoint slide. It's always good to get an assessment from peers.

I may not have as much time for this blog in the future, but I definitely expect to follow up on a number of topics that feel right to me. I am now starting a new series of posts around Cloud Native, i.e. the high-level abstraction of Kubernetes architectures through components that help "solve the boring but difficult".


What is Cloud Native

The term was born when Kubernetes v1.0 was officially announced and the CNCF founded in 2015. At its core lies the fact that new applications are no longer based on traditional infrastructure, but on Linux containers, Microservices and low-level managed resources (IaaS/FaaS/CaaS). Since then, a growing set of open-source projects and specifications sharing that direction have been incubated by the CNCF. The foundation also maintains a trailmap and a landscape for any organisation willing to become Cloud Native or to stay up to date.


A history of Services

15 years ago, a Service Oriented Architecture (SOA) was the result of de-coupling infrastructure from business services via a monolithic Middleware that people called either an Enterprise Service Bus (ESB) or a Message Oriented Middleware (MOM), depending on whether the main application focus was on (web-)services or on events. 10 years ago, everybody agreed that the future would belong to distributed systems and Microservices. With this, applications would split into smaller pieces that are easier to replace or replicate. Those pieces would own their own storage and expose their information via (HTTP/RESTful-kind of) APIs only. They would also require a common approach for interacting with each other (ex. the Actor model) and own operational capabilities like management, security and governance (ex. Spring Boot). 5 years ago, microservices started to run inside Docker containers and to integrate with orchestration platforms (ex. Apache Mesos, Kubernetes) by default.


What is a Service Mesh

A Service Mesh is basically a model for organising networks of microservices so as to better support all kinds of service interaction and operability. A common approach is to use sidecar proxies (i.e. 1 per microservice or 1 per pod) so that distributed services may delegate their management, security and governance back to a central instance.
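To make the sidecar pattern concrete, here is a minimal sketch (hypothetical names, not a real injection output) of what a pod looks like once a proxy sits next to the application container:
apiVersion: v1
kind: Pod
metadata:
  name: my-service
spec:
  containers:
  - name: my-service          # the actual microservice
    image: my-registry/my-service:1.0
    ports:
    - containerPort: 9000
  - name: istio-proxy         # sidecar that handles the pod's traffic
    image: istio/proxyv2      # Envoy-based proxy image
  # in a real mesh, an init container additionally rewrites the pod's
  # iptables rules so that all traffic flows through the proxy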

The Service Mesh model is composed of 3 main architecture layers:
  • Data plane refers to all components holding and forwarding traffic from one interface to another.
  • Control plane refers to all components determining service routes and access.
  • Management plane refers to all components responsible for administrating and supervising.
Note: For simplicity, the Management plane often merges into the Control plane. There is also a new standard for defining a Service Mesh in Kubernetes, called the Service Mesh Interface (SMI); a minimal sketch follows below.
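
As an illustration of SMI, a TrafficSplit object declares how traffic to a service is shared across backend versions. A minimal sketch (all names are hypothetical; weights per the v1alpha2 spec):
apiVersion: split.smi-spec.io/v1alpha2
kind: TrafficSplit
metadata:
  name: my-split              # hypothetical name
spec:
  service: my-service         # the root service that clients address
  backends:                   # weighted share across versions
  - service: my-service-v1
    weight: 90
  - service: my-service-v2
    weight: 10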


What is Istio

Istio is an open-source project that implements the Service Mesh model. Under the hood, it consists of a number of different components:
  • Data plane
    • Envoy sidecar proxies deployed along with service pods through auto/manual injection
  • Control plane
    • Pilot for service discovery, load-balancing and routing (ex. from ingress to egress)
    • Citadel for service-to-service authentication and encryption
    • Mixer for policy management and telemetry
    • Chiron for DNS certificates
    • Galley for configuration validation
    • Jaeger or Zipkin for tracing
  • Management plane
    • Istiod for server management in both single- and multi-cluster modes
    • Prometheus for observability
    • Kiali for the dashboard
Possible alternatives to Istio are CNCF Linkerd, CNCF Kuma and HashiCorp Consul.


Setup client

Here I am picking v1.2.2 as per the server version shipped with my cluster (microk8s.istioctl version).
$ curl -L https://istio.io/downloadIstio | ISTIO_VERSION=1.2.2 sh -
$ mv istio-*/bin/istioctl ~/.local/bin/

Setup cluster

There is a comprehensive installation guide available in the official Istio documentation. On microk8s, I'll just hit:
$ microk8s.enable istio
If the cluster is not a microk8s one, proceed as follows:
# setup latest client version
$ curl -L https://istio.io/downloadIstio | sh -
$ mv istio-*/bin/istioctl ~/.local/bin/
# minimal server setup (please check-out the ref for production)
$ istioctl install --set profile=demo
✔ Istio core installed 
✔ Istiod installed 
✔ Egress gateways installed 
✔ Ingress gateways installed 
✔ Addons installed 
✔ Installation complete

# wait for server to show up
$ kubectl -n istio-system rollout status deployment/istio-ingressgateway

# double-check
$ istioctl version
client version: 1.6.5
control plane version: 1.6.5
data plane version: 1.6.5 (3 proxies)

Basic application

The tcp-echo application consists of a single minimal service. We'll first deploy it using a standard Kubernetes service manifest, and then extend it to an Istio use case.
$ kubectl create namespace istio-test
namespace/istio-test created
$ kubens istio-test
Active namespace is "istio-test".
$ kubectl apply -f istio-*/samples/tcp-echo/tcp-echo-services.yaml
service/tcp-echo created
deployment.apps/tcp-echo-v1 created
deployment.apps/tcp-echo-v2 created
We now have a service endpoint listening on port 9000 and equally balancing load between the 2 pods.
$ kubectl get pods,services,deployments
NAME                               READY   STATUS    RESTARTS   AGE
pod/tcp-echo-v1-7cffc756d4-9sdqj   2/2     Running   0          96s
pod/tcp-echo-v2-6d5d6ddb9-m7gcr    2/2     Running   0          96s

NAME               TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
service/tcp-echo   ClusterIP   10.152.183.116   <none>        9000/TCP,9001/TCP   96s

NAME                          READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/tcp-echo-v1   1/1     1            1           96s
deployment.apps/tcp-echo-v2   1/1     1            1           96s
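The equal balancing simply comes from the Kubernetes Service selecting the pods of both Deployments. Stripped down to its essentials, the sample manifest looks roughly like this (a sketch, not the verbatim file):
apiVersion: v1
kind: Service
metadata:
  name: tcp-echo
spec:
  ports:
  - name: tcp
    port: 9000
  - name: tcp-other
    port: 9001
  selector:
    app: tcp-echo              # matches the pods of v1 and v2 alike
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tcp-echo-v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tcp-echo
      version: v1
  template:
    metadata:
      labels:
        app: tcp-echo
        version: v1            # the label Istio subsets will rely on later
    spec:
      containers:
      - name: tcp-echo
        image: istio/tcp-echo-server
        ports:
        - containerPort: 9000
# tcp-echo-v2 is identical, with version: v2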
We are ready to test the service:
$ for i in {1..5}; do echo "is my version" | nc -w1 10.152.183.116 9000; done
two is my version
one is my version
two is my version
two is my version
one is my version
Well, that works great. But we can't operate the service without a way of exposing it externally, controlling new version rollout (ex. in the case of a canary release), and handling observability and auto-scaling (ex. in the case of traffic load). Since we don't want to change the application's code for that purpose, we will rely on Istio. Let us re-deploy the same manifest:
$ kubectl apply -f <(istioctl kube-inject -f istio-*/samples/tcp-echo/tcp-echo-services.yaml) service/tcp-echo unchanged deployment.apps/tcp-echo-v1 configured deployment.apps/tcp-echo-v2 configured
With this, we manually injected a sidecar "istio-proxy" into each one of the pods hosting our microservice "tcp-echo".
$ kubectl get pod $(kubectl get pods | tail -1 | cut -d' ' -f1) \
    -o jsonpath='{.spec.containers[*].name}'
tcp-echo istio-proxy
The sidecar proxy injection can also happen automatically by simply activating it at namespace level:
$ kubectl label namespace istio-test istio-injection=enabled
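The same setting can be expressed declaratively in the namespace manifest. Note that the mutating webhook only injects sidecars into pods created after the label is in place, so existing pods have to be recreated:
apiVersion: v1
kind: Namespace
metadata:
  name: istio-test
  labels:
    istio-injection: enabled   # triggers sidecar injection at pod creation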
As of now, all traffic is managed by the proxy network, a.k.a. the "mesh".
We can re-test the application and monitor the service using the Istio dashboard:
$ istioctl dashboard kiali
We'll add some Istio configuration objects:
$ kubectl apply -f istio-*/samples/tcp-echo/tcp-echo-all-v1.yaml
gateway.networking.istio.io/tcp-echo-gateway created
destinationrule.networking.istio.io/tcp-echo-destination created
virtualservice.networking.istio.io/tcp-echo created
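Condensed to its essentials (a sketch, not the verbatim sample), the manifest binds a TCP port of the shared ingress gateway to our service, declares the two version subsets, and pins all traffic to v1:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: tcp-echo-gateway
spec:
  selector:
    istio: ingressgateway      # run on the shared istio-ingressgateway
  servers:
  - port:
      number: 31400
      name: tcp
      protocol: TCP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: tcp-echo-destination
spec:
  host: tcp-echo
  subsets:                     # map the version labels to named subsets
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: tcp-echo
spec:
  hosts:
  - "*"
  gateways:
  - tcp-echo-gateway
  tcp:
  - match:
    - port: 31400
    route:
    - destination:
        host: tcp-echo
        port:
          number: 9000
        subset: v1             # 100% of traffic goes to v1 for now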
With this, our service is now accessible via the Istio central gateway:
$ export ISTIO_GW_HOST=$(kubectl -n istio-system describe svc istio-ingressgateway | grep IP: | awk '{print $2}')
$ echo "is my version" | nc -w1 $ISTIO_GW_HOST 31400
one is my version
To finish, we will configure our "virtual service" to route 80% of the traffic to v1 and 20% to v2:
$ kubectl apply -f istio-*/samples/tcp-echo/tcp-echo-20-v2.yaml
virtualservice.networking.istio.io/tcp-echo configured
$ for i in {1..10}; do echo "is my version" | nc -w1 $ISTIO_GW_HOST 31400; done
two is my version
one is my version
two is my version
one is my version
one is my version
one is my version
one is my version
one is my version
one is my version
one is my version
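For reference, the difference in tcp-echo-20-v2.yaml boils down to the weighted route of the virtual service; schematically:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: tcp-echo
spec:
  hosts:
  - "*"
  gateways:
  - tcp-echo-gateway
  tcp:
  - match:
    - port: 31400
    route:
    - destination:
        host: tcp-echo
        port:
          number: 9000
        subset: v1
      weight: 80               # 80% of connections stay on v1
    - destination:
        host: tcp-echo
        port:
          number: 9000
        subset: v2
      weight: 20               # 20% are routed to v2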


Take away

When designing modular applications, operability requirements are certainly a big challenge for Microservices developers, a pain for product owners and an overhead for sysadmins. The idea behind the Service Mesh is to delegate service operability to a managed and configured network of sidecar proxies. In the context of Cloud Native, Istio takes care of this automatically while services are being deployed into a Kubernetes namespace.

