015: Hybrid Deployment with Envoy-Proxy

This article discusses different proxy types and introduces Envoy, a high-performance distributed proxy designed for microservices architectures.

The daily mood

In the past I worked intensively with our Platform-as-a-Service (PaaS) as a user. Although I was interested in it, I didn't have access to architecture details, developer repositories, operational processes and environments. Now that I have all this, it feels a bit strange that I am not yet able to bring it all to life, in a way that would let me immediately run and use our solutions as needed... Not that easy! Indeed, an application that looks from the frontend like a simple monolith (a web server and a database) actually turns out to be a big and resource-intensive collection of microservices. These need to be configured, secured and provisioned individually so that the application can be operated in Kubernetes.

This finding "positively" explains how we can scale across a large number of tenants. It also "negatively" explains how much effort it takes for my organisation to stay innovative (ex. rolling out new features) and reliable (ex. breaking as little as possible) at the same time. In terms of architecture and governance, it is a great opportunity for my team to analyse and improve. As a matter of fact, I must admit that my focus is now shifting completely: "run and use" is no longer the priority of my job, but rather "analyse and improve". Because running that kind of application locally is so expensive in terms of knowledge, effort and hardware, I wanted to figure out what it takes for our developers to work on an app.

Use-Case

Assume you are the developer of a new (micro-)service that has to be integrated into a larger application. Your service builds, unit-tests, packages as a container and executes successfully in the context of integration tests (ex. against mock interfaces). Now you want to bring it to the next level of operations, as required by QA system tests and SRE promotion to PRE-PROD. Here are the options for running some parts of the application locally and other parts on a shared infrastructure (ex. a DEV cluster):
  1. Using a utility like kubefwd (a wrapper around kubectl port-forward) or Telepresence (a two-way network proxy for Kubernetes), you can call or troubleshoot remote services as if they were running locally, provided that there are only a few host:port pairs to access.
  2. Implementing a cross-cluster deployment using an edge proxy that routes all service communications. This component should be standard so that it works for any application and can be shared across teams.
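As a sketch of the first option, the commands below forward a remote service to localhost (the service and namespace names are made up for illustration, and the cluster context matches the awsdev example used later):

```shell
# Forward port 80 of the remote "orders" service (hypothetical name)
# to localhost:8080; local code can then call http://localhost:8080.
kubectl --context awsdev -n platform port-forward svc/orders 8080:80 &

# kubefwd forwards ALL services of a namespace at once and patches
# /etc/hosts, so cluster-internal DNS names also resolve locally.
sudo kubefwd services -n platform -x awsdev
```

This works well for a handful of services, but each forwarded port is one more thing to manage by hand, which is why the second option scales better.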
Hybrid-stack

We integrated the second option into our custom Helm plugin, so that the deployment process looks as follows:
  1. helm <plugin_name> init                          # setup remote host
  2. kubectx <remote_cluster>                         # awsdev
  3. helm <plugin_name> deploy <some_full_stack>      # ex. platform
  4. kubectx <local_cluster>                          # microk8s
  5. helm <plugin_name> deploy <some_hybrid_stack>    # ex. webapp
We met the following challenges:
  • The number of different possible stacks is even larger than before, and it became likely that the exact stack required for your specific need did not exist yet.
  • We needed to adjust DNS conventions for accessing remote services, which actually even impacted how local services talk to each other.
  • Since we use the traefik reverse HTTP proxy as our default Ingress controller, we couldn't route certain communications with v1 (no TCP support), for ex. when using Kafka.
The last two challenges were solved using a common chart "hybrid-services-proxy" that creates an Envoy edge proxy which automatically redirects internal service calls to the distant host.
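To illustrate what such an edge proxy does, here is a minimal, hypothetical Envoy static configuration (hostnames, ports and names are assumptions for illustration, not the actual chart values): a plain TCP listener that forwards local Kafka traffic to a remote cluster endpoint, which is exactly the kind of L4 routing that traefik v1 could not do:

```shell
# Write a minimal Envoy bootstrap config (hypothetical hosts/ports):
# a TCP listener on :9092 proxying to the remote Kafka endpoint.
cat > envoy.yaml <<'EOF'
static_resources:
  listeners:
  - name: kafka_listener
    address:
      socket_address: { address: 0.0.0.0, port_value: 9092 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.tcp_proxy
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.tcp_proxy.v3.TcpProxy
          stat_prefix: kafka
          cluster: remote_kafka
  clusters:
  - name: remote_kafka
    type: LOGICAL_DNS
    connect_timeout: 5s
    load_assignment:
      cluster_name: remote_kafka
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address: { address: kafka.awsdev.example.com, port_value: 9092 }
EOF
echo "wrote envoy.yaml"
```

A chart like "hybrid-services-proxy" then only needs to template the remote host into such a listener for every service that must be reachable across clusters, and Envoy can load the file with `envoy -c envoy.yaml`.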

Reminder on Proxy types

Like me, you might sometimes be confused about the naming and use of different proxy components.

A proxy server is a hardware appliance or software application that acts as an intermediary for client requests on their way to server resources. In a microservices world, proxies operate at application layer 7 (ex. HTTP, gRPC) on top of transport layer 4 (ex. TCP/IP) of the OSI model. The main kinds of proxies are:
  • Forward / Egress proxies sit in the client network and route communications externally, ex. DNS, SOCKS, Internet, transparent / caching, encryption proxies.
  • Reverse / Ingress proxies sit in the server network and route communications internally, ex. load balancing, authentication, decryption.
  • Edge / Service proxies may sit in a Demilitarized Zone (DMZ) or behind another reverse proxy to improve network security and performance (ex. API Gateway pattern), service-to-service communication and observability (ex. Sidecar pattern).
What is Envoy

Originally built at Lyft and later donated to the CNCF, Envoy is a high-performance C++ distributed proxy designed for single services and applications. Envoy became very popular around the cloud-native concept of a "universal" data plane designed for the management and observability (ex. health checks, metric collection) of ephemeral microservice and elastic service mesh architectures.

Why Envoy

Like traefik, but unlike traditional L7 load balancers (LB) such as NGINX and HAProxy, Envoy has a very small footprint and abstracts the network by providing a complete set of features in a platform-agnostic manner. An interesting LB benchmark by SolarWinds Loggly shows that Envoy outperforms other solutions, especially in throughput / requests per second (RPS) and HTTPS latency, thanks to its optimized CPU threading model. Another benchmark by Ambassador explains that RPS is less relevant in a Kubernetes context, since natively supported horizontal scaling makes proxies scale linearly; what matters instead is the latency introduced by dynamically scaling up and down, which is also better with Envoy. Ambassador also gives further decision criteria for choosing Envoy over NGINX for its API Gateway: while NGINX started as a web server and its open-source version is limited in favour of the commercial offering NGINX Plus, Envoy was developed from the ground up specifically for microservices, and is fully open-source. Envoy is also deployed in production at Google, Apple and Salesforce.

Getting started with Envoy proxy

I found this Katacoda project very helpful as an overview of Envoy proxy settings and capabilities. Like traefik and NGINX, Envoy can easily be started as a container and individually configured per deployment via a "hitless" YAML file (updates without restart). It comes with support for HTTP/2, dynamic service discovery, advanced load balancing, and resilience patterns such as circuit breaking and traffic shaping.
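For a first local experiment, Envoy can be started as a container with a mounted config file (a sketch, assuming a local `envoy.yaml` that also defines an admin listener on port 9901; the image tag is just an example):

```shell
# Run Envoy in Docker with a local config file mounted read-only.
# envoy.yaml is assumed to define listeners plus an admin port on 9901.
docker run --rm -d --name envoy \
  -v "$PWD/envoy.yaml:/etc/envoy/envoy.yaml:ro" \
  -p 9901:9901 -p 10000:10000 \
  envoyproxy/envoy:v1.27-latest

# The admin interface reports readiness and the currently loaded config:
curl -s http://localhost:9901/ready
```

The admin interface is also where "hitless" behaviour becomes visible: configuration served via the dynamic APIs is picked up without restarting the proxy.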
