Knative is a serverless framework for Service-Mesh and Event-Mesh architectures that makes it easier to run elastic applications in Kubernetes.
The daily mood
I am just starting to realise how much my work life has changed with my "new" job. No more technical marketing, only questionable facts. No more business trips, only home office. No more telephone calls, only Slack chats. No more productivity tools like Trello and Todoist, only "old-school" Jira projects.
There are also things that didn't change, like the daily concern about whether I am doing the right thing at the right time in order to perform as an employee while protecting my work-life balance. Those concerns are there day in and day out, and people may weigh them one way or another...
Given that my organisation is currently pivoting its business and technology, automation is key. This is our 2nd post around Cloud native, and we are looking at Serverless in the context of Microservices and Kubernetes, i.e. the idea of letting services operate more autonomously through dynamic resource allocation.
Knative is an open-source project originally created by Google and Pivotal, with contributions from over 50 different companies. It mainly provides high-level architecture pieces for easily building and running Serverless applications in Kubernetes. It consists of two primary abstraction components: Serving and Eventing.
- Serving is an umbrella for the Service-Mesh side, managing service-based application deployment and operability. Besides supporting and requiring a Service Mesh (see my previous post about Istio) to be deployed and activated on the cluster, recent versions offer the option to operate on top of an API Gateway instead (e.g. Gloo, Kong, Ambassador). This works via dedicated bridges to the underlying networking layer of your choice.
- Eventing is an umbrella for the Event-Mesh side, managing loosely-coupled producers and consumers. It integrates with a number of different event sources. The main use-cases are Publish (fire-and-forget), Subscribe (channels), and possibly Event Stream Processing (ESP) or Complex Event Processing (CEP). It is consistent with the CloudEvents specification developed by the CNCF Serverless working group.
Two further Knative components used to be shipped and maintained as part of the project:
- Build supported container builds from source (CI pipeline) and was deprecated in favor of Tekton.
- Observability supported monitoring and was deprecated in favor of Prometheus.
There are a number of vendor distributions of Knative, such as Google Cloud Run and Red Hat OpenShift Serverless.
Knative is often confused with the following:
- The Kubernetes-native paradigm (read the article Why Kubernetes native instead of Cloud native)
- Kubernetes-as-a-Service offers
- Serverless enablers like Function-as-a-Service (FaaS)
Cluster setup
On microk8s, Knative is available as an add-on that can simply be enabled:
$ microk8s.enable knative # also enables dns and istio
But my microk8s is v1.17, which comes with Knative v0.9.0 as well as older components that need to be switched off in order to spare hardware resources.
$ kubectl delete namespace knative-monitoring
In case your cluster is not a microk8s one, or you want to install the latest Knative v0.16.0, you need to proceed as follows:
# serving
$ kubectl apply --filename \
  https://github.com/knative/serving/releases/download/v0.16.0/serving-crds.yaml
$ kubectl apply --filename \
  https://github.com/knative/serving/releases/download/v0.16.0/serving-core.yaml
$ kubectl apply --filename \
  https://github.com/knative/net-istio/releases/download/v0.16.0/release.yaml
# eventing
$ kubectl apply --selector knative.dev/crd-install=true --filename \
  https://github.com/knative/eventing/releases/download/v0.16.0/eventing.yaml
$ kubectl apply --filename \
  https://github.com/knative/eventing/releases/download/v0.16.0/eventing.yaml
$ kubectl apply --filename \
  https://github.com/knative/eventing/releases/download/v0.16.0/in-memory-channel.yaml
$ kubectl apply --filename \
  https://github.com/knative/eventing/releases/download/v0.16.0/channel-broker.yaml
Knative client
kn is not a strong requirement for using Knative, but it nicely abstracts some low-level kubectl commands required for the serving use-case.
# latest version checkout
$ wget https://storage.googleapis.com/knative-nightly/client/latest/kn-linux-amd64
# or pick the version corresponding to the cluster (in my case 0.9.0)
$ wget https://github.com/knative/client/releases/download/v0.9.0/kn-linux-amd64
# finally
$ chmod +x kn-linux-amd64
$ mv ./kn-linux-amd64 ~/.local/bin/kn
$ kn version
Version:      v0.9.0
Build Date:   2019-10-29 19:00:19
Git Revision: 4ab869a
Supported APIs:
- serving.knative.dev/v1alpha1 (knative-serving v0.9.0)
Note that knctl is an older client with actually more capabilities, such as installing the server components, but it was eventually merged into kn and is therefore not maintained any more.
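To give an idea of what this abstraction looks like in practice, here are a few kn commands replacing lower-level kubectl calls once a Knative Service exists. This is just an illustrative sketch: the exact flags and output may vary between client versions, and the service name and environment value below are the ones used later in this post.
# list Knative services and revisions in the current namespace
$ kn service list
$ kn revision list
# inspect a single service (URL, revisions, conditions)
$ kn service describe helloworld-go
# roll out a new revision by changing an environment variable (illustrative value)
$ kn service update helloworld-go --env TARGET=tncad-v2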
Serving test
I'm creating a new namespace dedicated to Knative applications:
$ kubectl create namespace knative-test
$ kubectl label namespace knative-test istio-injection=enabled
$ kubens knative-test
We'll take the helloworld-go application from the Serving sample applications, which is basically a "greeting" service. Instead of re-building it from scratch as per the reference tutorial, we'll just pull the corresponding image from the GCP container registry.
$ kn service create helloworld-go \
--namespace knative-test \
--image gcr.io/knative-samples/helloworld-go \
  --env TARGET=tncad
Creating service 'helloworld-go' in namespace 'knative-test':
  0.031s The Route is still working to reflect the latest desired specification.
  0.127s Configuration "helloworld-go" is waiting for a Revision to become ready.
109.362s ...
109.470s Ingress has not yet been reconciled.
110.821s Ready to serve.
Service 'helloworld-go' created with latest revision 'helloworld-go-yhnfz-1' and URL:
http://helloworld-go.knative-test.example.com
We can now hit the service through Istio gateway:
$ export ISTIO_GW_HOST=$(kubectl -n istio-system get svc istio-ingressgateway -o jsonpath='{.spec.clusterIP}')
$ curl -H "Host: helloworld-go.knative-test.example.com" http://$ISTIO_GW_HOST
Hello tncad!
If we generate some load and monitor the pods in the knative-test namespace, we can see them automatically scaling according to incoming traffic: scaling down to 0 when no requests are coming in, and otherwise scaling up to a maximum number of pods defined by configuration at the service or global level (a sketch of these per-service settings follows the load-test commands below).
$ for i in {1..5}; do
    curl -s \
      -H "Host: helloworld-go.knative-test.example.com" \
      http://$ISTIO_GW_HOST > curl.log
  done &
$ watch -n5 kubectl get pods
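As a sketch of the per-service bounds mentioned above, the Knative autoscaler reads annotations on the revision template. The values below are arbitrary, illustrative choices (scale-to-zero allowed, at most 5 pods, a target of roughly 10 concurrent requests per pod), not recommendations:
$ cat <<EOF | kubectl apply -f -
apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: helloworld-go
  namespace: knative-test
spec:
  template:
    metadata:
      annotations:
        # allow scale-to-zero and cap the revision at 5 pods (illustrative values)
        autoscaling.knative.dev/minScale: "0"
        autoscaling.knative.dev/maxScale: "5"
        # soft target of concurrent requests per pod used by the autoscaler
        autoscaling.knative.dev/target: "10"
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go
          env:
            - name: TARGET
              value: "tncad"
EOF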
Since we (optionally) activated automatic sidecar proxy injection at the namespace level, we can of course monitor using the Istio dashboard as well:
$ istioctl dashboard kiali
What happened?
Using kn, we effectively created a Knative custom object of the following kind:
$ cat <<EOF | kubectl apply -f -
apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: helloworld-go
  namespace: knative-test
spec:
  template:
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go
          env:
            - name: TARGET
              value: "tncad"
EOF
In the background, a number of resources (custom objects) were automatically added:
$ kubectl get all
NAME                                                    READY   STATUS    RESTARTS   AGE
pod/helloworld-go-yfwdk-1-deployment-84f96595fd-wtzlv   3/3     Running   0          14s

NAME                                    TYPE           CLUSTER-IP       EXTERNAL-IP                                             PORT(S)             AGE
service/helloworld-go                   ExternalName   <none>           cluster-local-gateway.istio-system.svc.cluster.local    <none>              6m11s
service/helloworld-go-yfwdk-1           ClusterIP      10.152.183.118   <none>                                                  80/TCP              6m17s
service/helloworld-go-yfwdk-1-8nt85     ClusterIP      10.152.183.156   <none>                                                  80/TCP,8022/TCP     6m17s
service/helloworld-go-yfwdk-1-metrics   ClusterIP      10.152.183.198   <none>                                                  9090/TCP,9091/TCP   6m17s

NAME                                               READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/helloworld-go-yfwdk-1-deployment   1/1     1            1           6m17s

NAME                                                          DESIRED   CURRENT   READY   AGE
replicaset.apps/helloworld-go-yfwdk-1-deployment-84f96595fd   1         1         1       6m17s

NAME                                        URL                                              LATESTCREATED           LATESTREADY             READY   REASON
service.serving.knative.dev/helloworld-go   http://helloworld-go.knative-test.example.com    helloworld-go-yfwdk-1   helloworld-go-yfwdk-1   True

NAME                                      URL                                              READY   REASON
route.serving.knative.dev/helloworld-go   http://helloworld-go.knative-test.example.com    True

NAME                                                  CONFIG NAME     K8S SERVICE NAME        GENERATION   READY   REASON
revision.serving.knative.dev/helloworld-go-yfwdk-1    helloworld-go   helloworld-go-yfwdk-1   1            True

NAME                                              LATESTCREATED           LATESTREADY             READY   REASON
configuration.serving.knative.dev/helloworld-go   helloworld-go-yfwdk-1   helloworld-go-yfwdk-1   True

All Knative Serving custom resource types were effectively instantiated:
$ kubectl get crds | grep serving.knative | cut -d' ' -f1
configurations.serving.knative.dev
revisions.serving.knative.dev
routes.serving.knative.dev
services.serving.knative.dev
Explanation:
- Service (not to be confused with the standard Kubernetes Service object) controls the creation of other objects and manages the whole lifecycle of your workload.
- Route maps a network endpoint to one or more revisions (see the traffic-splitting sketch after this list).
- Configuration maintains the desired state for your deployment.
- Revision is a point-in-time, immutable snapshot of the code and configuration for each modification made to the workload (as per the Twelve-Factor App methodology).
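To illustrate how a Route distributes traffic across Revisions, here is a sketch of a Service spec pinning a percentage split between two revisions. It assumes the newer serving.knative.dev/v1 API (available with the v0.16.0 install shown earlier, not with v0.9.0), and the second revision name is hypothetical:
$ cat <<EOF | kubectl apply -f -
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: helloworld-go
  namespace: knative-test
spec:
  template:
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go
  traffic:
    # keep 90% of the traffic on the existing revision
    - revisionName: helloworld-go-yfwdk-1
      percent: 90
    # send 10% to a newer (hypothetical) revision, also reachable via a "canary" tag
    - revisionName: helloworld-go-abcde-2
      percent: 10
      tag: canary
EOF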
$ kn service describe helloworld-go
Name:       helloworld-go
Namespace:  knative-test
Age:        9m
URL:        http://helloworld-go.knative-test.example.com
Address:    http://helloworld-go.knative-test.svc.cluster.local

Revisions:
  100%  @latest (helloworld-go-yfwdk-1) [1] (9m)
        Image:  gcr.io/knative-samples/helloworld-go (pinned to 5ea96b)

Conditions:
  OK TYPE                  AGE REASON
  ++ Ready                  9m
  ++ ConfigurationsReady    9m
  ++ RoutesReady            9m

Of course we can clean up the environment as fast as we "messed it up":
$ kn service delete helloworld-go
Eventing test
We'll take the helloworld-go application from the Eventing sample applications. It shows how to consume a CloudEvent in Knative Eventing and, optionally, how to respond back with another CloudEvent in the HTTP response. Since I wasn't able to find any corresponding shared image, I re-built it from source and then pushed it to my local Kubernetes (no need for a third-party registry account).
$ go mod init helloworld.go
go: creating new go.mod: module helloworld.go
$ docker build . -t helloworld-go:local
$ docker save helloworld-go > helloworld-go.tar
$ microk8s ctr image import helloworld-go.tar
$ rm helloworld-go.tar
Unlike Knative Serving with its custom "Service" resource, Knative Eventing does not currently offer a single high-level custom resource such as a "Stream".
$ kubectl get crds | grep eventing.knative | cut -d' ' -f1
apiserversources.sources.eventing.knative.dev
brokers.eventing.knative.dev
channels.eventing.knative.dev
containersources.sources.eventing.knative.dev
cronjobsources.sources.eventing.knative.dev
eventtypes.eventing.knative.dev
triggers.eventing.knative.dev
We actually need to create a "Broker" custom resource, plus a "Trigger" that allows the "helloworld-go" service to subscribe to it.
$ cat <<EOF | kubectl apply -f -
apiVersion: eventing.knative.dev/v1alpha1
kind: Broker
metadata:
name: default
namespace: knative-test
spec: {}
---
apiVersion: eventing.knative.dev/v1alpha1
kind: Trigger
metadata:
name: helloworld-go
namespace: knative-test
spec:
broker: default
filter:
attributes:
type: dev.knative.samples.helloworld
source: dev.knative.samples/helloworldsource
subscriber:
ref:
apiVersion: v1
kind: Service
name: helloworld-go
EOF
broker.eventing.knative.dev/default created
trigger.eventing.knative.dev/helloworld-go created
# Get the Broker URL
$ kubectl --namespace knative-test get broker default
NAME READY REASON HOSTNAME AGE
default False DeploymentUnavailable default-broker.knative-test.svc.cluster.local 32m
This time we will create a standard Kubernetes Service and Deployment, but it should also work with kn:
$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: helloworld-go
  namespace: knative-test
spec:
  selector:
    app: helloworld-go
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: helloworld-go
  namespace: knative-test
spec:
  replicas: 1
  selector:
    matchLabels: &labels
      app: helloworld-go
  template:
    metadata:
      labels: *labels
    spec:
      containers:
        - name: helloworld-go
          image: docker.io/library/helloworld-go:local
EOF
service/helloworld-go created
deployment.apps/helloworld-go created

# Deploy a curl pod and SSH into it
$ kubectl --namespace knative-test run curl --image=radial/busyboxplus:curl -it
[ root@curl:/ ]$ curl -v "http://default-broker.knative-test.svc.cluster.local/knative-samples/default" \
  -X POST \
  -H "Ce-Id: 536808d3-88be-4077-9d7a-a3f162705f79" \
  -H "Ce-Specversion: 1.0" \
  -H "Ce-Type: dev.knative.samples.helloworld" \
  -H "Ce-Source: dev.knative.samples/helloworldsource" \
  -H "Content-Type: application/json" \
  -d '{"msg":"Hello World from the curl pod."}'
exit

# Display helloworld-go app logs
$ kubectl --namespace knative-test logs -l app=helloworld-go --tail=50
<<Bonus: Operators?>>
Next steps
I am looking forward to any experience from companies using Knative in production, preferably in combination with a GitOps deployment model.
As far as I know, my organisation is not yet taking advantage of Serverless computing in production, but we are already looking at Function-as-a-Service (FaaS) and CloudEvents as part of different projects. In fact, we may re-factor our whole platform architecture in the future in order to reduce infrastructure and maintenance costs.
Provided that Knative matures further by then, I assume that migrating our applications to it would be a significant but worthwhile amount of work.
References