Serverless on Kubernetes

After reading this title, you may be wondering how two things that seem not to fit together at all, serverless and Kubernetes, can be combined.

First, let us clarify.

What Do We Mean by Serverless?

At a high level, serverless is just a deployment method that completely abstracts the servers away. A developer just wants to write their application code, press a button, and get their application served without planning a deployment, setting up auto-scaling, or handling any other complex infrastructure tasks. For some use cases, it is arguably more developer-friendly than regular old Kubernetes. As the definition of serverless has matured, two distinct patterns have emerged:

Containers as a service: You deploy your application as a container and expect the platform to do everything else for you. You can run containers on Kubernetes yourself, but you still have moving parts to configure, deploy, update, and auto-scale. A containers-as-a-service platform simply accepts a single container from a developer and does everything else for them. Popular cloud offerings in this space are Google Cloud Run and AWS Fargate (see the sketch after this list).

Functions as a service: Rather than deploying an entire container, you deploy a function, literally a snippet of code that may contain only a single method, and then you build your service by combining multiple functions together. Popular cloud offerings in this space are Google Cloud Functions and AWS Lambda.
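To make the contrast concrete, here is roughly what each pattern looks like on Google Cloud's managed offerings (a sketch; the service name, image path, function name, runtime, and region are all placeholders):

# Containers as a service: hand the platform a ready-built container image
gcloud run deploy helloworld \
  --image=gcr.io/my-project/helloworld \
  --region=us-central1 \
  --allow-unauthenticated

# Functions as a service: hand the platform a single HTTP-triggered function
gcloud functions deploy helloHttp \
  --runtime=nodejs18 \
  --trigger-http \
  --allow-unauthenticated

So, serverless is an interesting way of doing things, but…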

How Does This Relate to Kubernetes?

Well, as an operations team running workloads, you should think of Kubernetes as a distributed platform for running all your infrastructure, regardless of how it was built or how it needs to be deployed. New projects may embrace serverless and look to deploy an entire stack on serverless functions. However, there are still use cases where this method won't be appropriate because software limitations simply won't allow it. Also, you may have to deploy software for many different teams, all with their own requirements. The chances are that the bulk of your workloads will be based on Kubernetes' native container deployment. But if developers want to complement these workloads with occasional serverless functions, Kubernetes can provide services that also support serverless deployments, so you can still run a single, more extensive distributed system without diluting your efforts across lots of different platforms.

How Does Kubernetes Do This?

For serverless containers, there is the Knative project. It provides "click to deploy" serverless deployment for containers by adding new custom resources to Kubernetes. This means, from a developer's point of view, you just build your containers and deploy them to Knative. Knative takes care of networking, revision tracking, and auto-scaling, scaling up with demand and back down to zero when idle.
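Because Knative is implemented as custom resources, you can list the resource types it registers once it is installed (a sketch, assuming a Knative Serving install; the API group name may vary by release):

kubectl api-resources --api-group=serving.knative.dev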

You can play with Knative by using Cloud Run. This is a managed GCP product that implements the same services as Knative without requiring a GKE cluster. You can, of course, extend your own GKE cluster by installing Knative yourself, or let Google do it for you by adding Cloud Run on GKE when building your cluster. That covers containers as a service; we will come back to functions shortly. First, here is what creating a Knative-ready GKE cluster looks like (assuming $CLUSTER_NAME and $CLUSTER_ZONE are set):

gcloud beta container clusters create $CLUSTER_NAME \
  --addons=HorizontalPodAutoscaling,HttpLoadBalancing,Istio \
  --machine-type=n1-standard-4 \
  --cluster-version=latest --zone=$CLUSTER_ZONE \
  --enable-stackdriver-kubernetes --enable-ip-alias \
  --enable-autoscaling --min-nodes=1 --max-nodes=10 \
  --enable-autorepair \
  --scopes cloud-platform

To install Knative on other clusters, refer to the Knative docs.

Deploying the Application Using Knative

You can simply use a .yaml file to deploy your containerized application, like this:
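A minimal manifest for a helloworld app might look like the following (a sketch; the exact apiVersion depends on your Knative release, and the image path is a placeholder):

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: helloworld   # this name becomes the route name used below
  namespace: default
spec:
  template:
    spec:
      containers:
        - image: gcr.io/my-project/helloworld   # placeholder image
          env:
            - name: TARGET
              value: "World"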

Apply this .yaml file in your GKE cluster using:

kubectl apply -f filename.yaml

The first time we deploy an app with Knative, we need to retrieve the external IP address configured for your cluster's front end (the Istio ingress gateway). To do this, we can run:

export IP_ADDRESS=$(kubectl get svc istio-ingressgateway --namespace istio-system --output 'jsonpath={.status.loadBalancer.ingress[0].ip}')

Finally, to get the app's domain, run:

kubectl get route helloworld --output=custom-columns=NAME:.metadata.name,DOMAIN:.status.domain

After this command, we get a DOMAIN column next to our app's name. Let's call it [DomainName]. Copy this domain and paste it into the next command to see the output in the terminal.
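Since DNS is typically not configured for this domain, the usual pattern is to send it as a Host header to the ingress IP retrieved earlier (a sketch, using the [DomainName] placeholder):

curl -H "Host: [DomainName]" http://$IP_ADDRESS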

Serverless Functions on Kubernetes Using OpenFaaS

If you have ever worked with Cloud Functions or AWS Lambda, the chances are you are very familiar with serverless functions; these are functions-as-a-service, or FaaS, platforms.

You can use this functionality in Kubernetes by installing an open-source project called OpenFaaS. There are other options available, such as Kubeless and Fission, but we will stick with OpenFaaS for now.

Functions deployed in OpenFaaS are still containers, but they are designed to be as minimal as possible, and unlike the containers you deploy as their own services, they do not need to contain a web-serving component.

HTTP requests are handled by a dedicated component of OpenFaaS, the gateway, to keep the function containers themselves as streamlined as possible. OpenFaaS is still not as lightweight as a dedicated platform like Google Cloud Functions, but it is much more customizable, and it lets you run the serverless-functions deployment model inside your own GKE cluster.
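Invocation goes through that gateway over plain HTTP. For instance, assuming the gateway has been made reachable on localhost:8080 and a function named helloworld is deployed (both placeholders), calling it is a plain HTTP request:

# assumes: kubectl port-forward -n openfaas svc/gateway 8080:8080
curl -d "test input" http://127.0.0.1:8080/function/helloworld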

For the OpenFaaS installation guide, refer to their website. We can deploy functions with OpenFaaS by simply pointing at a definition file on GitHub:

faas-cli deploy -f https://raw.githubusercontent.com/openfaas/faas/master/stack.yaml

This deploys the sample functions defined in that file, and OpenFaaS also ships with a handy web console for deploying and invoking the functions you want. If you are breaking your stack down into component services and adopting serverless functions for some of them, OpenFaaS gives you a fully functional serverless platform while still running inside your own Kubernetes cluster.
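If you would rather write your own definition file than deploy the sample stack, a minimal one looks roughly like this (a sketch; the function name, handler directory, and image are placeholders):

version: 1.0
provider:
  name: openfaas
  gateway: http://127.0.0.1:8080   # where faas-cli will find the gateway
functions:
  helloworld:
    lang: python3                  # build template for the function
    handler: ./helloworld          # directory containing the function code
    image: my-registry/helloworld:latest   # placeholder image reference

You would then build, push, and deploy it with faas-cli up -f stack.yaml, or just faas-cli deploy -f stack.yaml if the image already exists in your registry.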