How to Set Up Knative Serving on Kubernetes

Knative is an exciting project that backs many of the services you may already be using. It simplifies the configuration of services on Kubernetes, letting developers become productive on the platform quickly without having to understand everything behind it. While many of the features Knative provides can be configured individually, through multiple configuration files or Helm charts, the real value comes from the simplicity of a single Knative Service definition. Knative is one of my favorite projects in the Kubernetes ecosystem. With the recent 1.0 release, and its application to become a CNCF incubating project, there has never been a better time to install it and try it out.

Prerequisites

In this demo we are going to use an AWS cluster deployed with Platform9 Managed Kubernetes. Other cloud providers should work as well, since we are just deploying a vanilla Kubernetes cluster with LoadBalancer access.

Documentation on how to set this up can be found here: https://platform9.com/docs/kubernetes/get-started-aws.

  • A cluster with at least 2 worker nodes is required for this demo. Without at least 2 worker nodes you may run into issues such as:
    • Warning  FailedScheduling  22s (x5 over 38s)  default-scheduler  0/2 nodes are available: 1 Insufficient cpu, 1 node(s) had taint {node-role.kubernetes.io/master: true}, that the pod didn’t tolerate.
  • This installation demo should work in any environment with a publicly accessible LoadBalancer.
    • We are going to use http01 challenges to set up TLS, which means Let's Encrypt will need to be able to reach our endpoint.

Installing Knative Serving: Community Documentation

This blog post relies on the Knative installation guides. We have selected an opinionated install and provide the steps in order to allow for a quick installation.

For community provided installation instructions please follow:

https://knative.dev/docs/install/serving/install-serving-with-yaml/

Knative Install QuickStart with Kourier and AutoTLS

Our installation is going to use Knative Serving with Kourier as the networking layer. Other networking layers, such as Istio, can also be used.

Install Steps

Install Serving CRDs

Knative uses Custom Resource Definitions (CRDs) to expand what can be created within Kubernetes. We create the CRDs before installing Knative Serving so that the resources are available and we avoid race conditions with the API. Run the following command to create the Knative CRDs.

kubectl apply -f https://github.com/knative/serving/releases/download/knative-v1.1.0/serving-crds.yaml

With the CRDs installed, we can now define new Kubernetes resources, which will be managed by the Knative Serving installation.

$ kubectl get crd|grep -i knative
certificates.networking.internal.knative.dev          2022-02-02T19:39:11Z
clusterdomainclaims.networking.internal.knative.dev   2022-02-02T19:39:11Z
configurations.serving.knative.dev                    2022-02-02T19:39:11Z
domainmappings.serving.knative.dev                    2022-02-02T19:39:12Z
images.caching.internal.knative.dev                   2022-02-02T19:39:13Z
ingresses.networking.internal.knative.dev             2022-02-02T19:39:12Z
metrics.autoscaling.internal.knative.dev              2022-02-02T19:39:12Z
podautoscalers.autoscaling.internal.knative.dev       2022-02-02T19:39:12Z
revisions.serving.knative.dev                         2022-02-02T19:39:12Z
routes.serving.knative.dev                            2022-02-02T19:39:12Z
serverlessservices.networking.internal.knative.dev    2022-02-02T19:39:13Z
services.serving.knative.dev                          2022-02-02T19:39:13Z

Install Serving Core

Next we need to install Knative Serving.

kubectl apply -f https://github.com/knative/serving/releases/download/knative-v1.1.0/serving-core.yaml

Serving Core will install a few different resources in the knative-serving namespace. Let’s take a quick look at a couple of them.

  • Activator – watches for requests to services and notifies the autoscaler. If there isn't yet a pod backing a revision, it buffers and retries the request until one exists.
  • Autoscaler – scales services (pods) up and down based on received metrics.
  • Controller – watches for the Knative resources defined by the CRDs we installed. When a resource is created, it takes action and attempts to reconcile the desired state.
  • Webhook – validates requests to the Kubernetes API for Knative CRDs. If malformed YAML is sent, it rejects the request with an error describing what to correct.

Each of these components does a bit more than described above. For more information on each, check out the Knative Docs.
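
Before moving on, it is worth confirming that the Serving deployments came up; each should eventually show READY:

kubectl get deployment -n knative-serving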

Install Kourier

We need to install a networking layer to route traffic and provide the features required by Knative. Run the following command to install Kourier.

kubectl apply -f https://github.com/knative/net-kourier/releases/download/knative-v1.1.0/kourier.yaml
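
Before wiring Kourier into Knative, you can check that its pods came up. The names and namespaces below reflect the v1.1 release and may differ in other versions:

kubectl get pods -n kourier-system
kubectl get pods -n knative-serving | grep kourier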

Now that we have installed Kourier, we need to set it as the ingress for Knative.

kubectl patch configmap/config-network \
  --namespace knative-serving \
  --type merge \
  --patch '{"data":{"ingress-class":"kourier.ingress.networking.knative.dev"}}'
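
To confirm the patch landed, read the value back; it should print kourier.ingress.networking.knative.dev:

kubectl get configmap config-network -n knative-serving -o jsonpath='{.data.ingress-class}'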

Next we need to look at the External IP assigned to the Kourier service. This is what we will use to configure DNS for our subdomain.

$ kubectl --namespace kourier-system get service kourier
NAME      TYPE           CLUSTER-IP     EXTERNAL-IP                                                               PORT(S)                      AGE
kourier   LoadBalancer   10.21.242.46   abc12780202894644b378091161ef43b-1281173451.us-west-2.elb.amazonaws.com   80:32527/TCP,443:30809/TCP   41s

Setup DNS

If your LoadBalancer provides a hostname, you will need to create a CNAME record with your DNS provider. In our example we are going to use *.knative.pf9.io. If your LoadBalancer provides an IP, create an A record instead. Make sure the record covers ALL subdomains: *.knative.pf9.io is the actual record you will create.

The domain used should be your own domain. We are using pf9.io for our examples; you may be using something completely different.

$ host hello.default.knative.pf9.io
hello.default.knative.pf9.io is an alias for ab9019390492a4e2787bdbca2403c1aa-1524719612.us-west-2.elb.amazonaws.com.
ab9019390492a4e2787bdbca2403c1aa-1524719612.us-west-2.elb.amazonaws.com has address 34.214.188.12
ab9019390492a4e2787bdbca2403c1aa-1524719612.us-west-2.elb.amazonaws.com has address 34.213.195.92

DigitalOcean Record Creation Example

$ doctl compute domain records create pf9.io --record-type CNAME --record-name '*.knative' --record-data ab9019390492a4e2787bdbca2403c1aa-1524719612.us-west-2.elb.amazonaws.com.
ID           Type     Name          Data                                                                       Priority    Port    TTL     Weight
293275512    CNAME    *.knative    ab9019390492a4e2787bdbca2403c1aa-1524719612.us-west-2.elb.amazonaws.com    0           0       1800    0
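
Since our demo cluster is on AWS, here is the equivalent with Route 53. This is a sketch: the hosted zone ID Z0123456789ABC is a placeholder for your own zone.

# Z0123456789ABC is a placeholder; substitute your hosted zone ID
aws route53 change-resource-record-sets \
  --hosted-zone-id Z0123456789ABC \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "*.knative.pf9.io",
        "Type": "CNAME",
        "TTL": 1800,
        "ResourceRecords": [{"Value": "ab9019390492a4e2787bdbca2403c1aa-1524719612.us-west-2.elb.amazonaws.com"}]
      }
    }]
  }'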

After picking the hostname, we need to edit the config-domain configmap to use our domain instead of the example domain.

kubectl patch configmap/config-domain \
  --namespace knative-serving \
  --type merge \
  --patch '{"data":{"knative.pf9.io":""}}'

Take a quick look to make sure the domain was configured correctly. I've removed most of the example content below to show that our domain has been added to the configuration.

$ kubectl describe configmap/config-domain -n knative-serving
Name:         config-domain
Namespace:    knative-serving
Labels:       app.kubernetes.io/component=controller
              app.kubernetes.io/name=knative-serving
              app.kubernetes.io/version=1.1.0
              serving.knative.dev/release=v1.1.0
Annotations:  knative.dev/example-checksum: 81552d0b

Data
====
_example:
...

# Routes having the cluster domain suffix (by default 'svc.cluster.local')
# will not be exposed through Ingress. You can define your own label
# selector to assign that domain suffix to your Route here, or you can set
# the label
#    "networking.knative.dev/visibility=cluster-local"
# to achieve the same effect.  This shows how to make routes having
# the label app=secret only exposed to the local cluster.
svc.cluster.local: |
  selector:
    app: secret

knative.pf9.io:
----

Tip: Now that we have configured DNS, wait until the record resolves before proceeding. The next step sets up net-http01, which uses HTTP-based challenges against Let's Encrypt to issue a certificate. We need a DNS record that resolves so that certificate issuance doesn't fail.
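
A simple way to wait from the shell; test.knative.pf9.io is an arbitrary name that should resolve through the wildcard record:

until host test.knative.pf9.io > /dev/null 2>&1; do sleep 10; done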

Configure Auto-TLS

Next we are going to configure Auto-TLS. This gives us the ability to deploy services with an HTTPS endpoint and automates certificate configuration. We are using Knative net-http01 instead of installing and managing our own cert-manager configuration.

First we will apply the net-http01 yaml.

kubectl apply -f https://github.com/knative/net-http01/releases/download/knative-v1.1.0/release.yaml

Then we will patch our setup to use net-http01 and enable Auto-TLS, which is disabled by default.

kubectl patch configmap/config-network \
  --namespace knative-serving \
  --type merge \
  --patch '{"data":{"certificate-class":"net-http01.certificate.networking.knative.dev"}}'

kubectl patch configmap/config-network \
  --namespace knative-serving \
  --type merge \
  --patch '{"data":{"auto-tls":"Enabled"}}'
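
As before, we can read both values back to confirm the patches applied:

kubectl get configmap config-network -n knative-serving \
  -o jsonpath='{.data.certificate-class}{"\n"}{.data.auto-tls}{"\n"}'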

Optional: Install the Knative CLI

At this point we can install the Knative command line to make it easier to interact with Knative based resources.

https://knative.dev/docs/install/client/install-kn/
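
On macOS this can be as simple as a Homebrew install (per the Knative client docs); on other platforms, grab a release binary from the link above:

brew install knative/client/kn
kn version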

Create a Basic Knative Service

Let's create our first service, taken from the Knative documentation. This service will create everything we need for a working TLS endpoint. Save the following in a file called hello.yaml.

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello
spec:
  template:
    metadata:
      # This is the name of our new "Revision," it must follow the convention {service-name}-{revision-name}
      name: hello-world
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go
          ports:
            - containerPort: 8080
          env:
            - name: TARGET
              value: "World"

Some of the information in this YAML may look familiar. We are creating a Service and providing information similar to a Kubernetes Deployment/Pod/Service. However, if we look at the apiVersion, we are using serving.knative.dev/v1 instead of a core Kubernetes API group, which means we are actually defining a Knative Service as defined by the CRD: `services.serving.knative.dev`.
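
A quick way to see the Knative resource kinds registered on the cluster:

kubectl api-resources --api-group=serving.knative.dev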

The power of Knative comes from the ability to create multiple resources from a single Service definition. In plain Kubernetes you may define multiple resources to build out everything required for your service to work, including certificates, ingress, deployments, and more. With the Knative Service we have defined only a couple of things, such as which container image we want to use and a name. So what do we get when we create a Service? Let's set it up and take a look.

Kubectl

kubectl create -f hello.yaml

List out the kservice that was created. This is different from a Kubernetes Service, since it uses the Knative Service CRD. We can see our endpoint, which has already been configured with TLS.

$ kubectl get kservice
NAME    URL                                          LATESTCREATED   LATESTREADY   READY   REASON
hello   https://hello.default.knative.pf9.io   hello-world     hello-world   True    

When we list out the Kubernetes services, we can see a "route" service was created; it is a ClusterIP service related to the Knative Route CRD.

$ kubectl get service
NAME                                         TYPE           CLUSTER-IP      EXTERNAL-IP                                         PORT(S)                                      AGE
hello                                        ExternalName   <none>          kourier-internal.kourier-system.svc.cluster.local   80/TCP                                       4h58m
hello-world                                  ClusterIP      10.21.208.174   <none>                                              80/TCP                                       4h58m
hello-world-private                          ClusterIP      10.21.88.220    <none>                                              80/TCP,9090/TCP,9091/TCP,8022/TCP,8012/TCP   4h58m
route-9e6a1819-c308-41c9-83fa-e4795b23bb97   ClusterIP      10.21.199.252   <none>                                              80/TCP                                       4h58m

The certificate is a lot like the cert-manager Certificate; however, in our installation it falls under the Knative Certificate CRD. If you are having difficulty resolving HTTPS, or if you run into certificate issues, this is where you will want to start troubleshooting. Running describe against the certificate should provide Events describing the current issue.

If you are running into issues reaching the endpoint, it could be due to DNS resolution. Another common error is Let's Encrypt rate limiting after repeated failed attempts to issue the certificate. Verify DNS resolution and then recreate the service.

$ kubectl get certificate
NAME                                         READY   REASON
route-9e6a1819-c308-41c9-83fa-e4795b23bb97   True
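
For example, to inspect the certificate's Events using the name from the output above:

kubectl describe certificate route-9e6a1819-c308-41c9-83fa-e4795b23bb97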

Digging into the revision gives us details about the current state of the service. In our example we haven't modified the revision, so it is Generation 1. We are not currently receiving requests against the service, so it has also scaled to 0.

$ kubectl get revision
NAME          CONFIG NAME   K8S SERVICE NAME   GENERATION   READY   REASON   ACTUAL REPLICAS   DESIRED REPLICAS
hello-world   hello                            1            True             0                 0

Once we start seeing requests, the Actual Replicas and Desired Replicas will increase. We can set the minimum and maximum replica count when creating the service, or update it after it is running; a sketch follows the output below. By default the service scales to 0 when it is not receiving traffic.

$ kubectl get revisions
NAME          CONFIG NAME   K8S SERVICE NAME   GENERATION   READY   REASON   ACTUAL REPLICAS   DESIRED REPLICAS
hello-world   hello                            1            True             1                 1
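
Scale bounds are set with annotations on the revision template. Here is a minimal sketch of the relevant part of hello.yaml; the values 1 and 5 are arbitrary, and recent Knative releases also accept the older minScale/maxScale spellings:

spec:
  template:
    metadata:
      annotations:
        # arbitrary example bounds; tune for your workload
        autoscaling.knative.dev/min-scale: "1"
        autoscaling.knative.dev/max-scale: "5"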

Here we can see the Route. If we describe the Route, it looks a little like an Ingress resource: it defines the host and where traffic should go. In the basic setup you should see 100% of traffic going to Revision Name hello-world. We can also split traffic between revisions, as shown after the output below.

$ kubectl get route
NAME    URL                                          READY   REASON
hello   https://hello.default.knative.pf9.io   True    
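
Traffic splitting is configured in the Service spec itself. A sketch, assuming a second revision named hello-world-v2 exists (a hypothetical name):

spec:
  traffic:
    # hello-world-v2 is a hypothetical second revision
    - revisionName: hello-world
      percent: 90
    - revisionName: hello-world-v2
      percent: 10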

Knative CLI

Now that we have gone through how to find resources using kubectl, here are the corresponding Knative CLI commands.

kn service create -f hello.yaml
$ kn service list
NAME    URL                                          LATEST        AGE    CONDITIONS   READY   REASON
hello   https://hello.default.knative.pf9.io   hello-world   5h1m   3 OK / 3     True    
$ kn route list
NAME    URL                                          READY
hello   https://hello.default.knative.pf9.io   True
$ kn revision list
NAME          SERVICE   TRAFFIC   TAGS   GENERATION   AGE    CONDITIONS   READY   REASON
hello-world   hello     100%             1            5h4m   3 OK / 4     True    

Conclusion

Kubernetes is a very powerful tool for developers; however, it is also complicated. Starting developers out with Knative can reduce the overhead of learning how each resource works within Kubernetes. What would normally take multiple resource configurations, or YAML files, can be condensed into a single Service YAML file. Abstracting away many of the configuration options removes blockers, shortens ramp-up time, and increases productivity. Knative provides a lot more than resource abstraction: it also provides feature sets that can modernize your applications without the headache of architecting everything on your own.

Mike Petersen
