In this tutorial, we will walk through the step-by-step process of deploying an NGINX ingress controller on a Kubernetes cluster with kubectl, using the deployment.yaml specification file provided by the upstream NGINX project. There is, however, an easier way to deploy the NGINX ingress controller, using the Helm 3 package manager. If you have Helm 3 installed and configured for your Kubernetes cluster, use this tutorial instead: How to deploy NGINX Ingress Controller on Kubernetes using Helm
This tutorial uses a Platform9 Kubernetes cluster; however, you can follow it to deploy NGINX on any Kubernetes cluster of your choice.
The vast majority of Kubernetes clusters host containers that process incoming requests, from microservices to full web applications. Having these requests arrive at a central location, and then be handed out via Services in Kubernetes, is the most secure way to configure a cluster. That central entry point is an ingress controller.
NGINX is the most widely used ingress controller for Kubernetes clusters. It has most of the features enterprises are looking for, and it will work as an ingress controller for Kubernetes regardless of which cloud, virtualization platform, or Linux operating system your Kubernetes cluster is running on.
Because an ingress controller is a cluster-level component, installing one requires specific configuration to be performed at the cluster level. The NGINX project simplifies this by providing a single deployment yaml file that captures all the required steps for the cluster configuration. This yaml file is linked to from the Kubernetes documentation.
Before we go through the steps to fetch and apply that yaml configuration, let's look at what goes into it. If you'd like to skip this, navigate to the bottom of this section to see the actual steps to apply the configuration and install the NGINX ingress controller.
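If you want to see for yourself what the file defines before applying it, one quick way (a sketch, assuming curl and standard Unix tools are available on your workstation) is to count the kind: fields it contains:

$ curl -s https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.44.0/deploy/static/provider/cloud/deploy.yaml | grep '^kind:' | sort | uniq -c

This prints each resource type in the manifest and how many of each it creates, which mirrors the output you will see when applying it below.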
The recommended configuration for NGINX uses three Kubernetes ConfigMaps: one for the main NGINX configuration options (nginx-configuration), one for exposing TCP services (tcp-services), and one for exposing UDP services (udp-services).
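As an illustration of how the main configuration ConfigMap is used, here is a minimal sketch that sets one global NGINX option. proxy-body-size is a documented ingress-nginx configuration key; the ConfigMap name and namespace must match whatever your controller deployment actually references:

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration    # must match the name the controller is started with
  namespace: ingress-nginx
data:
  proxy-body-size: "10m"       # raise the maximum allowed request body size

Any key you set here changes the behavior of the controller globally, without rebuilding or redeploying the NGINX container itself.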
A Kubernetes service account is required to run NGINX as a service within the cluster. The service account needs to have the following roles: a ClusterRole granting cluster-wide read access to resources such as services, endpoints, secrets, and ingresses, and a namespace-scoped Role for managing the resources the controller itself owns, each attached through a corresponding ClusterRoleBinding and RoleBinding.
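Here is an abridged sketch of what those RBAC objects look like; the real deploy.yaml grants a longer list of resources and verbs, so treat this as illustrating the shape rather than the exact upstream contents:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: ingress-nginx
rules:
- apiGroups: [""]
  resources: ["services", "endpoints", "secrets"]   # read the objects ingresses point at
  verbs: ["get", "list", "watch"]
- apiGroups: ["networking.k8s.io", "extensions"]
  resources: ["ingresses"]                          # watch for Ingress resources to serve
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ingress-nginx
subjects:
- kind: ServiceAccount
  name: ingress-nginx
  namespace: ingress-nginx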
The NGINX deployment yaml specifies which ConfigMaps will be referenced, the container image to use, and any other details about how to run the NGINX ingress controller.
To apply this configuration, run the following command:
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.44.0/deploy/static/provider/cloud/deploy.yaml

Output:
namespace/ingress-nginx created
serviceaccount/ingress-nginx created
clusterrole.rbac.authorization.k8s.io/ingress-nginx created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx created
role.rbac.authorization.k8s.io/ingress-nginx created
rolebinding.rbac.authorization.k8s.io/ingress-nginx created
service/ingress-nginx-controller-admission created
service/ingress-nginx-controller created
deployment.apps/ingress-nginx-controller created
validatingwebhookconfiguration.admissionregistration.k8s.io/ingress-nginx-admission created
serviceaccount/ingress-nginx-admission created
clusterrole.rbac.authorization.k8s.io/ingress-nginx-admission created
role.rbac.authorization.k8s.io/ingress-nginx-admission created
rolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
job.batch/ingress-nginx-admission-create created
job.batch/ingress-nginx-admission-patch created

Once the base configuration is in place, the next step is to expose the NGINX ingress controller to the outside world so it can start receiving connections. If your Kubernetes cluster is running on a public cloud like AWS, GCP, or Azure, this can be done by using a cloud-native load balancer. If, on the other hand, your Kubernetes cluster is running on your own infrastructure in a data center, you can create a service with a NodePort to allow access to the ingress controller. In this guide we will do the latter, as we are deploying NGINX ingress on an on-premises Kubernetes cluster.
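For reference, in the cloud case the manifest applied above already creates a Service of type LoadBalancer in front of the controller. An abridged sketch of its shape (field values trimmed for brevity, not the verbatim upstream file):

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: LoadBalancer             # the cloud provider provisions an external load balancer
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/component: controller
  ports:
  - name: http
    port: 80
    targetPort: http
  - name: https
    port: 443
    targetPort: https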
Using the NGINX-provided NodePort service definition (the service-nodeport.yaml file located in GitHub, which in this release is bundled into the baremetal variant of deploy.yaml) defines a service that runs on ports 80 and 443. It can be applied using a single command line, as done before:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.44.0/deploy/static/provider/baremetal/deploy.yaml

The final step is to make sure the Ingress controller is running.
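One convenient way to block until the controller pod is ready is kubectl wait; the label selector below matches the labels the upstream manifests apply to the controller:

kubectl wait --namespace ingress-nginx \
  --for=condition=ready pod \
  --selector=app.kubernetes.io/component=controller \
  --timeout=120s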
❯ kubectl get pods --all-namespaces -l app.kubernetes.io/name=ingress-nginx
NAMESPACE       NAME                                        READY   STATUS      RESTARTS   AGE
ingress-nginx   ingress-nginx-admission-create-wb4rm        0/1     Completed   0          17m
ingress-nginx   ingress-nginx-admission-patch-dqsnv         0/1     Completed   2          17m
ingress-nginx   ingress-nginx-controller-74fd5565fb-lw6nq   1/1     Running     0          17m

❯ kubectl get services ingress-nginx-controller --namespace=ingress-nginx
NAME                       TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-controller   LoadBalancer   10.21.1.110   10.0.0.3      80:32495/TCP,443:30703/TCP   17m

Now that an ingress controller is running in the cluster, you will need to create the Ingress resources for the services that leverage it, using host-based mapping, URI (path-based) mapping, or both.
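The examples that follow route traffic to a backend Service named hello-world, which nothing above creates; a minimal hypothetical backend Service might look like this (the labels and ports are placeholders you would adapt to your own deployment):

apiVersion: v1
kind: Service
metadata:
  name: hello-world
spec:
  selector:
    app: hello-world             # must match the pod labels of your deployment
  ports:
  - port: 80                     # the port the Ingress backend references
    targetPort: 8080             # the port your container actually listens on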
Sample of a host-based service mapping through an ingress controller using the type “Ingress”:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hello-world
  annotations:
    kubernetes.io/ingress.class: ingress-nginx
spec:
  rules:
  - host: host1.domain.ext
    http:
      paths:
      - backend:
          serviceName: hello-world
          servicePort: 80

Using a URI involves the same basic layout, but specifies more details in the "paths" section of the yaml file, as shown in the sketch below.
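Here is a sketch of such path-based routing, assuming two hypothetical backend Services named hello-api and hello-web:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hello-world-paths
  annotations:
    kubernetes.io/ingress.class: ingress-nginx
spec:
  rules:
  - host: host1.domain.ext
    http:
      paths:
      - path: /api                 # requests for host1.domain.ext/api ...
        backend:
          serviceName: hello-api   # ... go to the hello-api service
          servicePort: 80
      - path: /                    # everything else ...
        backend:
          serviceName: hello-web   # ... goes to the hello-web service
          servicePort: 80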
When TLS encryption is required, you will need to have the certificates stored as secrets inside Kubernetes. This can be done manually or with an open-source tool like cert-manager. The yaml file needs a little extra information to enable TLS (the mapping from port 443 to port 80 is done in the ingress controller):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hello-world
  annotations:
    kubernetes.io/ingress.class: ingress-nginx
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  tls:
  - hosts:
    - host1.domain.ext
    - host2.domain.ext
    secretName: hello-kubernetes-tls
  rules:
  - host: host1.domain.ext
    http:
      paths:
      - backend:
          serviceName: hello-world
          servicePort: 80

With a fully functioning cluster and ingress controller, even a single-node one, you are ready to start building and testing applications just like you would in your production environment, with the same ability to test your configuration files and application traffic routing. You just have some capacity limitations that won't occur on true multi-node clusters.
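As a quick smoke test of the routing (a sketch; substitute a real node IP and the HTTP NodePort your cluster assigned for the placeholders), you can send a request carrying the expected Host header:

$ curl -H "Host: host1.domain.ext" http://<node-ip>:<http-nodeport>/

If the ingress rule matches, NGINX forwards the request to the hello-world service; otherwise it answers from its default backend with a 404.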