In this tutorial, we will go through the step-by-step process of deploying an NGINX ingress controller on a Kubernetes cluster using kubectl and the deployment.yaml specification file provided by the upstream NGINX project. There is an easier way to deploy NGINX ingress, however, using the Helm 3 package manager. If you have Helm 3 installed and configured for your Kubernetes cluster, use this tutorial instead: How to deploy NGINX Ingress Controller on Kubernetes using Helm
This tutorial uses a Platform9 Kubernetes cluster; however, you can follow it to deploy NGINX on any Kubernetes cluster of your choice.
The vast majority of Kubernetes clusters host containers that process incoming requests, whether for microservices or full web applications. Having these requests arrive at a single, central location before being handed off to services inside Kubernetes is the most secure way to configure a cluster. That central entry point is an ingress controller.
NGINX is the most widely used ingress controller for Kubernetes clusters. It has most of the features enterprises look for, and it works as an ingress controller regardless of which cloud, virtualization platform, or Linux operating system your Kubernetes cluster runs on.
Because an ingress controller hooks into the cluster at a low level, it requires specific cluster-wide configuration to be performed as part of installation. The NGINX project simplifies this by providing a single deployment yaml file that captures all the required cluster configuration steps. The yaml file is linked from the Kubernetes documentation.
Before we walk through the steps to fetch and apply that yaml configuration, let's look at what goes inside it. If you'd like to skip this, navigate to the bottom of this section for the actual steps to apply the configuration and install the NGINX ingress controller.
The recommended configuration for NGINX uses three Kubernetes ConfigMaps: one for NGINX's own settings (nginx-configuration), one for exposing TCP services (tcp-services), and one for exposing UDP services (udp-services).
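To make that concrete, here is a minimal sketch of the main NGINX settings ConfigMap, assuming the object names used by the upstream manifests; the proxy-body-size key shown is one of the documented NGINX ingress configuration options:

# Minimal sketch of the NGINX settings ConfigMap (names follow the
# upstream manifests; adjust to match your deployment).
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
data:
  # Each key maps to an NGINX ingress configuration option, e.g.:
  proxy-body-size: "10m"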
A Kubernetes service account is required to run NGINX as a service within the cluster. The service account needs roles that allow it to read ConfigMaps, Endpoints, Secrets, and Services, and to watch and update Ingress resources and their status.
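As a hedged, abridged illustration of what those roles look like, the deployment yaml creates a ClusterRole along these lines (the real manifest grants a longer rule list):

# Abridged sketch of the controller's ClusterRole; the actual
# manifest includes additional resources and verbs.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: ingress-nginx
rules:
- apiGroups: [""]
  resources: ["configmaps", "endpoints", "secrets", "services"]
  verbs: ["get", "list", "watch"]
- apiGroups: ["extensions", "networking.k8s.io"]
  resources: ["ingresses"]
  verbs: ["get", "list", "watch"]
- apiGroups: ["extensions", "networking.k8s.io"]
  resources: ["ingresses/status"]
  verbs: ["update"]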
The NGINX deployment yaml specifies which ConfigMaps to reference, which container image to run, and any other specifics about how the NGINX Ingress Controller should operate.
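As an abridged illustration, the container spec in the v0.44.0 manifest applied below has roughly this shape; the real file pins the image by digest and passes additional flags:

containers:
- name: controller
  image: k8s.gcr.io/ingress-nginx/controller:v0.44.0
  args:
  - /nginx-ingress-controller
  # Points the controller at its settings ConfigMap:
  - --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
  # Used for leader election when running multiple replicas:
  - --election-id=ingress-controller-leader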
To apply this configuration, run the following command:
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.44.0/deploy/static/provider/cloud/deploy.yaml
Output:
namespace/ingress-nginx created
serviceaccount/ingress-nginx created
clusterrole.rbac.authorization.k8s.io/ingress-nginx created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx created
role.rbac.authorization.k8s.io/ingress-nginx created
rolebinding.rbac.authorization.k8s.io/ingress-nginx created
service/ingress-nginx-controller-admission created
service/ingress-nginx-controller created
deployment.apps/ingress-nginx-controller created
validatingwebhookconfiguration.admissionregistration.k8s.io/ingress-nginx-admission created
serviceaccount/ingress-nginx-admission created
clusterrole.rbac.authorization.k8s.io/ingress-nginx-admission created
role.rbac.authorization.k8s.io/ingress-nginx-admission created
rolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
job.batch/ingress-nginx-admission-create created
job.batch/ingress-nginx-admission-patch created
Once the base configuration is in place, the next step is to expose the NGINX Ingress Controller to the outside world so it can start receiving connections. If your Kubernetes cluster runs on a public cloud like AWS, GCP, or Azure, this can be done with a cloud-native load balancer. If, on the other hand, your Kubernetes cluster runs on your own infrastructure in a data center, you can create a NodePort service to allow access to the Ingress Controller. In this guide we will do the latter, as we are deploying NGINX ingress on an on-premises Kubernetes cluster.
The NGINX-provided service-nodeport.yaml file, which is located in GitHub, defines a service that listens on ports 80 and 443. It can be applied with a single command, as before:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.44.0/deploy/static/provider/baremetal/deploy.yaml
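For reference, the NodePort service this creates looks roughly like the following sketch (abridged; Kubernetes assigns the actual node ports at apply time unless you pin them):

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/component: controller
  ports:
  - name: http
    port: 80
    targetPort: http
  - name: https
    port: 443
    targetPort: https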
The final step is to make sure the Ingress Controller is running:
❯ kubectl get pods --all-namespaces -l app.kubernetes.io/name=ingress-nginx
NAMESPACE       NAME                                        READY   STATUS      RESTARTS   AGE
ingress-nginx   ingress-nginx-admission-create-wb4rm        0/1     Completed   0          17m
ingress-nginx   ingress-nginx-admission-patch-dqsnv         0/1     Completed   2          17m
ingress-nginx   ingress-nginx-controller-74fd5565fb-lw6nq   1/1     Running     0          17m
❯ kubectl get services ingress-nginx-controller --namespace=ingress-nginx
NAME                       TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-controller   LoadBalancer   10.21.1.110   10.0.0.3      80:32495/TCP,443:30703/TCP   17m
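At this point you can sanity-check that NGINX is answering on the mapped HTTP node port (32495 in the output above; yours will differ):

❯ curl http://<node-ip>:32495

Replace <node-ip> with the address of any cluster node. Until you define Ingress rules, a "404 Not Found" page served by NGINX is the expected response and means the controller is up and reachable.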
Now that an ingress controller is running in the cluster, you can create the Ingress resources that route traffic to your services, using host-based mapping, URI-based mapping, or both.
Here is a sample of a host-based service mapping through the ingress controller, using the resource type “Ingress”:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hello-world
  annotations:
    kubernetes.io/ingress.class: ingress-nginx
spec:
  rules:
  - host: host1.domain.ext
    http:
      paths:
      - backend:
          serviceName: hello-world
          servicePort: 80
Using a URI involves the same basic layout, but with more detail in the “paths” section of the yaml file; a sketch of a multi-path configuration follows the TLS example below. When TLS encryption is required, you will need to have certificates stored as Secrets inside Kubernetes. This can be done manually, or with an open-source tool like cert-manager. The yaml file needs a little extra information to enable TLS (the mapping from port 443 to port 80 is done in the ingress controller):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hello-world
  annotations:
    kubernetes.io/ingress.class: ingress-nginx
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  tls:
  - hosts:
    - host1.domain.ext
    - host2.domain.ext
    secretName: hello-kubernetes-tls
  rules:
  - host: host1.domain.ext
    http:
      paths:
      - backend:
          serviceName: hello-world
          servicePort: 80
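As noted earlier, URI-based routing reuses the same layout and simply lists multiple entries under “paths”. Here is a minimal sketch, with two hypothetical backend services named web-frontend and api-backend:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hello-world-paths
  annotations:
    kubernetes.io/ingress.class: ingress-nginx
spec:
  rules:
  - host: host1.domain.ext
    http:
      paths:
      - path: /
        backend:
          serviceName: web-frontend   # hypothetical service
          servicePort: 80
      - path: /api
        backend:
          serviceName: api-backend   # hypothetical service
          servicePort: 80

If you create the TLS secret manually rather than with cert-manager, kubectl can build it from an existing certificate and key pair, for example: kubectl create secret tls hello-kubernetes-tls --cert=tls.crt --key=tls.key (the file names here are illustrative).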
With a fully functioning cluster and ingress controller, even a single-node one, you are ready to start building and testing applications just as you would in your production environment, with the same ability to test your configuration files and application traffic routing. You will just have some capacity limitations that you won't encounter on true multi-node clusters.