In this guide, we will walk through the process of deploying an NGINX ingress controller on a Kubernetes cluster using Helm 3, step by step.
The vast majority of Kubernetes clusters are used to host containers, from microservices to full web applications, that process incoming requests. Having these requests come into a central location and then be handed out via services in Kubernetes is the most secure way to configure a cluster. That central incoming point is an ingress controller.
NGINX is the most widely used ingress controller for Kubernetes clusters. It has most of the features enterprises are looking for, and it will work as an ingress controller for Kubernetes regardless of which cloud, virtualization platform, or Linux operating system your Kubernetes cluster is running on.
This tutorial uses a Platform9 Managed Kubernetes cluster; however, you can use it to deploy NGINX on any Kubernetes cluster of your choice.
Because an ingress controller is a cluster-wide component, installing one requires specific configuration to be performed at the cluster level. Let's look at what happens behind the scenes when you install the NGINX ingress controller using Helm. If you'd like to skip this, navigate to the bottom of this section to see the actual steps to install the NGINX ingress controller.
The recommended configuration for NGINX uses three Kubernetes ConfigMaps: one for the base NGINX configuration, one for TCP services, and one for UDP services.
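The Helm chart creates and manages these ConfigMaps for you, but a sketch helps show what they contain. The following is a minimal, hypothetical base-configuration ConfigMap; the name and namespace are placeholders, and the data keys shown (proxy-connect-timeout and proxy-read-timeout) are common NGINX ingress tuning keys, so check the documentation for your controller version before relying on them:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config        # placeholder name; the chart creates its own
  namespace: default        # placeholder namespace
data:
  # Each key/value pair here is folded into the NGINX configuration
  # that the controller generates
  proxy-connect-timeout: "10s"
  proxy-read-timeout: "10s"
```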
A Kubernetes service account is required to run NGINX as a service within the cluster. The service account needs the following RBAC objects bound to it: a ClusterRole granting read access to resources such as services, endpoints, secrets, and ingresses across the cluster, and a ClusterRoleBinding that ties the role to the service account.
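You won't normally write these objects by hand; passing --set rbac.create=true to Helm (as in the install command below) generates them for you. For reference, a minimal, abridged sketch might look like this; the names and namespace are placeholders, and a real chart-generated ClusterRole contains more rules than shown:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: nginx-ingress
rules:
- apiGroups: [""]
  resources: ["services", "endpoints", "secrets", "configmaps"]
  verbs: ["get", "list", "watch"]
- apiGroups: ["networking.k8s.io"]
  resources: ["ingresses"]
  verbs: ["get", "list", "watch"]
- apiGroups: ["networking.k8s.io"]
  resources: ["ingresses/status"]
  verbs: ["update"]   # lets the controller publish its address on Ingress objects
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress
subjects:
- kind: ServiceAccount
  name: nginx-ingress
  namespace: default
```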
To install an NGINX Ingress controller using Helm, add the nginx-stable repository to Helm, then run helm repo update. Once the repository has been added, we can deploy using the chart nginx-stable/nginx-ingress.
```bash
helm repo add nginx-stable https://helm.nginx.com/stable
helm repo update
```

The following command installs the chart with the release name nginx-ingress:
```bash
helm install nginx-ingress nginx-stable/nginx-ingress --set rbac.create=true
```

Make sure the NGINX Ingress controller is running.
```bash
kubectl get pods --all-namespaces -l app=nginx-ingress-nginx-ingress
```

```
NAMESPACE   NAME                                           READY   STATUS    RESTARTS   AGE
default     nginx-ingress-nginx-ingress-5d55d8b9dc-v46bg   1/1     Running   0          2m14s
```

```bash
kubectl get services nginx-ingress-nginx-ingress
```

```
NAME                          TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)                      AGE
nginx-ingress-nginx-ingress   LoadBalancer   10.21.252.165   192.168.86.11   80:30451/TCP,443:31967/TCP   47s
```

Now that an ingress controller is running in the cluster, you will need to create services that leverage it, using host mapping, URI mapping, or both.
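Before creating any Ingress resources, you can sanity-check the controller itself. A request to the load balancer's external IP (192.168.86.11 in the sample output above; substitute your own) should come back with the controller's default 404 response, since no Ingress rules exist yet:

```bash
# No Ingress rules are defined yet, so the controller's default
# server should answer with a 404
curl -i http://192.168.86.11/
```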
Here is a sample of a host-based service mapping through an ingress controller, using the "Ingress" resource type:
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-world
  annotations:
spec:
  ingressClassName: nginx
  rules:
  - host: host1.domain.ext
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: hello-world
            port:
              number: 80
```

Using a URI involves the same basic layout, but specifies more detail in the "paths" section of the YAML file, as sketched below.
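For illustration, a URI-based (path-based) variant might look like the following; the /api path, the hello-api service name, and port 8080 are hypothetical values invented for this example:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-world-paths    # hypothetical name for this sketch
spec:
  ingressClassName: nginx
  rules:
  - host: host1.domain.ext
    http:
      paths:
      - pathType: Prefix
        path: "/api"         # requests under /api go to a separate backend
        backend:
          service:
            name: hello-api  # hypothetical backend service
            port:
              number: 8080
      - pathType: Prefix
        path: "/"            # everything else goes to the web application
        backend:
          service:
            name: hello-world
            port:
              number: 80
```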
When TLS encryption is required, you will need to have certificates stored as secrets inside Kubernetes. This can be done manually or with an open-source tool like cert-manager. The YAML file needs a little extra information to enable TLS (mapping from port 443 to port 80 is done in the ingress controller):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-world
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - host1.domain.ext
    - host2.domain.ext
    secretName: hello-kubernetes-tls
  rules:
  - host: host1.domain.ext
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: hello-world
            port:
              number: 80
```

With a fully functioning cluster and ingress controller, even a single-node one, you are ready to start building and testing applications just as you would in your production environment, with the same ability to test your configuration files and application traffic routing. You just have some capacity limitations that won't exist on true multi-node clusters.
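As a final check, once DNS (or a local /etc/hosts entry) points host1.domain.ext at the controller's external IP, you can verify routing end to end. This sketch reuses the placeholder hostname and the sample external IP from this guide, so substitute your own values:

```bash
# Host-based routing over plain HTTP; the Host header stands in for DNS
curl -H "Host: host1.domain.ext" http://192.168.86.11/

# TLS: --resolve pins the hostname to the external IP without touching DNS.
# Add -k while testing with a self-signed or staging certificate.
curl --resolve host1.domain.ext:443:192.168.86.11 https://host1.domain.ext/
```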