Learning About Kubernetes Admission Controllers and OPA Gatekeeper
Interested in Open Policy Agent and the Gatekeeper Project? In this post we will go over admission controllers, OPA, and Gatekeeper.
The Kubernetes (K8s) platform consists of several components that work together to provide advanced container orchestration and deployment strategies. To support an ever-growing set of requirements, the system is built around extensibility and a modular architecture.
One of those extensibility features is admission controllers. These are software extensions that work alongside K8s and intercept requests to the K8s API after the request is authenticated and authorized, but before the object is persisted.
In this article, we will examine the more technical details of admission controllers and the benefits of gating admission. Then, we will explain the Open Policy Agent (OPA) and show you how its Gatekeeper agent provides first-class integration between OPA and K8s.
Let’s get started.
Understanding Admission Controllers
In simple terms, admission controllers are pieces of software that intercept incoming requests and decide what is allowed to run on your cluster. They are similar to filters or middleware in web frameworks like Spring or Rails, except that their scope is limited to requests that create, delete, or modify objects, or connect to them. Admission controllers are useful for compliance, extra validation, and enforcing security policies.
Admission controllers are a set of modules compiled into the kube-apiserver binary, and individual controllers can be enabled or disabled with the --enable-admission-plugins and --disable-admission-plugins flags (see the sketch after the list below). Depending on your Kubernetes distribution, you may not have the option to configure this list. However, you can leverage the following compiled-in plugins:
- MutatingAdmissionWebhook: When an incoming request arrives, this admission controller calls the registered mutating admission webhooks one by one. These webhooks can modify the objects sent to the API server, for example to enforce custom defaults.
- ValidatingAdmissionWebhook: After the incoming object passes through the mutating admission webhooks and the object schema validation phase, it goes through the registered validating admission webhooks, which can reject the request in order to enforce custom policies.
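For reference, on clusters where you manage the control plane yourself, these flags appear on the kube-apiserver command line. A minimal sketch (the plugin selection here is illustrative, and all other required flags are omitted):
# Illustrative kube-apiserver flags; plugin choices are an example, not a recommendation.
kube-apiserver \
  --enable-admission-plugins=NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook \
  --disable-admission-plugins=AlwaysPullImages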
Webhooks are standalone pieces of software that receive requests from the API server, apply their logic, and respond with the result. Webhooks can run inside the cluster, or they can be external services deployed elsewhere, in which case you provide a url in the webhook configuration instead of a service reference.
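For example, a validating webhook hosted outside the cluster could be registered like this; a minimal sketch, where the name and endpoint are hypothetical:
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: external-demo-webhook
webhooks:
- name: validate.webhooks.example.com
  clientConfig:
    url: "https://webhooks.example.com/validate"  # external server instead of an in-cluster service
    caBundle: <base64-encoded CA certificate that signed the server's cert>
  rules:
  - operations: ["CREATE"]
    apiGroups: [""]
    apiVersions: ["v1"]
    resources: ["pods"]
  admissionReviewVersions: ["v1"]
  sideEffects: None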
It is critical to ensure that the communication between the API server and the webhook server is cryptographically secured (with TLS) and working properly. The API server only communicates with webhooks over HTTPS, so it needs to trust the certificate the webhook presents. You can either bring your own CA, as in the demo below, or use the Kubernetes certificates API to sign certificates for this purpose.
Let’s take a look at an example admission controller webhook.
Example Use Case of an Admission Controller
You can experiment with configuring and deploying webhooks by following this demo. You will need to clone the repository and make sure that you have Go installed. Let’s walk through the basic steps of deploying an admission controller in K8s.
First, you need to create a CA certificate and private key for the webhook server, and then deploy the resources in your Kubernetes cluster.
Generate the private key for the webhook server:
> openssl genrsa -out server-key.pem 2048
Generate the CA cert and private key:
> openssl req -nodes -new -x509 -keyout ca.key -out ca.crt -subj "/CN=Admission Controller Webhook Demo CA"
Generate a Certificate Signing Request (CSR) for the server key and sign it with the private key of the CA. The certificate must carry the DNS name that the API server will use to reach the webhook service (webhook-server.default.svc here, since we deploy into the default namespace); recent Kubernetes versions require that name to appear as a subject alternative name, not just the CN:
> openssl req -new -key server-key.pem -subj "/CN=webhook-server.default.svc" -out server.csr
> openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
    -extfile <(printf "subjectAltName=DNS:webhook-server.default.svc") -out server-key.crt
Signature ok
subject=/CN=webhook-server.default.svc
Getting CA Private Key
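You can verify that the signed certificate chains back to the CA (an optional sanity check):
> openssl verify -CAfile ca.crt server-key.crt
server-key.crt: OK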
Alternatively, the cluster itself can sign certificates: you create a CertificateSigningRequest object and submit it to the Kubernetes certificates API. Note that signer names under the kubernetes.io/ prefix are reserved for built-in signers, so a custom signer such as the hypothetical example.com/webhook-app below requires a matching signer running in your cluster; the rest of this walkthrough continues with the locally signed certificate:
> cat <<EOF | kubectl create -f -
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: webhook-csr
spec:
  groups:
  - system:authenticated
  request: $(cat server.csr | base64 | tr -d '\n')
  signerName: example.com/webhook-app
  usages:
  - digital signature
  - key encipherment
  - server auth
EOF
certificatesigningrequest.certificates.k8s.io/webhook-csr created
Approve the CSR and fetch the signed certificate. Note that the signed certificate is published, base64-encoded, in the status.certificate field; spec.request holds the original CSR:
> kubectl certificate approve webhook-csr
certificatesigningrequest.certificates.k8s.io/webhook-csr approved
> serverCert=$(kubectl get csr webhook-csr -o jsonpath='{.status.certificate}')
Create the TLS secret for the generated keys:
> kubectl create secret tls webhook-server-tls \
--cert "server-key.crt" \
--key "server-key.pem"
Create the webhook configuration using the sample image:
Create a deployment, a service, and a MutatingWebhookConfiguration manifest using the demo image and the secret that we defined earlier. This webhook enforces more secure defaults by requiring containers to run as a non-root user; if you attempt to run a container as root anyway, it will fail to start.
> cat deployment.yaml.template
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webhook-server
  labels:
    app: webhook-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webhook-server
  template:
    metadata:
      labels:
        app: webhook-server
    spec:
      securityContext:
        runAsNonRoot: true
        runAsUser: 1234
      containers:
      - name: server
        image: stackrox/admission-controller-webhook-demo:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 8443
          name: webhook-api
        volumeMounts:
        - name: webhook-tls-certs
          mountPath: /run/secrets/tls
          readOnly: true
      volumes:
      - name: webhook-tls-certs
        secret:
          secretName: webhook-server-tls
---
apiVersion: v1
kind: Service
metadata:
  name: webhook-server
spec:
  selector:
    app: webhook-server
  ports:
  - port: 443
    targetPort: webhook-api
---
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: demo-webhook
webhooks:
- name: webhook-server.default.svc
  clientConfig:
    service:
      name: webhook-server
      namespace: default
      path: "/mutate"
    caBundle: ${CA_PEM_B64}
  rules:
  - operations: [ "CREATE" ]
    apiGroups: [""]
    apiVersions: ["v1"]
    resources: ["pods"]
  admissionReviewVersions: ["v1"]
  sideEffects: None
Deploy it in K8s:
Place the base64-encoded CA certificate into the caBundle field and apply the manifest. The caBundle must contain the CA that signed the webhook's serving certificate (our ca.crt), because that is what the API server uses to verify the TLS connection:
> sed -e 's@${CA_PEM_B64}@'"$(base64 < ca.crt | tr -d '\n')"'@g' <"deployment.yaml.template" | kubectl create -f -
Test with a conflicting manifest:
Create a pod that violates the runAsNonRoot: true policy:
> cat example-pod.yml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-violation
  labels:
    app: pod-with-violation
spec:
  restartPolicy: OnFailure
  securityContext:
    runAsNonRoot: true
    runAsUser: 0
  containers:
  - name: busybox
    image: busybox
    command: ["sh", "-c", "echo running as user $(id -u)"]
> kubectl apply -f example-pod.yml
pod/pod-with-violation created
The pod is admitted, but it fails to start: it both requires a non-root user (runAsNonRoot: true, the default our webhook-server enforces) and asks to run as root (runAsUser: 0), so the kubelet refuses to create the container. You can see this in the CreateContainerConfigError pod status:
> kubectl get pods
NAME READY STATUS RESTARTS AGE
busybox-example 1/1 Running 7 20h
pod-with-violation 0/1 CreateContainerConfigError 0 10s
webhook-server-7c8b68dccc-vjx9g 1/1 Running 0 7m32s
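To see why the kubelet refused to start the container, describe the pod (output abridged; the exact wording varies across Kubernetes versions):
> kubectl describe pod pod-with-violation
...
Warning  Failed  ...  Error: container's runAsUser breaks non-root policy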
Introduction to Open Policy Agent (OPA) and Gatekeeper
Using code to define compliance rules and security policies does not scale well in the long run, because every change has to travel through the full software development lifecycle: inception, triage, design, prioritization, development, testing, deployment, and verification.
Applying security policies and rules using conventional software development is only suitable for small, isolated rules that do not change frequently. Ideally, you want to have a more flexible way to define and deploy those rules without going through the SDLC process. Your security administrators need to be able to enforce policies on the infrastructure components without touching code or recompiling software modules.
This is where Open Policy Agent (OPA) comes into play. OPA is a framework for applying policy decisions in your clusters. You specify policies as code in a high-level declarative language called Rego and push them into a policy engine, which parses the rules and enforces them dynamically.
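For a taste of Rego, here is a small, hypothetical admission rule that rejects containers using the mutable :latest image tag (the input paths follow the AdmissionReview object that OPA receives):
package kubernetes.admission

# Deny any container whose image ends in ":latest" (illustrative rule).
deny[msg] {
  container := input.request.object.spec.containers[_]
  endswith(container.image, ":latest")
  msg := sprintf("container %v must not use the :latest tag", [container.name])
}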
Since OPA is framework-agnostic, you can run it alongside your services as a sidecar, a library, or a daemon. If you want even tighter integration with K8s, you can use OPA Gatekeeper.
OPA Gatekeeper is an agent that integrates OPA with K8s using the admission controller mechanism we explored earlier. Judging by its deployment manifest, it may look like little more than a set of custom resource definitions, but at its core it runs as a validating admission webhook server that enforces policies evaluated by OPA.
If you want to check out Gatekeeper on a fresh K8s cluster, you can install it with the deployment manifest like this:
> kubectl apply -f https://raw.githubusercontent.com/open-policy-agent/gatekeeper/release-3.4/deploy/gatekeeper.yaml
> kubectl wait -n gatekeeper-system pod --all --for=condition=Ready --timeout=3m
Once it’s deployed and ready to go, you can start declaring and enforcing policies. There are two steps involved:
- Declare policy schemas by defining constraint templates.
- Declare constraints whose kind field matches a registered constraint template.
A constraint template is just another YAML manifest that you can apply using the kubectl command. You can try out examples from this official repo.
> kubectl apply -f https://raw.githubusercontent.com/open-policy-agent/gatekeeper/master/demo/basic/templates/k8srequiredlabels_template.yaml
constrainttemplate.templates.gatekeeper.sh/k8srequiredlabels created
This manifest declares a new constraint type for required labels, registered under the kind K8sRequiredLabels.
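Roughly, that template looks like the following (abridged; see the repository for the exact file). The crd section defines the new kind and its parameters, while the targets section carries the Rego that evaluates objects:
apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
  name: k8srequiredlabels
spec:
  crd:
    spec:
      names:
        kind: K8sRequiredLabels
      validation:
        openAPIV3Schema:
          properties:
            labels:
              type: array
              items:
                type: string
  targets:
  - target: admission.k8s.gatekeeper.sh
    rego: |
      package k8srequiredlabels
      violation[{"msg": msg, "details": {"missing_labels": missing}}] {
        provided := {label | input.review.object.metadata.labels[label]}
        required := {label | label := input.parameters.labels[_]}
        missing := required - provided
        count(missing) > 0
        msg := sprintf("you must provide labels: %v", [missing])
      }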
Once it’s deployed, you can enforce specific policies by creating constraints of that kind. For example, the following constraint requires all namespaces to carry the organization name as a label:
> cat label-constraints.yml
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels
metadata:
  name: ns-must-have-org-name
spec:
  match:
    kinds:
    - apiGroups: [""]
      kinds: ["Namespace"]
  parameters:
    labels: ["wpengine"]
You can test this policy by trying to create and apply a new namespace. You will be presented with an error, and the operation will be aborted:
> cat ns-example.yml
apiVersion: v1
kind: Namespace
metadata:
  name: example
> kubectl apply -f ns-example.yml
Error from server ([ns-must-have-org-name] you must provide labels: {"wpengine"}): error when creating "ns-example.yml": admission webhook "validation.gatekeeper.sh" denied the request: [ns-must-have-org-name] you must provide labels: {"wpengine"}
This error means that, under the current namespace policies, you must provide a label with the key wpengine. As written, this is a very restrictive policy, since it blocks every namespace that lacks the label, but the Rego policy language is expressive enough to support far more nuanced rules.
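To satisfy the constraint, the namespace only needs the label key to be present; the value is arbitrary, since this particular template checks for keys only. A minimal compliant manifest:
apiVersion: v1
kind: Namespace
metadata:
  name: example
  labels:
    wpengine: my-org   # any value works; the template checks only that the key exists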
From a security standpoint, applying admission policies this way is more flexible and easier to keep conformant. Once you have declared a finite set of constraint templates, different kinds of constraints can be rolled out through change requests, and each constraint always falls under the rules of its predefined template. If a constraint is invalid, it will be rejected before it is applied and will fail to pass through the CI/CD pipeline.
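Beyond admission-time enforcement, Gatekeeper also periodically audits resources that already exist in the cluster and records violations in each constraint's status field. A sketch of what that looks like (output abridged and illustrative):
> kubectl get k8srequiredlabels ns-must-have-org-name -o yaml
...
status:
  auditTimestamp: "2021-06-01T00:00:00Z"
  violations:
  - kind: Namespace
    name: default
    message: 'you must provide labels: {"wpengine"}'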
Next Steps for using Open Policy Agent
In this article, we showed you how to validate and enforce custom policies for API server requests using admission controllers. We also explained how to inspect, transform, and validate incoming API server requests before they are persisted in the cluster, using mutating and validating admission webhooks.
We then introduced OPA Gatekeeper and explained how it simplifies the lifecycle of creating and maintaining OPA policies within K8s clusters. As we noted above, Gatekeeper provides an extensible policy library along with support for custom constraints and auditing.
If you want to delve deeper into this subject, we recommend reading the official docs and practicing with the sample repos.
Already have a PMK account? Why not give the App Catalog a try using the Open Policy Agent Gatekeeper Helm charts. The chart is hosted at https://open-policy-agent.github.io/gatekeeper/charts.
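A typical Helm-based install looks like this (flags may change between chart versions, so check the chart README for the current incantation):
> helm repo add gatekeeper https://open-policy-agent.github.io/gatekeeper/charts
> helm install gatekeeper/gatekeeper --name-template=gatekeeper --namespace gatekeeper-system --create-namespace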