In the previous tutorials, we explored the Platform9 Managed Kubernetes dashboard, created some clusters, and installed some example applications.
In this tutorial, we are going to expand on those examples by deploying a more complex microservice. The idea is to get you more comfortable with the platform and to show how you can leverage it for more advanced scenarios.
In this example, we are going to see the deployment of a clustered Redis setup for data replication, together with a simple PHP guestbook application that uses it as its datastore.
We assume that you have already set up a Platform9 cluster with at least one node, and the cluster is ready.
Let’s start with the Redis part.
Redis is an in-memory key-value store that is used mainly as a cache service. To set up clustering for data replication, we need a Redis instance that acts as the master, together with additional instances acting as slaves. The guestbook application can then use the master to store data, and Redis will propagate the writes to the slave nodes.
We can initiate a Redis master deployment in a few different ways: using the kubectl tool, the Platform9 UI, or the Kubernetes UI. For convenience, we use the kubectl tool, as it’s the one most commonly used in tutorials.
First, we need to create a Redis cluster deployment. Looking at the Redis documentation here, we need some configuration properties to set up a cluster. We can leverage Kubernetes configmaps to store them and reference them in the deployment spec.
Specifically, we need to save a small bootstrap script and a redis.conf file that will be used to configure the master and slave nodes.
Create a redis-cluster.config.yml file with the following configuration:
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: redis-cluster-config
data:
  update-ip.sh: |
    #!/bin/sh
    sed -i -e "/myself/ s/[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}/${IP}/" /data/nodes.conf
    exec "$@"
  redis.conf: |+
    cluster-enabled yes
    cluster-config-file /data/nodes.conf
    appendonly yes
We define a script that inserts the pod’s IP value into the nodes.conf file. This works around a Redis issue referenced here, and we run the script every time we deploy a new Redis image. Then we have redis.conf, which applies the minimal cluster configuration.
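To make the substitution concrete, here is a quick local dry run of that sed expression against a sample nodes.conf line (the node ID and addresses below are made-up placeholders):
$ export IP=10.1.9.9
$ echo "abc123 10.1.2.3:6379@16379 myself,master - 0 0 1 connected 0-5460" | sed -e "/myself/ s/[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}/${IP}/"
abc123 10.1.9.9:6379@16379 myself,master - 0 0 1 connected 0-5460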
Apply this spec to the cluster:
$ kubectl apply -f redis-cluster.config.yml
Then verify that it exists in the list of configmaps:
$ kubectl get configmaps
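The output should include the new entry with its two data keys, along these lines:
NAME                   DATA   AGE
redis-cluster-config   2      15s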
Next, we need to define a spec for the Redis cluster instances. We can use a Deployment or a StatefulSet to define six (6) instances: three masters, each paired with one slave. A StatefulSet fits well here, since each instance gets a stable identity and its own persistent volume.
Here is the spec:
redis-cluster.statefulset.yml
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis-cluster
spec:
  serviceName: redis-cluster
  replicas: 6
  selector:
    matchLabels:
      app: redis-cluster
  template:
    metadata:
      labels:
        app: redis-cluster
    spec:
      containers:
        - name: redis
          image: redis:5.0.7-alpine
          ports:
            - containerPort: 6379
              name: client
            - containerPort: 16379
              name: gossip
          command: ["/conf/update-ip.sh", "redis-server", "/conf/redis.conf"]
          env:
            - name: IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
          volumeMounts:
            - name: conf
              mountPath: /conf
              readOnly: false
            - name: data
              mountPath: /data
              readOnly: false
      volumes:
        - name: conf
          configMap:
            name: redis-cluster-config
            defaultMode: 0755
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 1Gi
In the above step, we defined a few things:
- An IP environment variable that the update-ip.sh script from our configmap needs. It carries the pod-specific IP address, populated via the Downward API.
- A conf volume that mounts the configmap we defined earlier, plus a data volume that each instance claims through the volumeClaimTemplates section.
With this spec, we can deploy the Redis cluster instances:
$ kubectl apply -f redis-cluster.statefulset.yml
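Before bootstrapping, it’s worth waiting until all six pods report Running. A quick way to check (the -w flag keeps watching for changes):
$ kubectl get pods -l app=redis-cluster -w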
Once we have the deployment ready, we need to perform the last step, which is bootstrapping the cluster. Consulting the documentation here for creating the cluster, we need to get a shell into one of the instances and run the redis-cli --cluster create command. For example, taken from the docs:
$ redis-cli --cluster create \
127.0.0.1:7000 127.0.0.1:7001 \
127.0.0.1:7002 127.0.0.1:7003 \
127.0.0.1:7004 127.0.0.1:7005 \
--cluster-replicas 1
To do that in our case, we need to get the local pod IPs of the instances and feed them to that command.
We can query the IPs using this command:
$ kubectl get pods -l app=redis-cluster -o jsonpath='{range .items[*]}{.status.podIP}:6379 {end}'
So, if we save them in a variable or a file, we can pass them at the end of the redis-cli command:
$ POD_IPS=$(kubectl get pods -l app=redis-cluster -o jsonpath='{range .items[*]}{.status.podIP}:6379 {end}')
Then, we can run the following command:
$ kubectl exec -it redis-cluster-0 -- redis-cli --cluster create --cluster-replicas 1 $POD_IPS
If everything is OK, you will see the following prompt. Enter yes
to accept and continue:
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
........
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
Then, we can verify the cluster state by running the cluster info command:
$ kubectl exec -it redis-cluster-0 -- redis-cli cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:6
cluster_my_epoch:1
cluster_stats_messages_ping_sent:28
cluster_stats_messages_pong_sent:34
cluster_stats_messages_sent:62
cluster_stats_messages_ping_received:29
cluster_stats_messages_pong_received:28
cluster_stats_messages_meet_received:5
cluster_stats_messages_received:62
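As a quick smoke test, we can write a key through one node and read it back. The -c flag makes redis-cli follow the cluster’s MOVED redirections between nodes; the key name here is just an arbitrary example:
$ kubectl exec -it redis-cluster-0 -- redis-cli -c set greeting hello
OK
$ kubectl exec -it redis-cluster-0 -- redis-cli -c get greeting
"hello"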
Before we continue deploying the guestbook app, we need to offer a unified service frontend for the Redis Cluster so that it’s easily discoverable in the cluster.
Here is the service spec:
redis-cluster.service.yml
---
apiVersion: v1
kind: Service
metadata:
  name: redis-master
spec:
  type: ClusterIP
  ports:
    - port: 6379
      targetPort: 6379
      name: client
    - port: 16379
      targetPort: 16379
      name: gossip
  selector:
    app: redis-cluster
We expose the cluster as redis-master
here, as the guestbook app will be looking for a host service to connect to with that name.
Once we apply this service spec, we can move on to deploying and exposing the Guestbook Application:
$ kubectl apply -f redis-cluster.service.yml
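To confirm the service is wired up to the pods, we can list its endpoints; all six pod IPs should appear for both ports:
$ kubectl get endpoints redis-master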
The guestbook application is a simple PHP script that shows a form to submit a message. It attempts to connect to either the redis-master host or the redis-slave hosts. It needs the GET_HOSTS_FROM environment variable set to env, so that it reads the hostnames from the following environment variables:
- REDIS_MASTER_SERVICE_HOST: the hostname of the master
- REDIS_SLAVE_SERVICE_HOST: the hostname of the slave
First, let’s define the deployment spec:
php-guestbook.deployment.yml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: guestbook
spec:
  replicas: 1
  selector:
    matchLabels:
      app: guestbook
  template:
    metadata:
      labels:
        app: guestbook
    spec:
      containers:
        - name: php-redis
          image: gcr.io/google-samples/gb-frontend:v6
          resources:
            requests:
              cpu: 150m
              memory: 150Mi
          env:
            - name: GET_HOSTS_FROM
              value: env
            - name: REDIS_MASTER_SERVICE_HOST
              value: "redis-master"
            - name: REDIS_SLAVE_SERVICE_HOST
              value: "redis-master"
          ports:
            - containerPort: 80
Note that we point both host variables at redis-master, since our Redis cluster handles the replication internally. The code of the gb-frontend image is located here.
Next is the associated service spec:
---
apiVersion: v1
kind: Service
metadata:
  name: guestbook-lb
spec:
  type: NodePort
  ports:
    - port: 80
  selector:
    app: guestbook
Note: NodePort will assign a random high port exposed on the public IP of each node. This gives us a public host:port pair where we can inspect the application.
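To find out which port was assigned, we can query the service spec. The port number below is just an example value, and <node-ip> stands for the public IP of any node in the cluster:
$ kubectl get service guestbook-lb -o jsonpath='{.spec.ports[0].nodePort}'
31452
$ curl http://<node-ip>:31452/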
Here is a screenshot of the app after we deployed it:
Once we have finished experimenting with the application, we can clean up the resources by issuing kubectl delete statements. A convenient way is to delete using labels. For example:
$ kubectl delete statefulset redis-cluster
$ kubectl delete service redis-master
$ kubectl delete deployment guestbook
$ kubectl delete service guestbook-lb
$ kubectl delete configmap redis-cluster-config
$ kubectl delete pvc -l app=redis-cluster
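If you also add an app label to the metadata of each object above (the specs as written only label the pod templates), the whole stack can be removed with two label-based commands, for example:
$ kubectl delete statefulset,service,configmap,pvc -l app=redis-cluster
$ kubectl delete deployment,service -l app=guestbook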