This tutorial provides step-by-step instructions for configuring open source Rook with Ceph storage as a backend for persistent volumes created on your Kubernetes cluster.
Rook is an open source cloud-native storage orchestrator, providing the platform, framework, and support for a diverse set of storage solutions (storage providers) to natively integrate with cloud-native environments.
Rook has support for multiple storage providers. It turns distributed storage software into self-managing, self-scaling, and self-healing storage services. It does this by automating deployment, bootstrapping, configuration, provisioning, scaling, upgrading, migration, disaster recovery, monitoring, and resource management for your storage provider. When used with Kubernetes, Rook uses the facilities provided by the Kubernetes scheduling and orchestration platform to provide a seamless experience for scheduling, lifecycle management, resource management, security, monitoring, and user experience.
Ceph, Cassandra, and NFS are the three most popular storage providers used with Rook. Ceph is a distributed, scalable, open source storage solution for block, object, and shared file system storage. Over the past years, Ceph has evolved to become the standard for open source distributed storage solutions, with years of production deployment in mid-to-large-sized enterprises.
In this tutorial, we will use Rook with Ceph as the persistent storage backend for our Kubernetes cluster.
We will be using a Platform9 Managed Kubernetes cluster for this tutorial; however, you can use this tutorial to configure Rook with Ceph on any other Kubernetes cluster of your choice.
The full Rook documentation can be found here: https://rook.io/docs/rook/v1.5/ceph-quickstart.html
Follow the steps below to configure Rook for a minimal installation. Please note that this configuration is not recommended for production workloads.
Cluster Configuration & Access
Storage Configuration
One of the following storage options must be available on the cluster nodes:
- Raw devices (no partitions or formatted filesystem)
- Raw partitions (no formatted filesystem)
- Persistent Volumes available from a storage class in block mode
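A quick way to check which devices on a node are raw (no filesystem) and therefore usable by Ceph OSDs is to filter the `lsblk -f` output on the FSTYPE column. A minimal sketch, using simplified, illustrative sample output; on a real node, pipe the actual `lsblk -f` output through the same filter:

```shell
# Simplified, illustrative `lsblk -f` style output (NAME and FSTYPE columns only).
# On a real node, run `lsblk -f` and inspect the FSTYPE column directly.
sample='NAME FSTYPE
sda ext4
sdb
sdc ext4'

# Devices with an empty FSTYPE column carry no filesystem and are raw.
raw_devices=$(echo "$sample" | awk 'NR>1 && $2=="" {print $1}')
echo "$raw_devices"
```

In this sample only sdb is raw and would be picked up by the Rook OSD prepare job.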
A Note on LVM dependency
Rook Ceph OSDs have a dependency on the LVM package in the following scenarios:
- OSDs are created on raw devices or partitions
- Encryption is enabled (encryptedDevice: true in the cluster CR)
- A metadata device is specified
To avoid any issues in setting up Ceph on raw devices, install LVM by running the command below:
sudo apt-get install -y lvm2
On your client machine, clone the Rook GitHub repository into an empty directory using the commands below.
mkdir rook-single-node
cd rook-single-node
git clone --single-branch --branch release-1.5 https://github.com/rook/rook.git
This guide uses Rook release 1.5. You can find the latest release here: https://github.com/rook/rook/tree/master
Git Clone Output
Cloning into 'rook'...
remote: Enumerating objects: 211, done.
remote: Counting objects: 100% (211/211), done.
remote: Compressing objects: 100% (146/146), done.
remote: Total 58570 (delta 97), reused 114 (delta 60), pack-reused 58359
Receiving objects: 100% (58570/58570), 36.20 MiB | 850.00 KiB/s, done.
Resolving deltas: 100% (40766/40766), done.
Installing Rook is a two-step process: first the CRDs and Operator are installed, and then the Rook Ceph Cluster is created.
To install the CRDs and Operator run the command below:
cd rook/cluster/examples/kubernetes/ceph
kubectl create -f crds.yaml -f common.yaml -f operator.yaml
CRD and Operator Example Output
customresourcedefinition.apiextensions.k8s.io/cephclusters.ceph.rook.io created
customresourcedefinition.apiextensions.k8s.io/cephclients.ceph.rook.io created
customresourcedefinition.apiextensions.k8s.io/cephrbdmirrors.ceph.rook.io created
customresourcedefinition.apiextensions.k8s.io/cephfilesystems.ceph.rook.io created
customresourcedefinition.apiextensions.k8s.io/cephnfses.ceph.rook.io created
customresourcedefinition.apiextensions.k8s.io/cephobjectstores.ceph.rook.io created
customresourcedefinition.apiextensions.k8s.io/cephobjectstoreusers.ceph.rook.io created
customresourcedefinition.apiextensions.k8s.io/cephobjectrealms.ceph.rook.io created
customresourcedefinition.apiextensions.k8s.io/cephobjectzonegroups.ceph.rook.io created
customresourcedefinition.apiextensions.k8s.io/cephobjectzones.ceph.rook.io created
customresourcedefinition.apiextensions.k8s.io/cephblockpools.ceph.rook.io created
customresourcedefinition.apiextensions.k8s.io/volumes.rook.io created
customresourcedefinition.apiextensions.k8s.io/objectbuckets.objectbucket.io created
customresourcedefinition.apiextensions.k8s.io/objectbucketclaims.objectbucket.io created
namespace/rook-ceph created
clusterrolebinding.rbac.authorization.k8s.io/rook-ceph-object-bucket created
serviceaccount/rook-ceph-admission-controller created
clusterrole.rbac.authorization.k8s.io/rook-ceph-admission-controller-role created
clusterrolebinding.rbac.authorization.k8s.io/rook-ceph-admission-controller-rolebinding created
clusterrole.rbac.authorization.k8s.io/rook-ceph-cluster-mgmt created
role.rbac.authorization.k8s.io/rook-ceph-system created
clusterrole.rbac.authorization.k8s.io/rook-ceph-global created
clusterrole.rbac.authorization.k8s.io/rook-ceph-mgr-cluster created
clusterrole.rbac.authorization.k8s.io/rook-ceph-object-bucket created
serviceaccount/rook-ceph-system created
rolebinding.rbac.authorization.k8s.io/rook-ceph-system created
clusterrolebinding.rbac.authorization.k8s.io/rook-ceph-global created
serviceaccount/rook-ceph-osd created
serviceaccount/rook-ceph-mgr created
serviceaccount/rook-ceph-cmd-reporter created
role.rbac.authorization.k8s.io/rook-ceph-osd created
clusterrole.rbac.authorization.k8s.io/rook-ceph-osd created
clusterrole.rbac.authorization.k8s.io/rook-ceph-mgr-system created
role.rbac.authorization.k8s.io/rook-ceph-mgr created
role.rbac.authorization.k8s.io/rook-ceph-cmd-reporter created
rolebinding.rbac.authorization.k8s.io/rook-ceph-cluster-mgmt created
rolebinding.rbac.authorization.k8s.io/rook-ceph-osd created
rolebinding.rbac.authorization.k8s.io/rook-ceph-mgr created
rolebinding.rbac.authorization.k8s.io/rook-ceph-mgr-system created
clusterrolebinding.rbac.authorization.k8s.io/rook-ceph-mgr-cluster created
clusterrolebinding.rbac.authorization.k8s.io/rook-ceph-osd created
rolebinding.rbac.authorization.k8s.io/rook-ceph-cmd-reporter created
podsecuritypolicy.policy/00-rook-privileged created
clusterrole.rbac.authorization.k8s.io/psp:rook created
clusterrolebinding.rbac.authorization.k8s.io/rook-ceph-system-psp created
rolebinding.rbac.authorization.k8s.io/rook-ceph-default-psp created
rolebinding.rbac.authorization.k8s.io/rook-ceph-osd-psp created
rolebinding.rbac.authorization.k8s.io/rook-ceph-mgr-psp created
rolebinding.rbac.authorization.k8s.io/rook-ceph-cmd-reporter-psp created
serviceaccount/rook-csi-cephfs-plugin-sa created
serviceaccount/rook-csi-cephfs-provisioner-sa created
role.rbac.authorization.k8s.io/cephfs-external-provisioner-cfg created
rolebinding.rbac.authorization.k8s.io/cephfs-csi-provisioner-role-cfg created
clusterrole.rbac.authorization.k8s.io/cephfs-csi-nodeplugin created
clusterrole.rbac.authorization.k8s.io/cephfs-external-provisioner-runner created
clusterrolebinding.rbac.authorization.k8s.io/rook-csi-cephfs-plugin-sa-psp created
clusterrolebinding.rbac.authorization.k8s.io/rook-csi-cephfs-provisioner-sa-psp created
clusterrolebinding.rbac.authorization.k8s.io/cephfs-csi-nodeplugin created
clusterrolebinding.rbac.authorization.k8s.io/cephfs-csi-provisioner-role created
serviceaccount/rook-csi-rbd-plugin-sa created
serviceaccount/rook-csi-rbd-provisioner-sa created
role.rbac.authorization.k8s.io/rbd-external-provisioner-cfg created
rolebinding.rbac.authorization.k8s.io/rbd-csi-provisioner-role-cfg created
clusterrole.rbac.authorization.k8s.io/rbd-csi-nodeplugin created
clusterrole.rbac.authorization.k8s.io/rbd-external-provisioner-runner created
clusterrolebinding.rbac.authorization.k8s.io/rook-csi-rbd-plugin-sa-psp created
clusterrolebinding.rbac.authorization.k8s.io/rook-csi-rbd-provisioner-sa-psp created
clusterrolebinding.rbac.authorization.k8s.io/rbd-csi-nodeplugin created
clusterrolebinding.rbac.authorization.k8s.io/rbd-csi-provisioner-role created
configmap/rook-ceph-operator-config created
deployment.apps/rook-ceph-operator created
Once the Operator and CRDs have been installed, the cluster can be created by running the command below:
kubectl create -f cluster-test.yaml
Example Output
configmap/rook-config-override created
cephcluster.ceph.rook.io/my-cluster created
To validate the installation, run the command below:
kubectl -n rook-ceph get pod
Output similar to the following will be displayed:
NAME READY STATUS RESTARTS AGE
csi-cephfsplugin-provisioner-7cbcfdc5b9-58f82 6/6 Running 0 10m
csi-cephfsplugin-provisioner-7cbcfdc5b9-fpxqm 0/6 Pending 0 10m
csi-cephfsplugin-qqwx5 3/3 Running 0 10m
csi-rbdplugin-277ft 3/3 Running 0 10m
csi-rbdplugin-provisioner-7675f97656-4864t 6/6 Running 0 10m
rook-ceph-mgr-a-5fc6c648b9-swxv7 1/1 Running 0 3m39s
rook-ceph-mon-a-54fd5c8c54-ln6wt 1/1 Running 0 3m52s
rook-ceph-operator-547cd645bc-xmkrs 1/1 Running 3 21m
rook-ceph-osd-prepare-10.128.130.14-2cvpv 0/1 Completed 0 3m38s
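A quick way to spot pods that have not yet settled is to filter the STATUS column of the pod listing. A sketch using a condensed version of the sample output above; on a live cluster, replace the here-string with the real `kubectl -n rook-ceph get pod` output:

```shell
# Condensed sample of `kubectl -n rook-ceph get pod` output (from the run above).
sample='NAME READY STATUS RESTARTS AGE
csi-cephfsplugin-provisioner-7cbcfdc5b9-58f82 6/6 Running 0 10m
csi-cephfsplugin-provisioner-7cbcfdc5b9-fpxqm 0/6 Pending 0 10m
rook-ceph-mgr-a-5fc6c648b9-swxv7 1/1 Running 0 3m39s
rook-ceph-osd-prepare-10.128.130.14-2cvpv 0/1 Completed 0 3m38s'

# Print pods that are neither Running nor Completed (osd-prepare jobs
# legitimately finish in the Completed state).
not_ready=$(echo "$sample" | awk 'NR>1 && $3!="Running" && $3!="Completed" {print $1}')
echo "$not_ready"
```

In the sample above, only the Pending provisioner pod is flagged; on a single-node test cluster a second provisioner replica can remain Pending because it has nowhere else to schedule.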
To use Rook, a storage class must be created in the cluster. To create a storage class for a minimal installation, use the storageclass-test.yaml file found in the Rook GitHub repository under rook/cluster/examples/kubernetes/ceph/csi/rbd.
Example Storage Class for a minimal installation
kubectl apply -f storageclass-test.yaml
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: replicapool
  namespace: rook-ceph # namespace:cluster
spec:
  failureDomain: host
  replicated:
    size: 1
    # Disallow setting pool with replica 1, this could lead to data loss without recovery.
    # Make sure you're *ABSOLUTELY CERTAIN* that is what you want
    requireSafeReplicaSize: false
    # gives a hint (%) to Ceph in terms of expected consumption of the total cluster capacity of a given pool
    # for more info: https://docs.ceph.com/docs/master/rados/operations/placement-groups/#specifying-expected-pool-size
    #targetSizeRatio: .5
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-block
# Change "rook-ceph" provisioner prefix to match the operator namespace if needed
provisioner: rook-ceph.rbd.csi.ceph.com # driver:namespace:operator
parameters:
  # clusterID is the namespace where the rook cluster is running
  # If you change this namespace, also change the namespace below where the secret namespaces are defined
  clusterID: rook-ceph # namespace:cluster
  # If you want to use erasure coded pool with RBD, you need to create
  # two pools. one erasure coded and one replicated.
  # You need to specify the replicated pool here in the `pool` parameter, it is
  # used for the metadata of the images.
  # The erasure coded pool must be set as the `dataPool` parameter below.
  #dataPool: ec-data-pool
  pool: replicapool
  # RBD image format. Defaults to "2".
  imageFormat: "2"
  # RBD image features. Available for imageFormat: "2". CSI RBD currently supports only `layering` feature.
  imageFeatures: layering
  # The secrets contain Ceph admin credentials. These are generated automatically by the operator
  # in the same namespace as the cluster.
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph # namespace:cluster
  csi.storage.k8s.io/controller-expand-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/controller-expand-secret-namespace: rook-ceph # namespace:cluster
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-rbd-node
  csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph # namespace:cluster
  # Specify the filesystem type of the volume. If not specified, csi-provisioner
  # will set default as `ext4`.
  csi.storage.k8s.io/fstype: ext4
  # uncomment the following to use rbd-nbd as mounter on supported nodes
  #mounter: rbd-nbd
allowVolumeExpansion: true
reclaimPolicy: Delete
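Once the storage class exists, pods can request Ceph-backed storage through an ordinary PersistentVolumeClaim that references it. A minimal sketch; the claim name test-claim and the 1Gi size are illustrative:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
spec:
  # Must match the StorageClass name created above
  storageClassName: rook-ceph-block
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```

Apply it with kubectl and verify that the claim reaches the Bound state with kubectl get pvc; at that point the Ceph RBD volume has been provisioned and can be mounted by a pod.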
If LVM is not installed, you may encounter the following error:
Failed to pull image "rook/ceph:v1.5.9": rpc error: code = Unknown desc = Error response from daemon: manifest for rook/ceph:v1.5.9 not found: manifest unknown: manifest unknown
To resolve this issue, install LVM on each node and then restart the server.
cjones@pf9-0108 ceph % kubectl -n rook-ceph get pod
NAME READY STATUS RESTARTS AGE
rook-ceph-operator-547cd645bc-xmkrs 0/1 ImagePullBackOff 0 105s
Then run describe pod to view the pod events:
kubectl describe pod rook-ceph-operator-547cd645bc-xmkrs -n rook-ceph
The Events section will document the likely cause of the issue.
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> default-scheduler Successfully assigned rook-ceph/rook-ceph-operator-547cd645bc-xmkrs to 10.128.130.14
Normal Pulling 75s (x4 over 2m47s) kubelet, 10.128.130.14 Pulling image "rook/ceph:v1.5.9"
Warning Failed 73s (x4 over 2m46s) kubelet, 10.128.130.14 Failed to pull image "rook/ceph:v1.5.9": rpc error: code = Unknown desc = Error response from daemon: manifest for rook/ceph:v1.5.9 not found: manifest unknown: manifest unknown
Warning Failed 73s (x4 over 2m46s) kubelet, 10.128.130.14 Error: ErrImagePull
Normal BackOff 58s (x6 over 2m46s) kubelet, 10.128.130.14 Back-off pulling image "rook/ceph:v1.5.9"
Warning Failed 44s (x7 over 2m46s) kubelet, 10.128.130.14 Error: ImagePullBackOff
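On a busy node the Events list can be long. A small sketch that filters the Events output down to warnings; the sample lines are condensed from the output above, and on a live cluster you would pipe the real `kubectl describe pod` output through the same filter:

```shell
# Condensed sample Events lines from `kubectl describe pod` (from the output above).
events='Normal Pulling 75s kubelet Pulling image "rook/ceph:v1.5.9"
Warning Failed 73s kubelet Error: ErrImagePull
Normal BackOff 58s kubelet Back-off pulling image "rook/ceph:v1.5.9"
Warning Failed 44s kubelet Error: ImagePullBackOff'

# Keep only rows whose Type column is Warning.
warnings=$(echo "$events" | awk '$1=="Warning"')
echo "$warnings"
```

For this failure, both warnings point at the same root cause: the rook/ceph image could not be pulled.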