How To Set Up Rook To Manage Ceph Within Kubernetes

In this article, you will learn how to set up Rook to manage Ceph within Kubernetes using a free Platform9 Managed Kubernetes account.

Rook is an open-source, cloud-native storage solution that delivers production-ready management for file, block, and object storage.

Rook is a set of storage Operators for Kubernetes that turn distributed storage systems into self-managing, self-scaling, self-healing storage services. It automates tasks such as deployment, configuration, scaling, upgrading, monitoring, and resource management for distributed storage systems like Ceph running on top of Kubernetes. It supports multiple storage providers – Ceph, EdgeFS, Cassandra, NFS, YugabyteDB, and CockroachDB – via a dedicated Kubernetes Operator for each one.

With Rook, you can automate resource management, scale and converge your storage clusters, distribute and replicate data to minimize data loss, optimize workloads on commodity hardware, and enable elastic storage in your data center.

Getting Started

To get started, create your free Platform9 Kubernetes account, then select the ‘Deploy Now’ option. Once you have created and verified your account, follow the steps outlined below.

Step 1: Create your free account

We have tested Rook with the following configuration on the cluster:

  1. Platform9 Freedom Plan (a free tier account is required) with three worker nodes and one master node
  2. Each worker node should have at least one free, unformatted disk of at least 10 GiB attached to it.
  3. MetalLB load balancer configured on the bare-metal cluster to expose the optional dashboard.
  4. Flannel or Calico as the CNI.
  5. Worker node size: 2 vCPUs x 8 GB memory (4 vCPUs x 16 GB recommended)
  6. Master node size: 2 vCPUs x 8 GB memory (4 vCPUs x 16 GB recommended)
  7. ‘lvm2’ is required on Ubuntu 16.04. Ubuntu 18.04 comes with lvm2 pre-installed.

Note: There may be additional prerequisites for CentOS.
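On Ubuntu 16.04, the missing lvm2 package can be installed on each worker node with apt before deploying Rook:

```shell
# Install lvm2, required by the Ceph OSDs on Ubuntu 16.04
sudo apt-get update
sudo apt-get install -y lvm2
```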

Deploying Rook To Manage Ceph With the Platform9 Managed Kubernetes App Catalog

Now that we have created an account and deployed a cluster we can move on to the App section of the UI. In the App section you will have access to our App Catalog. The App Catalog allows us to add Helm repositories and deploy applications based on the Helm Charts in the added repositories.

Select Apps from the sidebar navigation. This will bring up the App Catalog section. After a fresh deployment this section will be empty as we have not yet added a Helm Repository. We will need to add a repository to populate this section with options.

Select Repositories. In this section you can add public or private Helm Repositories. Select +Add New Repository.

Add the Rook Helm Chart repository. We are pulling the Helm Chart from the official Rook release repository.
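If you prefer the CLI, the same repository can be added with Helm directly; the URL below is the official Rook release chart repository at the time of this writing:

```shell
# Add the official Rook release chart repository and refresh the local index
helm repo add rook-release https://charts.rook.io/release
helm repo update
```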

Now that we have added a repository we can navigate back to the App Catalog section and start deploying apps.

Fill in the required information. In our example we are deploying “rook-ceph” on our cluster “rook-ceph” in the namespace “rook-ceph.” We leave the Values section set to the defaults. If you have different needs in production, this is where you can tune the Rook Operator options. Select Deploy.

After the deployment has finished we will end up back at the App Catalog section. Next we will deploy the rook-ceph-cluster App, which will configure our cluster. Select Deploy.

In this section we deploy rook-ceph-cluster on our rook-ceph cluster in the namespace rook-ceph. We select the latest version of the chart available, which at the time of this writing is v1.7.4. In the values file, update the toolbox option on line 195 to enable the Toolbox by changing the value from false to true. Select Deploy.
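The relevant fragment of the rook-ceph-cluster chart values looks like the following (a sketch; the exact line position may differ between chart versions):

```yaml
# Deploy the Rook toolbox pod, which provides a shell with the Ceph CLI
toolbox:
  enabled: true   # chart default is false; set to true to enable the Toolbox
```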

Now that we have deployed both Apps, we can view and edit them in the Deployed Apps section. Select Deployed Apps, then select the Namespace where we deployed the Apps.

If we navigate to the Storage Class section in the side bar Navigation we can view the Storage Classes that were configured by default.
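The same information is available from the CLI; with the chart defaults you should typically see classes for block, filesystem, and object storage:

```shell
# List the StorageClasses created by the rook-ceph-cluster chart
kubectl get storageclass
```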

The applications are deployed using Helm. If you have Helm installed you can view information about the deployments via the CLI. Here is an example listing of our Deployments in the rook-ceph namespace.

$ helm list -n rook-ceph
NAME                NAMESPACE   REVISION    UPDATED                                 STATUS      CHART                       APP VERSION
rook-ceph           rook-ceph   1           2021-09-29 23:06:17.539511574 +0000 UTC deployed    rook-ceph-v1.7.4                       
rook-ceph-cluster   rook-ceph   1           2021-09-29 23:09:45.486059181 +0000 UTC deployed    rook-ceph-cluster-v1.7.4               
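The Ceph command output below was captured inside the Rook toolbox pod we enabled earlier. A shell in the toolbox can be opened with kubectl; the deployment name shown is the chart default:

```shell
# Open an interactive shell in the Rook toolbox pod
kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- bash
```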
[root@rook-ceph-tools-96c99fbf-n8h7v /]# ceph status
  cluster:
    id:     9cc7902a-f1bd-493a-a65c-b199297f9546
    health: HEALTH_WARN
            mon a is low on available space

  services:
    mon: 3 daemons, quorum a,b,c (age 19m)
    mgr: a(active, since 17m)
    mds: 1/1 daemons up, 1 hot standby
    osd: 3 osds: 3 up (since 18m), 3 in (since 19m)
    rgw: 1 daemon active (1 hosts, 1 zones)

  data:
    volumes: 1/1 healthy
    pools:   11 pools, 192 pgs
    objects: 246 objects, 12 KiB
    usage:   85 MiB used, 300 GiB / 300 GiB avail
    pgs:     192 active+clean

  io:
    client:   1.3 KiB/s rd, 170 B/s wr, 2 op/s rd, 0 op/s wr
[root@rook-ceph-tools-96c99fbf-n8h7v /]# ceph osd tree
ID  CLASS  WEIGHT   TYPE NAME               STATUS  REWEIGHT  PRI-AFF
-1         0.29306  root default                                     
-7         0.09769      host 192-168-86-72                           
 0    hdd  0.09769          osd.0               up   1.00000  1.00000
-3         0.09769      host 192-168-86-73                           
 1    hdd  0.09769          osd.1               up   1.00000  1.00000
-5         0.09769      host 192-168-86-74                           
 2    hdd  0.09769          osd.2               up   1.00000  1.00000
[root@rook-ceph-tools-96c99fbf-n8h7v /]# ceph osd status
ID  USED   AVAIL  WR OPS  WR DATA  RD OPS  RD DATA  STATE
 0  28.2M  99.9G      0        0       2      105   exists,up  
 1  28.3M  99.9G      0        0       1        0   exists,up  
 2  28.3M  99.9G      0        0       0        0   exists,up 

Create a test PVC from the StorageClass

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: ceph-block

Validate that the PVC is bound to a PV from the ceph-block StorageClass

$ kubectl get pv,pvc
NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM             STORAGECLASS   REASON   AGE
persistentvolume/pvc-436f8ba8-f31f-4a46-b265-761fb74d9ec1   5Gi        RWO            Delete           Bound    default/rbd-pvc   ceph-block              2s

NAME                            STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/rbd-pvc   Bound    pvc-436f8ba8-f31f-4a46-b265-761fb74d9ec1   5Gi        RWO            ceph-block     4s
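To exercise the volume, mount the claim in a pod. The sketch below is illustrative: the pod name, image, and mount path are placeholders, while the claimName matches the PVC created above.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: rbd-test-pod          # illustrative name
spec:
  containers:
    - name: app
      image: busybox:1.34     # any image works; busybox is illustrative
      command: ["sh", "-c", "echo hello > /data/hello.txt && sleep 3600"]
      volumeMounts:
        - name: data
          mountPath: /data    # the PV is mounted here inside the container
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: rbd-pvc    # the claim created above
```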

Next Steps

For more reads like this one, visit our blog page or subscribe for up-to-date news, projects, and related content.
