In this article, you will learn how to set up Rook to manage Ceph within Kubernetes using a free Platform9 Managed Kubernetes account.
Rook is an open-source, cloud-native solution that delivers production-ready management for file, block, and object storage.
Rook is a set of storage Operators for Kubernetes that turn distributed storage systems into self-managing, self-scaling, self-healing storage services. It automates tasks such as deployment, configuration, scaling, upgrading, monitoring, and resource management for distributed storage such as Ceph on top of Kubernetes. It supports multiple storage providers – Ceph, EdgeFS, Cassandra, NFS, YugabyteDB, and CockroachDB – via a Kubernetes Operator for each one.
With Rook, you can automate resource management, scale and converge your storage clusters, distribute and replicate data to minimize data loss, optimize workloads on commodity hardware, and enable elastic storage in your data center.
To get started, create your free Platform9 Kubernetes account by entering your email below, then select the ‘Deploy Now’ option. Once you have created and verified your account, follow the steps outlined below.
Step 1: Create your free account
We have tested Rook with the following configuration on the cluster:
- Platform9 Freedom Plan (a free tier account is required) with three worker nodes and one master node
- Each worker node should have at least one free unformatted disk of size 10GiB attached to it.
- MetalLB load balancer configured on the bare metal cluster to enable the optional dashboard.
- Flannel or Calico for CNI.
- Worker node size: 2 vCPUs x 8GB memory (4 vCPUs x 16GB recommended)
- Master node size: 2 vCPUs x 8GB memory (4 vCPUs x 16GB recommended)
- ‘lvm2’ is required on Ubuntu 16.04. Ubuntu 18.04 comes pre-installed with lvm2.
Note: There may be additional prerequisites for CentOS.
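The disk and lvm2 prerequisites can be verified from each worker node before deploying. A minimal sketch for Ubuntu (device names will differ on your hosts):

```shell
# Verify lvm2 is present (required by the Ceph OSDs); install it if missing.
dpkg -s lvm2 >/dev/null 2>&1 || sudo apt-get install -y lvm2

# List block devices with filesystem info. A disk that Rook can consume
# shows an empty FSTYPE column (no filesystem and no partitions on it).
lsblk -f
```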
Deploying Rook To Manage Ceph With the Platform9 Managed Kubernetes App Catalog
Now that we have created an account and deployed a cluster, we can move on to the Apps section of the UI. In the Apps section you will have access to our App Catalog. The App Catalog allows us to add Helm repositories and deploy applications based on the Helm Charts in the added repositories.
Select Apps from the sidebar navigation. This will bring up the App Catalog section. After a fresh deployment this section will be empty as we have not yet added a Helm Repository. We will need to add a repository to populate this section with options.
Select Repositories. In this section you can add public or private Helm repositories. Select +Add New Repository.
Add the Rook Helm Chart repository. We are pulling the Helm Chart from Rook.io.
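If you prefer the CLI, the same repository can be added with Helm directly. A sketch, assuming you name the repository rook-release:

```shell
# Add the official Rook chart repository and refresh the local index.
helm repo add rook-release https://charts.rook.io/release
helm repo update

# Confirm the Rook charts are now searchable.
helm search repo rook-release
```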
Now that we have added a repository we can navigate back to the App Catalog section and start deploying apps.
Fill in the required information. In our example we are deploying “rook-ceph” on our cluster “rook-ceph” in the namespace “rook-ceph.” We leave the Values section set to the defaults; if you have different needs in production, this is where you can tune the Rook Operator options. Then deploy the chart.
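The equivalent operator deployment can be sketched with the Helm CLI, assuming the repository was added under the name rook-release:

```shell
# Install the Rook operator chart into its own namespace.
# --version pins the chart release; omit it to take the latest.
helm install rook-ceph rook-release/rook-ceph \
  --namespace rook-ceph --create-namespace \
  --version v1.7.4
```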
After the deployment has finished we will end up back at the App Catalog section. Next we will deploy the rook-ceph-cluster App, which will configure our Ceph cluster.
In this section we deploy rook-ceph-cluster on the rook-ceph cluster in the rook-ceph namespace. We select the latest version of the chart available, which at the time of this writing is v1.7.4. In the values file we enable the Toolbox by changing the toolbox enabled option (on line 195 of the values file) from false to true.
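The same change can be made from the Helm CLI without editing the values file by hand, assuming the rook-release repository name from earlier:

```shell
# Install the cluster chart into the same namespace as the operator,
# overriding only the toolbox flag on top of the default values.
helm install rook-ceph-cluster rook-release/rook-ceph-cluster \
  --namespace rook-ceph \
  --version v1.7.4 \
  --set toolbox.enabled=true
```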
Now that we have deployed both Apps, we can view and edit them in the Deployed Apps section. Select Deployed Apps, then select the Namespace where we deployed the Apps.
If we navigate to the Storage Class section in the side bar Navigation we can view the Storage Classes that were configured by default.
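The same StorageClasses can be listed from the CLI; the exact names depend on the chart's default values:

```shell
# List all StorageClasses, then inspect the block storage class
# (named ceph-block by default in the rook-ceph-cluster chart).
kubectl get storageclass
kubectl describe storageclass ceph-block
```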
The applications are deployed using Helm, so if you have Helm installed you can view information about the deployments via the CLI. Here is an example listing of our deployments in the rook-ceph namespace:
```
$ helm list -n rook-ceph
NAME               NAMESPACE  REVISION  UPDATED                                  STATUS    CHART                     APP VERSION
rook-ceph          rook-ceph  1         2021-09-29 23:06:17.539511574 +0000 UTC  deployed  rook-ceph-v1.7.4
rook-ceph-cluster  rook-ceph  1         2021-09-29 23:09:45.486059181 +0000 UTC  deployed  rook-ceph-cluster-v1.7.4
```
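The ceph commands that follow are run from inside the toolbox pod we enabled earlier. You can open a shell in it with (assuming the chart's default deployment name, rook-ceph-tools):

```shell
# kubectl resolves deploy/rook-ceph-tools to one of its running pods.
kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- bash
```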
```
[root@rook-ceph-tools-96c99fbf-n8h7v /]# ceph status
  cluster:
    id:     9cc7902a-f1bd-493a-a65c-b199297f9546
    health: HEALTH_WARN
            mon a is low on available space

  services:
    mon: 3 daemons, quorum a,b,c (age 19m)
    mgr: a(active, since 17m)
    mds: 1/1 daemons up, 1 hot standby
    osd: 3 osds: 3 up (since 18m), 3 in (since 19m)
    rgw: 1 daemon active (1 hosts, 1 zones)

  data:
    volumes: 1/1 healthy
    pools:   11 pools, 192 pgs
    objects: 246 objects, 12 KiB
    usage:   85 MiB used, 300 GiB / 300 GiB avail
    pgs:     192 active+clean

  io:
    client:   1.3 KiB/s rd, 170 B/s wr, 2 op/s rd, 0 op/s wr
```
```
[root@rook-ceph-tools-96c99fbf-n8h7v /]# ceph osd tree
ID  CLASS  WEIGHT   TYPE NAME                STATUS  REWEIGHT  PRI-AFF
-1         0.29306  root default
-7         0.09769      host 192-168-86-72
 0    hdd  0.09769          osd.0                up   1.00000  1.00000
-3         0.09769      host 192-168-86-73
 1    hdd  0.09769          osd.1                up   1.00000  1.00000
-5         0.09769      host 192-168-86-74
 2    hdd  0.09769          osd.2                up   1.00000  1.00000
```
```
[root@rook-ceph-tools-96c99fbf-n8h7v /]# ceph osd status
ID  HOST           USED   AVAIL  WR OPS  WR DATA  RD OPS  RD DATA  STATE
 0  192.168.86.72  28.2M  99.9G       0        0       2      105  exists,up
 1  192.168.86.73  28.3M  99.9G       0        0       1        0  exists,up
 2  192.168.86.74  28.3M  99.9G       0        0       0        0  exists,up
```
Create a test PVC from the StorageClass:
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: ceph-block
```
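Assuming the manifest above is saved as rbd-pvc.yaml (the filename is our choice), apply it with:

```shell
# Create the PVC in the current namespace (default, in our example).
kubectl apply -f rbd-pvc.yaml
```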
Validate that the PVC is bound to a PV from the ceph-block StorageClass:
```
$ k get pv,pvc
NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM             STORAGECLASS   REASON   AGE
persistentvolume/pvc-436f8ba8-f31f-4a46-b265-761fb74d9ec1   5Gi        RWO            Delete           Bound    default/rbd-pvc   ceph-block              2s

NAME                            STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/rbd-pvc   Bound    pvc-436f8ba8-f31f-4a46-b265-761fb74d9ec1   5Gi        RWO            ceph-block     4s
```
For more reads like this one, visit our blog page or subscribe for up-to-date news, projects, and related content in real time.