
How to Deploy Rook with Ceph as a Storage Backend for your Kubernetes Cluster using CSI

This tutorial provides step-by-step instructions for configuring open-source Rook with Ceph storage as a backend for persistent volumes created on your Kubernetes cluster.

Background

Rook is an open source cloud-native storage orchestrator, providing the platform, framework, and support for a diverse set of storage solutions (storage providers) to natively integrate with cloud-native environments.

Rook has support for multiple storage providers. It turns distributed storage software into self-managing, self-scaling, and self-healing storage services. It does this by automating deployment, bootstrapping, configuration, provisioning, scaling, upgrading, migration, disaster recovery, monitoring, and resource management for your storage provider. When used with Kubernetes, Rook uses the facilities provided by the Kubernetes scheduling and orchestration platform to provide a seamless experience for scheduling, lifecycle management, resource management, security, monitoring, and user experience.

Ceph, Cassandra, and NFS are the three most popular storage providers used with Rook. Ceph is a distributed, scalable, open source storage solution for block, object, and shared file system storage. Ceph has evolved over the past several years to become the standard for open source distributed storage, with years of production deployments in mid-sized to large enterprises.

In this tutorial, we will use Rook with Ceph as the persistent storage backend for our Kubernetes cluster.

We will be using a Platform9 Managed Kubernetes cluster for this tutorial; however, you can follow the same steps to configure Rook with Ceph on any other Kubernetes cluster of your choice.

The full Rook documentation can be found here: https://rook.io/docs/rook/v1.5/ceph-quickstart.html. The current project status is described in the Rook GitHub repository.

Prerequisites

Follow the steps below to configure Rook for a minimal installation. Please note that this configuration is not recommended for production workloads.

Cluster Configuration & Access

  • A Kubernetes cluster.
  • A kubectl installation on a machine that has access to your Kubernetes cluster, with that cluster configured as the current context.
  • The ability to clone the Rook GitHub repository on the same machine that has kubectl installed.
  • For a test deployment: a single-node cluster, or a cluster with one master and one worker. Each node in the cluster requires a mounted, unformatted volume and LVM installed. Cluster settings: Allow Workloads on Masters: Enabled; CNI: Calico.

Storage Configuration

One of the following storage options must be available on the cluster nodes; a quick check is shown after the list:

  1. Raw devices (no partitions or formatted filesystems)
  2. Raw partitions (no formatted filesystem)
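
A quick way to confirm that a device or partition has no filesystem is to check the FSTYPE column of lsblk on each node; a minimal sketch:

    # Devices or partitions with an empty FSTYPE column carry no filesystem
    # and can be consumed by Ceph
    lsblk -f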

A Note on LVM dependency

Rook Ceph OSDs have a dependency on the LVM package in the following scenarios:

  • OSDs are created on raw devices or partitions
  • Encryption is enabled (encryptedDevice: true in the cluster CR)
  • A metadata device is specified

Step 1 - Install LVM

To avoid issues when setting up Ceph on raw devices, install LVM by running the appropriate command for your distribution.

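A minimal sketch, assuming the standard lvm2 package on each distribution:

    # Ubuntu
    sudo apt-get install -y lvm2

    # CentOS
    sudo yum install -y lvm2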

Step 2 - Clone the Rook GitHub Repo

On your client machine, clone the Rook GitHub repository into an empty directory using the command below.

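A sketch based on the Rook v1.5 quickstart; the release branch used here (v1.5.9, matching the image tag referenced later in this guide) is an assumption and can be swapped for the release you need:

    # Clone only the release branch to keep the checkout small
    git clone --single-branch --branch v1.5.9 https://github.com/rook/rook.git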

This guide uses Rook release 1.5; you can find the latest release here: https://github.com/rook/rook/tree/master


Step 3 - Install Rook

Installing Rook is a two-step process: first the CRDs and Operator are installed, and then the Rook Ceph Cluster is created.

To install the CRDs and Operator, run the command below:

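A sketch following the Rook v1.5 quickstart; the manifest paths assume the repository was cloned as in Step 2:

    # The example manifests live under cluster/examples/kubernetes/ceph in the v1.5 tree
    cd rook/cluster/examples/kubernetes/ceph
    kubectl create -f crds.yaml -f common.yaml -f operator.yaml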

kubectl prints a confirmation line for each CRD and resource as it is created.

Once the Operator and CRDs have been installed, the cluster can be created by running the command below:

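A sketch using the example manifests shipped with Rook v1.5; cluster-test.yaml is the minimal single-node example, while cluster.yaml is the standard multi-node variant (which one to apply depends on your cluster layout):

    # Run from rook/cluster/examples/kubernetes/ceph
    kubectl create -f cluster-test.yaml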


To validate the installation, run the command below:

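A sketch of the usual validation step from the Rook quickstart, which lists the pods in the rook-ceph namespace:

    kubectl -n rook-ceph get pod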

Once the cluster is up, all pods in the rook-ceph namespace should report a Running status, with the OSD prepare pods showing Completed.

Step 4 - Storage Class

To consume Rook storage, a storage class must be created in the cluster. To create a storage class for a minimal installation, use the storageclass-test.yaml found in the Rook GitHub repository under rook/cluster/examples/kubernetes/ceph/csi/rbd.

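A sketch, assuming the default layout of the Rook v1.5 repository, where storageclass-test.yaml defines a single-replica CephBlockPool together with the rook-ceph-block StorageClass:

    # Run from rook/cluster/examples/kubernetes/ceph
    kubectl create -f csi/rbd/storageclass-test.yaml

PersistentVolumeClaims that reference the rook-ceph-block storage class will then be provisioned through the Ceph CSI RBD driver.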

Known Error

Failed to pull image

If LVM is not installed, you may encounter the following error:

Failed to pull image "rook/ceph:v1.5.9": rpc error: code = Unknown desc = Error response from daemon: manifest for rook/ceph:v1.5.9 not found: manifest unknown: manifest unknown

Resolution

To resolve this issue, install LVM on each node and then restart the server.

How to Diagnose

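A sketch, assuming the default rook-ceph namespace; look for pods that are not Running (for example, stuck in ImagePullBackOff):

    kubectl -n rook-ceph get pods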

Then run kubectl describe against the failing pod to view the pod events:

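A sketch; the pod name below is a placeholder for the failing pod identified above:

    kubectl -n rook-ceph describe pod <failing-pod-name>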

The Events section will document the likely cause of the issue.

