In this tutorial, we will walk through the step-by-step process to install and configure OpenEBS as a storage backend for your Kubernetes cluster.
OpenEBS is one of the leading open-source projects for cloud-native storage for Kubernetes. OpenEBS adopts the Container Attached Storage (CAS) standard, where each workload is provided with a dedicated storage controller.
OpenEBS provides granular policies per stateful workload, high availability, native HCI on Kubernetes, and several other benefits. Since it is open source and runs completely in userspace, it is highly portable across operating systems and platforms, and it fits well as a cloud-native technology for building a cloud-vendor-agnostic stack. It is also completely intent-driven, inheriting the same principles that make Kubernetes easy to use.
We will be using a Platform9 Managed Kubernetes Free Tier cluster for this tutorial; however, you can follow this tutorial to configure OpenEBS on any other Kubernetes cluster of your choice.
Step 1 - Install iSCSI packages

Before installing OpenEBS, you first need to install iSCSI-related packages on all the Kubernetes cluster nodes. For Ubuntu 18+ nodes, here are the steps:
```
sudo apt-get update
sudo apt-get install open-iscsi
sudo systemctl enable --now iscsid
```
To ensure that the iSCSI services are functioning correctly, check the output of the following commands:
```
sudo cat /etc/iscsi/initiatorname.iscsi
systemctl status iscsid
```
For CentOS/RedHat-based nodes, here are the steps to install the iSCSI packages:
```
yum install iscsi-initiator-utils -y
```
To ensure that the iSCSI services are functioning correctly, check the output of the following commands:
```
cat /etc/iscsi/initiatorname.iscsi
systemctl status iscsid
```
For other operating systems, follow the steps mentioned here for each operating system/cloud provider.
Step 2 - Install OpenEBS
You can either choose to deploy OpenEBS components in the default namespace on your Kubernetes cluster, or in a custom namespace created specifically for OpenEBS related pods. The latter is the recommended option, and is what we will do in this guide.
```
kubectl create ns openebs
```
Next, we will add the OpenEBS repository to our Helm 3 package manager and then deploy the associated Helm chart, all using the Helm CLI client.
```
helm repo add openebs https://openebs.github.io/charts
helm repo update
helm install --namespace openebs openebs openebs/openebs
```
This installs the OpenEBS pods with the default settings. You can modify the Helm chart values by referring to the Custom Installation Mode section.
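If you want to customize the installation instead of taking the defaults, one common approach (sketched below using standard Helm 3 commands; custom-values.yaml is just an example file name) is to export the chart's default values, edit the settings you care about, and install with your overrides:
```
# Export the chart's default values to see which settings are available
helm show values openebs/openebs > custom-values.yaml

# After editing custom-values.yaml, install with your overrides
helm install --namespace openebs openebs openebs/openebs -f custom-values.yaml
```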
Ensure that all the pods in the openebs namespace are in a Running state:
```
kubectl get pods -n openebs
```
Output:
```
NAME                                         READY   STATUS    RESTARTS   AGE
cstor-disk-pool-579u-559767cb7d-jp9t7        3/3     Running   0          6d5h
cstor-disk-pool-flf6-698b9fd475-n9968        3/3     Running   0          6d5h
cstor-disk-pool-t4qa-568c98dc94-vstmt        3/3     Running   0          6d5h
openebs-admission-server-66974b6ffd-87tjx    1/1     Running   0          6d5h
openebs-apiserver-6c4d9f4f9d-7smn2           1/1     Running   0          6d5h
openebs-localpv-provisioner-bcd5b8b5-ngzq4   1/1     Running   0          6d5h
openebs-ndm-mnjpp                            1/1     Running   0          6d5h
openebs-ndm-operator-778f9c566-wqfp4         1/1     Running   0          6d5h
openebs-ndm-r7wgg                            1/1     Running   0          6d5h
openebs-ndm-x4plz                            1/1     Running   0          6d5h
openebs-provisioner-57b7dfbc88-bttqw         1/1     Running   0          6d5h
openebs-snapshot-operator-69bb776f8-kz2ss    2/2     Running   0          6d5h
```
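If you prefer to wait for readiness from a script rather than re-running the command above, a standard kubectl wait invocation (a sketch; the 300-second timeout is an arbitrary choice) will block until every pod in the namespace reports Ready:
```
kubectl wait --namespace openebs --for=condition=Ready pod --all --timeout=300s
```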
Ensure that the default storage classes have been created:
```
kubectl get sc
```
Output:
```
NAME                        PROVISIONER                                                 RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
openebs-device              openebs.io/local                                            Delete          WaitForFirstConsumer   false                  6d5h
openebs-hostpath            openebs.io/local                                            Delete          WaitForFirstConsumer   false                  6d5h
openebs-jiva-default        openebs.io/provisioner-iscsi                                Delete          Immediate              false                  6d5h
openebs-snapshot-promoter   volumesnapshot.external-storage.k8s.io/snapshot-promoter   Delete          Immediate              false                  6d5h
```
The NDM DaemonSet creates a Kubernetes BlockDevice custom resource for each block device discovered on your Kubernetes cluster nodes, with two exceptions.
The following command lists the BlockDevice custom resources:
```
kubectl get blockdevice -n openebs
```
Output:
```
NAME                                           NODENAME         SIZE          CLAIMSTATE   STATUS   AGE
blockdevice-11468d388afb4f901a2a0be368cf4ccd   10.128.146.28    10736352768   Claimed      Active   6d5h
blockdevice-e925dc2fb9192244050b3109ce521216   10.128.146.106   10736352768   Claimed      Active   6d5h
blockdevice-ea8eec503644998e92c4159ad0dfc4ed   10.128.146.145   10736352768   Claimed      Active   6d5h
```
cStor is the recommended option for getting additional workload resiliency via OpenEBS. It provides enterprise-ready features such as synchronous data replication, snapshots, clones, thin provisioning, high data resiliency, data consistency, and on-demand increases in capacity or performance.
The core function of cStor is to provide iSCSI block storage using the locally attached disks/cloud volumes.
Additional details can be found here.
To create a cStor storage pool, we have to provide the list of block devices obtained in the output above.
A sample YAML file is already present in the ./openEBS/yaml folder of the OpenEBS repo; please clone the repo to use it. You will have to edit the YAML file and add the block devices listed by the command below:
```
kubectl get blockdevice -o jsonpath='{ range .items[*]} {.metadata.name}{"\n"}{end}'
```
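For reference, with SPC-based cStor provisioning the pool definition in cstor.yaml generally looks something like the sketch below. The pool name cstor-disk-pool is illustrative, the block device names are taken from the example output earlier in this guide, and the file shipped in the repo remains the source of truth:
```
apiVersion: openebs.io/v1alpha1
kind: StoragePoolClaim
metadata:
  name: cstor-disk-pool
spec:
  name: cstor-disk-pool
  type: disk
  poolSpec:
    # Available pool types: striped, mirrored, raidz, raidz2
    poolType: striped
  blockDevices:
    blockDeviceList:
    # Replace these with the block device names from your own cluster
    - blockdevice-11468d388afb4f901a2a0be368cf4ccd
    - blockdevice-e925dc2fb9192244050b3109ce521216
    - blockdevice-ea8eec503644998e92c4159ad0dfc4ed
```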
After updating cstor.yaml with the relevant block devices observed in your environment, run the following command:
```
kubectl apply -f ./openEBS/yaml/cstor.yaml
```
The PoolType selected in this case is striped. The available options are striped, mirrored, raidz, and raidz2.
For further information on the types of storage pools, please refer to the link here.
There’s an example YAML available for deploying a cStor-backed StorageClass so you can provision PVs/PVCs with it.
The replicaCount in it is currently set to 1, but you can tweak it as per your needs. If the application handles replication itself, it is recommended to keep the replicaCount at 1.
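As a rough sketch of what such a cStor-backed StorageClass typically looks like with SPC-based provisioning (the class name openebs-cstor-sc and the referenced pool name cstor-disk-pool are illustrative; the example YAML in the repo is the one to use):
```
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-cstor-sc
  annotations:
    openebs.io/cas-type: cstor
    cas.openebs.io/config: |
      - name: StoragePoolClaim
        value: "cstor-disk-pool"
      - name: ReplicaCount
        value: "1"
provisioner: openebs.io/provisioner-iscsi
```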
Run the following command to deploy the StorageClass:
```
kubectl apply -f ./openEBS/yaml/cstor.yaml
```
Once deployed, you can use the new cStor StorageClass to provision Kubernetes Persistent Volumes (PVs) and the associated Persistent Volume Claims (PVCs) for deploying your application workloads.
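For example, a PVC that requests storage from the new class could look like the following; the PVC name, StorageClass name, and requested size are placeholders to adjust for your workload:
```
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-cstor-pvc
spec:
  # Reference the cStor StorageClass created above
  storageClassName: openebs-cstor-sc
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
```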