How To Set Up OpenEBS on Kubernetes Using Platform9

In this article, you will learn how to set up OpenEBS on Kubernetes clusters as a storage solution for your deployment.

OpenEBS is one of the leading open-source projects for cloud-native storage on Kubernetes. OpenEBS adopts the Container Attached Storage (CAS) approach, where each workload is provided with a dedicated storage controller.

OpenEBS provides granular policies per stateful workload, high availability, native HCI on Kubernetes, and several other benefits. Since it’s open-source and built completely in userspace, it is highly portable to run across any OS/platform and positions itself well as a cloud-native technology that allows you to build a stack that is cloud-vendor agnostic. It is also completely intent-driven, inheriting the same principles that drive the ease of use with Kubernetes.

Prerequisites

Here are the prerequisites for installing OpenEBS –

  • A working Kubernetes Cluster.
  • A working Helm 3 installation.
  • Install the iSCSI-related packages on all the nodes.
  1. For Ubuntu, here are the steps –
sudo apt-get update
sudo apt-get install open-iscsi
sudo systemctl enable --now iscsid

To ensure that iscsi services are functioning correctly, check the output of the following commands –

sudo cat /etc/iscsi/initiatorname.iscsi
systemctl status iscsid
  2. For CentOS/RHEL, here are the steps to install the iSCSI packages and enable the service –
yum install iscsi-initiator-utils -y
sudo systemctl enable --now iscsid

To ensure that iscsi services are functioning correctly, check the output of the following commands –

cat /etc/iscsi/initiatorname.iscsi
systemctl status iscsid

For any other operating system, follow the iSCSI installation steps documented for that operating system or cloud provider.

  • Ensure that you have cluster admin-context before proceeding to Installation steps.
  • The cluster should be configured to run the containers in Privileged mode.

In a PMK (Platform9 Managed Kubernetes) environment, all you need to do is select the Privileged mode checkbox while creating a cluster from the UI.

Fig 1. Privileged Option in Cluster Settings

  • Disks that will form the storage pool are attached to the worker nodes. It’s recommended to have a homogeneous setup as far as possible in terms of disk size, number of disks, etc. (see the quick check below).
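
As a quick sanity check, you can list the block devices attached to each worker node and confirm that they are unmounted and of the expected sizes (this assumes standard Linux tooling such as lsblk is available on the nodes) –

lsblk -o NAME,SIZE,TYPE,MOUNTPOINT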

Installing OpenEBS on Kubernetes

You can deploy the OpenEBS components either in the default namespace or in a custom namespace dedicated to the OpenEBS pods and related resources. The latter is the recommended option.

  1. Create the openebs namespace (optional) –
kubectl create ns openebs
  2. Add the OpenEBS repo and then deploy the associated Helm chart –
helm repo add openebs https://openebs.github.io/charts
helm repo update
helm install --namespace openebs openebs openebs/openebs

This installs the OpenEBS pods with default settings; you can modify the Helm chart values by referring to this link (Custom Installation Mode section).
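
For example, a common pattern for a customized install is to dump the chart’s default values, edit the copy, and pass it back to Helm (the custom-values.yaml filename is just an example) –

# Dump the chart defaults, edit the copy, then install with the overrides
helm show values openebs/openebs > custom-values.yaml
helm install --namespace openebs openebs openebs/openebs -f custom-values.yaml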

Verifying installation

  • Ensure that all the pods in the openebs namespace are in the Running state –
kubectl get pods -n openebs

Example output –

cstor-disk-pool-579u-559767cb7d-jp9t7               3/3     Running   0          6d5h
cstor-disk-pool-flf6-698b9fd475-n9968               3/3     Running   0          6d5h
cstor-disk-pool-t4qa-568c98dc94-vstmt               3/3     Running   0          6d5h
openebs-admission-server-66974b6ffd-87tjx           1/1     Running   0          6d5h
openebs-apiserver-6c4d9f4f9d-7smn2                  1/1     Running   0          6d5h
openebs-localpv-provisioner-bcd5b8b5-ngzq4          1/1     Running   0          6d5h
openebs-ndm-mnjpp                                   1/1     Running   0          6d5h
openebs-ndm-operator-778f9c566-wqfp4                1/1     Running   0          6d5h
openebs-ndm-r7wgg                                   1/1     Running   0          6d5h
openebs-ndm-x4plz                                   1/1     Running   0          6d5h
openebs-provisioner-57b7dfbc88-bttqw                1/1     Running   0          6d5h
openebs-snapshot-operator-69bb776f8-kz2ss           2/2     Running   0          6d5h
  • Ensure that default storage classes have been created –
kubectl get sc
NAME                        PROVISIONER                                                RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
openebs-device              openebs.io/local                                           Delete          WaitForFirstConsumer   false                  6d5h
openebs-hostpath            openebs.io/local                                           Delete          WaitForFirstConsumer   false                  6d5h
openebs-jiva-default        openebs.io/provisioner-iscsi                               Delete          Immediate              false                  6d5h
openebs-snapshot-promoter   volumesnapshot.external-storage.k8s.io/snapshot-promoter   Delete          Immediate              false

(Skipping the default storage class output)

  • The NDM DaemonSet creates a BlockDevice CR for each block device discovered on a node, with two exceptions –
  1. Disks that match the exclusions in ‘vendor-filter’ and ‘path-filter’
  2. Disks that are already mounted on the node

The following command lists the blockdevice custom resources –

kubectl get blockdevice -n openebs
NAME                                           NODENAME         SIZE          CLAIMSTATE   STATUS   AGE
blockdevice-11468d388afb4f901a2a0be368cf4ccd   10.128.146.28    10736352768   Claimed      Active   6d5h
blockdevice-e925dc2fb9192244050b3109ce521216   10.128.146.106   10736352768   Claimed      Active   6d5h
blockdevice-ea8eec503644998e92c4159ad0dfc4ed   10.128.146.145   10736352768   Claimed      Active   6d5h
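
To inspect an individual device in more detail (capacity, filesystem details, claim state), you can describe its CR; the device name below is taken from the sample output above, so substitute one from your own cluster –

kubectl describe blockdevice blockdevice-11468d388afb4f901a2a0be368cf4ccd -n openebs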

cStor

Background: cStor is the recommended option for getting additional workload resiliency via OpenEBS. It provides enterprise-ready features such as synchronous data replication, snapshots, clones, thin provisioning, high resiliency and consistency of data, and on-demand increase of capacity or performance.

The core function of cStor is to provide iSCSI block storage using the locally attached disks/cloud volumes.

Additional details can be found in the OpenEBS documentation (see References).

Deploy cStor Pools and the associated Storage Class.

To create a cStor storage pool, you have to provide the list of blockdevices seen in the output above.

A sample yaml file is already present in the ./openEBS/yaml folder of this repo; clone the repo to use it. Edit the yaml file and add the blockdevices listed by the following command –

kubectl get blockdevice -o jsonpath='{ range .items[*]} {.metadata.name}{"\n"}{end}'

After updating the cstor.yaml with the relevant blockdevices observed in your environment, run the following command –

kubectl apply -f ./openEBS/yaml/cstor.yaml

The poolType selected in this case is striped. The available options are striped, mirrored, raidz, and raidz2.
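
For reference, here is a minimal sketch of what such a StoragePoolClaim typically looks like; the pool name is a placeholder and the blockDeviceList entries must be replaced with the blockdevices from your own cluster –

# Hypothetical StoragePoolClaim sketch; adjust the name, poolType, and devices for your environment
apiVersion: openebs.io/v1alpha1
kind: StoragePoolClaim
metadata:
  name: cstor-disk-pool
spec:
  name: cstor-disk-pool
  type: disk
  poolSpec:
    poolType: striped
  blockDevices:
    blockDeviceList:
    - blockdevice-11468d388afb4f901a2a0be368cf4ccd
    - blockdevice-e925dc2fb9192244050b3109ce521216
    - blockdevice-ea8eec503644998e92c4159ad0dfc4ed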

For further information on the types of storage pools, please refer to the link here.

There’s an example yaml available for deploying a cStor-backed StorageClass so that you can provision PVs/PVCs with it.

The ReplicaCount in it is currently set to 1, but you can tweak it to suit your needs. If the application handles replication itself, it’s recommended to keep the ReplicaCount at 1.
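
A minimal sketch of such a cStor-backed StorageClass is shown below; the class name is a placeholder, and the StoragePoolClaim value must match the name of the pool you created above –

# Hypothetical StorageClass sketch for cStor volumes
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-cstor-disk
  annotations:
    openebs.io/cas-type: cstor
    cas.openebs.io/config: |
      - name: StoragePoolClaim
        value: "cstor-disk-pool"
      - name: ReplicaCount
        value: "1"
provisioner: openebs.io/provisioner-iscsi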

Run the following command to deploy the StorageClass –

kubectl apply -f ./openEBS/yaml/cstor.yaml

Once deployed, you can use the new cStor StorageClass to provision PVs and associated PVCs for your application workloads.
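
For example, a minimal sketch of a PVC that requests a volume from the cStor StorageClass (the claim name is a placeholder; storageClassName must match the StorageClass you deployed) –

# Hypothetical PVC sketch that binds to the cStor StorageClass
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-cstor-pvc
spec:
  storageClassName: openebs-cstor-disk
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi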

References

https://docs.openebs.io/docs/next/features.html

Next Steps

In this blog, we walked through a tutorial on how to set up OpenEBS on Kubernetes. We hope you found this blog informative and engaging. For more reads like this one, visit our blog page or subscribe for up-to-date news, projects, and related content in real-time.

