How To Set Up OpenEBS on Kubernetes Using Platform9

OpenEBS is one of the leading open-source projects for cloud-native storage on Kubernetes. OpenEBS adopts the Container Attached Storage (CAS) approach, where each workload is provided with a dedicated storage controller.

OpenEBS provides granular policies per stateful workload, high availability, native HCI on Kubernetes, and several other benefits. Since it’s open-source and built completely in userspace, it is highly portable to run across any OS/platform and positions itself well as a cloud-native technology that allows you to build a stack that is cloud-vendor agnostic. It is also completely intent-driven, inheriting the same principles that drive the ease of use with Kubernetes.

Getting Started

To get started, create your free Platform9 Kubernetes account by entering your email below, then select the ‘Deploy Now’ option. Once you have created and verified your account, follow the steps outlined below.

Prerequisites

Here are the prerequisites needed for installing OpenEBS:

  • A working Helm 3 installation (link).
  • Install the iSCSI-related packages on all the nodes.
  1. For Ubuntu, install the open-iscsi package and enable the iscsid service.
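A minimal sketch of the Ubuntu steps, assuming a systemd-based release (exact package and service names can vary by Ubuntu version):

```shell
# Install the iSCSI initiator package
sudo apt-get update
sudo apt-get install -y open-iscsi

# Enable the iscsid service and start it immediately
sudo systemctl enable --now iscsid
```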

To ensure that the iscsi services are functioning correctly, check the status of the iscsid service.
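Two quick checks, assuming systemd and the default open-iscsi configuration paths:

```shell
# Verify that the iscsid service is active and running
sudo systemctl status iscsid

# Confirm the node has an iSCSI initiator name configured
cat /etc/iscsi/initiatorname.iscsi
```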
  2. For CentOS/RHEL, here are the steps to install the iSCSI packages:

yum install iscsi-initiator-utils -y

To ensure that the iscsi services are functioning correctly, check the iscsid service status in the same way as described above for Ubuntu.

For any other operating systems, follow the steps mentioned for each of the operating systems/cloud providers.

  • Ensure that you have a cluster-admin context before proceeding to the installation steps.
  • The cluster should be configured to run containers in privileged mode.

In a PMKFT environment, all you need to do is select the Privileged mode checkbox while creating a cluster from the UI.

Installing OpenEBS on Kubernetes

You can deploy the OpenEBS components either in the default namespace or in a custom namespace dedicated to OpenEBS-related pods. The latter is the recommended option.

Create openebs namespace (Optional)

kubectl create ns openebs

Add the OpenEBS Helm repository and then deploy the associated Helm chart.
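A sketch of the Helm commands, assuming the upstream OpenEBS chart repository and the openebs namespace created above (the release name `openebs` is an arbitrary choice):

```shell
# Add the OpenEBS chart repository and refresh the local chart index
helm repo add openebs https://openebs.github.io/charts
helm repo update

# Install the chart into the openebs namespace
helm install openebs openebs/openebs --namespace openebs
```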

This installs the OpenEBS pods with the default settings. You can modify the Helm chart values by referring to this link (Custom Installation Mode section).

Verifying installation

  • Ensure that all the pods in the openebs namespace are in a Running state:

kubectl get pods -n openebs

  • Ensure that the default storage classes have been created.
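You can list the storage classes with kubectl; the OpenEBS defaults (e.g. openebs-hostpath, openebs-device) should appear among them:

```shell
# List all storage classes in the cluster
kubectl get storageclass
```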

(Skipping the default storage class output)

  • The NDM DaemonSet creates a BlockDevice custom resource for each block device discovered on a node, with two exceptions:
  1. Disks that match the exclusions in ‘vendor-filter’ and ‘path-filter’
  2. Disks that are already mounted on the node

The following command lists the blockdevice custom resources.
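A sketch of the listing command (the namespace matches wherever OpenEBS was installed, openebs here):

```shell
# List the BlockDevice custom resources discovered by NDM
kubectl get blockdevice -n openebs
```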

cStor

Background: cStor is the recommended option to get additional workload resiliency via OpenEBS. It provides enterprise-ready features such as synchronous data replication, snapshots, clones, thin provisioning of data, high resiliency of data, data consistency, and on-demand increase of capacity or performance.

The core function of cStor is to provide iSCSI block storage using the locally attached disks/cloud volumes.

Additional details can be found here.

Deploy cStor Pools and the associated Storage Class.

Note: you have to provide the list of blockdevices seen in the above output when creating a cStor storage pool.

A sample yaml file is already present in the ./openEBS/yaml folder of this repo; please clone the repo to use it. You’ll have to edit the yaml file and add the blockdevices reported by the command below:

kubectl get blockdevice -o jsonpath='{range .items[*]}{.metadata.name}{"\n"}{end}'

After updating the cstor.yaml with the relevant blockdevices observed in your environment, run the following command –

kubectl apply -f ./openEBS/yaml/cstor.yaml

The poolType selected in this case is striped. The available options are striped, mirrored, raidz, and raidz2.
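As an illustration, the pool spec takes roughly this shape (a hedged sketch based on the OpenEBS v1 StoragePoolClaim API; the pool name and blockdevice entry below are placeholders for the values from your environment):

```yaml
apiVersion: openebs.io/v1alpha1
kind: StoragePoolClaim
metadata:
  name: cstor-disk-pool            # placeholder pool name
spec:
  name: cstor-disk-pool
  type: disk
  poolSpec:
    poolType: striped              # striped | mirrored | raidz | raidz2
  blockDevices:
    blockDeviceList:
      # replace with the blockdevice names listed in your cluster
      - blockdevice-example-1
```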

For further information on the types of Storage Pools, please refer to the link here.

There’s an example yaml available for deploying a cStor-backed StorageClass so you can deploy PVs/PVCs associated with it.

The replicaCount in it is currently set to 1, but you can tweak it as per your needs. If the application handles replication itself, it’s recommended to keep the replicaCount at 1.
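For reference, a cStor StorageClass typically looks like the following (a hedged sketch; the class name is a placeholder, and the StoragePoolClaim value must match the name of the pool created in your environment):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-cstor-sc           # placeholder class name
  annotations:
    openebs.io/cas-type: cstor
    cas.openebs.io/config: |
      - name: StoragePoolClaim
        value: "cstor-disk-pool"   # must match your cStor pool name
      - name: ReplicaCount
        value: "1"
provisioner: openebs.io/provisioner-iscsi
```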

Run the following command to deploy the StorageClass –

kubectl apply -f ./openEBS/yaml/cstor.yaml

Once deployed, you can use the new cStor StorageClass to provision PVs and associated PVCs for deploying application workloads.
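For example, a PVC requesting storage from the cStor class might look like this (the claim name, class name, and size are placeholders; use the name of your cStor StorageClass):

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: demo-cstor-pvc             # placeholder claim name
spec:
  storageClassName: openebs-cstor-sc   # placeholder; your cStor StorageClass
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
```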

References

https://docs.openebs.io/docs/next/features.html
