Tutorial: Integrating OpenStack Cinder with NetApp (Cluster NFS Mode)
The NetApp unified driver for clustered Data ONTAP with NFS is a driver interface for OpenStack Block Storage (Cinder) that provisions and manages OpenStack volumes on NFS exports served by a Data ONTAP cluster. The driver requires no additional management software; it uses NetApp APIs to interact with the Data ONTAP cluster directly.
This tutorial describes the mechanism to enable Cinder block storage integration between Platform9 Managed OpenStack and your NetApp storage array.
For a tutorial on general Cinder integration with Platform9 refer here.
By default, a Platform9 Managed OpenStack host with the block storage role uses the Cinder driver for LVM (Linux Logical Volume Manager). This example shows how to enable the NetApp driver instead, integrating with a NetApp storage appliance configured to use the clustered NFS model as described here.
This guide assumes the following:
- This guide applies to Platform9 Managed OpenStack for KVM
- You have one, and only one, host with the block storage role
- You have the information necessary to connect to the storage appliance and authenticate with it
- The user created for NetApp volume access has edit/create/delete permissions
- The root user is able to mount the volume using the mount command on the host (see the Verification of LIF IP and shares path connectivity section below)
- If SELinux is installed, it is in permissive mode (see the Note regarding SELinux section below)
Log in as root onto the Linux host to which you have assigned Cinder block storage role.
Change directory to /opt/pf9/etc/pf9-cindervolume-base/conf.d
Create a file named cinder_override.conf with a single [DEFAULT] section containing the name of the Cinder driver and the parameters required to connect to the storage array. This example shows a sample configuration for NetApp:[ini][DEFAULT]
volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family = ontap_cluster
netapp_storage_protocol = nfs
nfs_shares_config = /opt/pf9/etc/pf9-cindervolume-base/netapp_exports
netapp_server_hostname = 10.0.0.8
netapp_vserver = MYNetAppVServer
# Use port 80 for HTTP or 443 for HTTPS, depending on your configuration
netapp_server_port = 80
netapp_login = your_username
netapp_password = your_password[/ini]
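Before restarting the service, it can be worth confirming that the override file parses as valid INI and contains every key the driver needs. Below is a minimal sketch using Python's standard configparser; the `missing_keys` helper and the sample text are illustrative, not part of Platform9's tooling:

```python
import configparser

# Keys from the sample configuration above; all must be present
# under [DEFAULT] for the NetApp NFS driver to load.
REQUIRED = [
    "volume_driver", "netapp_storage_family", "netapp_storage_protocol",
    "nfs_shares_config", "netapp_server_hostname", "netapp_vserver",
    "netapp_server_port", "netapp_login", "netapp_password",
]

def missing_keys(ini_text: str) -> list:
    """Return the required driver keys absent from the [DEFAULT] section."""
    cfg = configparser.ConfigParser()
    cfg.read_string(ini_text)
    return [k for k in REQUIRED if k not in cfg["DEFAULT"]]

# Validate the sample configuration from this tutorial.
sample = """\
[DEFAULT]
volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family = ontap_cluster
netapp_storage_protocol = nfs
nfs_shares_config = /opt/pf9/etc/pf9-cindervolume-base/netapp_exports
netapp_server_hostname = 10.0.0.8
netapp_vserver = MYNetAppVServer
netapp_server_port = 80
netapp_login = your_username
netapp_password = your_password
"""
print(missing_keys(sample))  # prints [] when all required keys are set
```

To check the real file, pass its contents instead, e.g. `missing_keys(open("/opt/pf9/etc/pf9-cindervolume-base/conf.d/cinder_override.conf").read())`.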
Create a file, readable by all users, at any path and with any name; in this example we name it netapp_exports and place it in /opt/pf9/etc/pf9-cindervolume-base/ (the parent of the directory containing cinder_override.conf). If you use a different name or path, change the nfs_shares_config value above to point to that file.
The file referenced in the nfs_shares_config configuration option should contain the NFS exports in the ip:/share format, for example:
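For instance, a one-line netapp_exports file might look like the following, assuming a Data LIF at 10.0.0.20 and a FlexVol junction path /vol_cinder (both are illustrative placeholders; substitute your own values):

```
10.0.0.20:/vol_cinder
```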
where ip corresponds to the IP address assigned to a Data LIF (in NetApp terms, a logical interface associated with a physical port), and share refers to the junction path of a FlexVol volume within an SVM. Make sure that the volumes backing these exports have read/write permissions set on the Data ONTAP controllers.
Restart the pf9-cindervolume service. For example, on Enterprise Linux distributions such as CentOS:[bash]service pf9-cindervolume-base restart[/bash]
At this point, your Cinder block storage integration between Platform9 Managed OpenStack and NetApp is complete! You should be able to leverage OpenStack features such as creating volumes and snapshots, mounting volumes on virtual machines, booting virtual machines using volumes, etc. This article describes a general workflow around using OpenStack Cinder with Platform9.
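Assuming the OpenStack CLI is configured against your Platform9 cloud, a basic smoke test of the new backend might look like this (the volume name, snapshot name, and size are arbitrary):

```shell
# Create a 1 GiB volume; the NetApp driver provisions it as a file
# on one of the NFS exports listed in netapp_exports.
openstack volume create --size 1 netapp-test-vol

# Confirm the volume reaches the "available" state.
openstack volume show netapp-test-vol -f value -c status

# Snapshot it, then clean up both objects.
openstack volume snapshot create --volume netapp-test-vol netapp-test-snap
openstack volume snapshot delete netapp-test-snap
openstack volume delete netapp-test-vol
```

If the volume stays in the "error" state instead, check the pf9-cindervolume logs on the block storage host for mount or authentication failures.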
Verification of LIF IP and shares path connectivity
In order for Cinder and Nova to mount the shared volumes successfully, verify that the data LIF:/share export can be mounted as root. For example, using NFS version 4.1 (substitute your own LIF IP and junction path for the placeholders):[bash]mount -t nfs -o vers=4.1 <LIF-IP>:/<share> /mnt[/bash]
Verify that /mnt/ (or wherever you have mounted the shared volume) is accessible and writable by touching a file:[bash]cd /mnt/
touch testfile[/bash]
Note regarding SELinux
If SELinux is installed on your host, it needs to be in “permissive” mode. To check the status of SELinux, use the getenforce command. If the output does not say Permissive, set it to permissive using setenforce 0 and follow this article to make the configuration persist.
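Note that setenforce 0 only changes the running mode; to keep SELinux permissive across reboots, the standard approach on Enterprise Linux is to set SELINUX=permissive in /etc/selinux/config, for example (SELINUXTYPE shown with its common default):

```
# /etc/selinux/config
SELINUX=permissive
SELINUXTYPE=targeted
```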