Create Multi-Master Cluster
This document describes how to create a highly available, multi-master, on-premises BareOS Kubernetes cluster using PMK. We recommend reading What is BareOS and BareOS Cluster Architecture for background before proceeding with this document.
A highly available cluster is composed of at least 3 master nodes, each running a member of the etcd distributed database along with the other Kubernetes control plane components (kube-apiserver, kube-controller-manager, and kube-scheduler). We choose an odd number of master nodes so that the etcd members can establish quorum and maintain fault tolerance.
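For example, an etcd cluster of n members needs a quorum of ⌊n/2⌋+1 members to accept writes: a 3-member cluster needs 2 members and can therefore tolerate the loss of 1 master, while a 5-member cluster needs 3 and tolerates the loss of 2. An even-sized cluster (for example, 4 members) raises the quorum requirement without improving fault tolerance, which is why an odd number is preferred.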
To deploy a multi-master cluster, a reserved IP address is required for use as the highly available API server endpoint; this is the Cluster Virtual IP.
To deploy MetalLB, a reserved IP range is required for provisioning IPs to Kubernetes Services.
Example: 10.128.159.240-253 as a reserved IP range for all components: masters, workers, the Virtual IP, and MetalLB (a quick reachability check follows the list below).
- Master Virtual IP: 10.128.159.240
- Masters
  - Master 01: 10.128.159.241
  - Master 02: 10.128.159.242
  - Master 03: 10.128.159.243
- Workers
  - Worker 01: 10.128.159.246
  - Worker 02: 10.128.159.247
  - Worker 03: 10.128.159.248
- MetalLB Range
  - Starting IP: 10.128.159.250 – Ending IP: 10.128.159.253
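Before creating the cluster, it can help to confirm that the reserved addresses are not already in use on your network. A minimal sketch, assuming the nodes reach this subnet through an interface named eth0 (the interface name and addresses are examples; adjust them to your environment):

# Check that the Cluster Virtual IP is currently unclaimed (expect no ARP reply; may require sudo).
arping -c 3 -I eth0 10.128.159.240

# Check that the MetalLB range is free as well.
for ip in 10.128.159.250 10.128.159.251 10.128.159.252 10.128.159.253; do
  ping -c 1 -W 1 "$ip" >/dev/null && echo "$ip is already in use" || echo "$ip appears free"
done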
Create BareOS Cluster Using UI
Follow the steps given below to create a BareOS Kubernetes cluster using the PMK UI.
- Log in to the UI with either your Local Credentials or Single Sign On (Enterprise).
- Select the BareOS Virtual Machine or BareOS Physical Servers option, depending on where you are creating your BareOS cluster.
- Click the 'Multi-master Cluster' button in the wizard.
- Under the Initial Setup page of the wizard, fill in the required options using the table below, then proceed to the next step.
Option | Description
---|---
Cluster Settings |
Kubernetes Version | Select the Kubernetes version from the list of supported Kubernetes versions.
Application & Container Settings |
Privileged Containers | Select the checkbox to enable the cluster to run privileged containers. Being able to run privileged containers is a prerequisite if you wish to enable the LoadBalancer service type using MetalLB. By default a container is not allowed to access any devices on the host, but a 'privileged' container is given access to all devices on the host. For more information, see Privileged Policy Reference.
Make Master nodes Master + Worker | Opt to schedule workloads onto the master nodes; otherwise only the necessary control plane services are deployed and the master nodes are cordoned.
Cluster Add-Ons |
Enable ETCD Backup | Configures automated etcd backups.
Deploy MetalLB + Layer2 Mode | MetalLB is a software load balancer that is deployed and managed by PMK. MetalLB will be automatically attached to the cluster and allows services to be deployed using the LoadBalancer service type, simplifying the steps required to make applications accessible outside of the cluster. Requirements: MetalLB requires a reserved network address range, which it manages to expose Kubernetes services. Example: Starting IP 10.128.159.250 – Ending IP 10.128.159.253
Monitoring | Enable Prometheus monitoring for this cluster. Learn more here: In Cluster Monitoring
KubeVirt | Enable the cluster to support running virtual machines. Learn more in the KubeVirt documentation.
Network Plugin Operator | Enable advanced networking options via the PMK Luigi network operator. Learn more here: Luigi Networking Quickstart
ETCD Backup Configuration |
Storage Path | The path on the master node where etcd backup data will be stored. Requirement: the path must be created and available on all master nodes.
Backup Interval (Minutes) | Controls backup frequency.
MetalLB Configuration |
Address Range | IP address range for MetalLB. MetalLB will use this IP pool when allocating addresses for new services of type LoadBalancer for your applications (see the example after this table). Example: Starting IP 10.128.159.250 – Ending IP 10.128.159.253
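Once the cluster is up with MetalLB enabled, any Service of type LoadBalancer receives an address from the configured pool. The following is a minimal sketch using standard kubectl commands; the deployment name web is chosen only for this example:

# Create a test deployment and expose it through a LoadBalancer service.
kubectl create deployment web --image=nginx
kubectl expose deployment web --port=80 --type=LoadBalancer

# The EXTERNAL-IP column should show an address from the reserved range, e.g. 10.128.159.250.
kubectl get service web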
- The 'Master Nodes' section of the wizard will initially be empty. This is because you need to download and install the Platform9 CLI on your nodes, so they can connect to the PMK management plane and show up in your wizard.
- To achieve this, download and install the PMK CLI on at least one of your nodes by running the following command on the node.
bash <(curl -sL https://pmkft-assets.s3-us-west-1.amazonaws.com/pf9ctl_setup)
____ _ _ __ ___
| _ \| | __ _| |_ / _| ___ _ __ _ __ ___ / _ \
| |_) | |/ _` | __| |_ / _ \| '__| '_ ` _ \ (_) |
| __/| | (_| | |_| _| (_) | | | | | | | \__, |
|_| |_|\__,_|\__|_| \___/|_| |_| |_| |_| /_/
Note: SUDO access required to run Platform9 CLI.
You might be prompted for your SUDO password.
Downloading Platform9 CLI binary...
Platform9 CLI binary downloaded.
Installing Platform9 CLI...
Platform9 CLI installation completed successfully !
To start building a Kubernetes cluster execute:
pf9ctl help
The command will install the CLI. Now run the CLI config set command to configure the CLI to connect to and use your PMK deployment.
pf9ctl config set
Platform9 Account URL: https://__ACCOUNT__.platform9.net
Username: demo@platform9.net
Password:
Region [RegionOne]: k8s
Tenant [service]: demo
✓ Stored configuration details Succesfully
Note: For PMK Enterprise users, specify the correct values for the Region and Tenant within which you are creating your cluster. (These correspond to the drop-down selectors at the top right of the PMK UI nav bar.)
Now run the CLI prep-node command to prepare your nodes with required prerequisites.
pf9ctl prep-node
The prep-node command will perform prerequisite checks on your node. If any checks fail, you will receive output similar to the following.
✓ Loaded Config Successfully
✓ Missing package(s) installed successfully
✓ Removal of existing CLI
✓ Existing Platform9 Packages Check
✓ Required OS Packages Check
✓ SudoCheck
✓ CPUCheck
x DiskCheck - At least 30 GB of total disk space and 15 GB of free space is needed on host. Disk Space found: 2 GB
x MemoryCheck - At least 12 GB of memory is needed on host. Total memory found: 4 GB
✓ PortCheck
✓ Existing Kubernetes Cluster Check
✓ Completed Pre-Requisite Checks successfully
Optional pre-requisite check(s) failed. Do you want to continue? (y/n)
It is highly recommended that you meet all of the optional prerequisites; otherwise you may experience degraded performance for scheduled pods or other unforeseen issues.
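If the DiskCheck or MemoryCheck fails as in the output above, you can verify the host's resources with standard Linux commands before re-running prep-node; the thresholds (30 GB total disk, 15 GB free, 12 GB memory) are the ones reported by the CLI above:

df -h /        # total and free disk space on the root filesystem
free -h        # total and available memory
nproc          # number of CPU cores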
If you encounter the error: Failure to prepare node, please review the pf9ctl log file for additional context.
Enterprise – Please submit a Support Request with the log attached and our team will review and work with you to onboard the node.
Failed to prepare node. See /root/pf9/log/pf9ctl-20210330.log or use --verbose for logs
Once the prep-node command succeeds, you will see output similar to the following. Your current node is now prepared with the required software packages.
✓ Loaded Config Successfully
✓ Missing package(s) installed successfully
✓ Removal of existing CLI
✓ Existing Platform9 Packages Check
✓ Required OS Packages Check
✓ SudoCheck
✓ CPUCheck
✓ DiskCheck
✓ MemoryCheck
✓ PortCheck
✓ Existing Kubernetes Cluster Check
✓ Completed Pre-Requisite Checks successfully
✓ Disabled swap and removed swap in fstab
✓ Hostagent installed successfully
✓ Initialised host successfully
✓ Host successfully attached to the Platform9 control-plane
The CLI can also be used to prepare other remote nodes, as long as the node that the CLI runs on can connect to those nodes via SSH. To do this, specify the SSH username and password (or private key) used to connect to the remote nodes, along with the IP address of each remote node to be prepared, using the -i option. The following example prepares the current node and two other remote nodes; in this example, all the remote nodes share the same username and password for SSH access.
pf9ctl cluster prep-node -u testuser -p testpassword -s ~/.ssh/id_rsa -i localhost -i 150.20.7.65 -i 150.20.7.66
- Once you prepare all nodes using the CLI, these nodes will show up under the Nodes tab in the PMK UI.
- Switch to your cluster creation wizard and proceed to the next step. You should now see the nodes you just prepared as available to select as master nodes.
- Select at least 3 nodes as master nodes for your cluster, then proceed to the next step.
A multi-master cluster requires at least 3 master nodes. If you have not yet authorized at least 3 nodes, you will be unable to proceed until additional nodes have been authorized.
- Under the Worker Nodes step of the wizard, select one or more worker nodes for your cluster, then proceed to the next step.
- Under the Network step, configure networking for your cluster based on the table below, then proceed to the next step.
Field | Value
---|---
Cluster API FQDN |
API FQDN | The FQDN (DNS name) that is to be used to access the Kubernetes cluster API server from outside of the cluster.
Cluster Virtual IP Configuration |
Virtual IP Address for Cluster | The reserved IP address (or high availability floating IP address) with which users will access the cluster (see the connectivity check after this table). See Multi-Master Architecture.
Physical Interface for Virtual IP Association | The network interface to which the virtual IP gets associated. Ensure that the virtual IP specified above is accessible on this network interface, and that all master nodes use the same interface name for the interface to be associated with the virtual IP.
Cluster Networking Range & HTTP Proxy |
Containers CIDR | The IP range that Kubernetes uses to configure the pods (containers) deployed by Kubernetes.
Services CIDR | The IP range that Kubernetes uses to configure the services deployed by Kubernetes.
HTTP Proxy | If your on-premises network uses an HTTP proxy, specify the details here.
Cluster CNI |
Network Backend | The CNI networking backend to be used for this cluster.
IP in IP Encapsulation Mode (Calico) | Encapsulation mode for the Calico CNI. See Calico CNI for a better understanding of the advanced parameters.
Interface Detection Method (Calico) | Advanced networking options for the Calico CNI.
NAT Outgoing (Calico) | NAT mode for the Calico CNI.
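After the cluster has been created, you can confirm that the virtual IP and FQDN answer on the API server port. A minimal sketch, assuming the example virtual IP 10.128.159.240, a hypothetical FQDN k8s-api.example.com, and the API server exposed on port 443 (adjust the port if your deployment differs); depending on whether anonymous access is permitted, the health endpoint may return ok or an authentication error, and either response confirms the endpoint is reachable:

# Check that the API server port is open on the virtual IP.
nc -vz 10.128.159.240 443

# Query the API server health endpoint through the FQDN (certificate verification skipped for brevity).
curl -k https://k8s-api.example.com/healthz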
- Under the Advanced Step, enter the advanced configuration details for your cluster based on the table below.
You must have in-depth knowledge of the Kubernetes API to correctly use the Advanced API Configuration option. If the advanced APIs are configured inappropriately, the cluster may work incorrectly or become inaccessible.
Field | Option
---|---
Advanced API Configuration |
Default API groups and versions | Select the Default API groups and versions option to enable on the cluster the default APIs, based on the Kubernetes installation in your environment.
All API groups and versions | Select the All API groups and versions option to enable on the cluster all alpha, beta, and GA versions of Kubernetes APIs that have been published to date.
Custom API groups and versions | Select the Custom API groups and versions option to specify one or more API versions that you wish to enable and/or disable. Enter the API versions in the text area following the Custom API groups and versions option. For example, to enable Kubernetes v1 APIs, enter the expression api/v1=true. Similarly, to disable Kubernetes v2 APIs, enter the expression api/v2=false. To enable and/or disable multiple versions, enter comma-separated expressions, such as api/v2=false,api/v1=true. (See the verification example after this table.)
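Once the cluster is running, you can confirm which API groups and versions ended up enabled. These are standard kubectl commands and assume your kubeconfig points at the new cluster:

# List every API group/version the API server is serving.
kubectl api-versions

# List the resources available in a specific group, for example apps.
kubectl api-resources --api-group=apps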
- Optionally add any metadata tags to your cluster.
- Review the cluster details.
- Click Complete to finalize and deploy the cluster!
You can now deploy your applications on the highly available multi-master Kubernetes cluster.
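As a quick sanity check after deployment, download the cluster's kubeconfig from the PMK UI and confirm that all masters and workers have joined. A minimal sketch, assuming the kubeconfig was saved as ~/Downloads/my-cluster.yaml (a hypothetical filename):

export KUBECONFIG=~/Downloads/my-cluster.yaml

# All 3 masters and every worker should report a Ready status.
kubectl get nodes -o wide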
Create BareOS Cluster Using REST API
Advanced users can automate the process of creating a multi-master BareOS Kubernetes cluster by integrating with our Qbert API.