Create Multi-Master Cluster

This document describes the creation of a highly available multi-master on-premises BareOS Kubernetes cluster using PMK. We recommend reading What is BareOS and BareOS Cluster Architecture for background before proceeding with this document.

A highly available cluster is composed of at least 3 master nodes, each running a member of the etcd distributed database along with the other Kubernetes control plane components (i.e. kube-apiserver, kube-controller-manager, and kube-scheduler). An odd number of master nodes is used so that etcd can establish quorum and maintain fault tolerance; for example, a 3-member etcd cluster keeps quorum with 2 members and can therefore tolerate the loss of one master.

To deploy a multi-master cluster, a reserved IP address is required for use as the highly available API server endpoint; this is the Cluster Virtual IP.

To deploy MetalLB, a reserved IP range is required for provisioning IPs to Kubernetes Services.

Example: 10.128.159.240-253 as a reserved IP range for all components: masters, workers, the Virtual IP, and MetalLB.

  • Master Virtual IP: 10.128.159.240
  • Masters
      • Master 01: 10.128.159.241
      • Master 02: 10.128.159.242
      • Master 03: 10.128.159.243
  • Workers
      • Worker 01: 10.128.159.246
      • Worker 02: 10.128.159.247
      • Worker 03: 10.128.159.248
  • MetalLB Range
      • Starting IP 10.128.159.250 – Ending IP 10.128.159.253

Create BareOS Cluster Using UI

Follow the steps given below to create a BareOS Kubernetes cluster using the PMK UI.

  • Log in to the UI with either your Local Credentials or Single Sign On (Enterprise).
  • Select the BareOS Virtual Machine or BareOS Physical Servers option, depending on where you are creating your BareOS cluster.
  • Click the 'Multi-master Cluster' button in the wizard.
  • Under the Initial Setup page of the wizard, fill in the required options using the table below, then proceed to the next step.
Cluster Settings
  • Kubernetes Version: Select the Kubernetes version from the list of supported Kubernetes versions.

Application & Container Settings
  • Privileged Containers: Select the checkbox to enable the cluster to run privileged containers. Being able to run privileged containers within the cluster is a prerequisite if you wish to enable the service type LoadBalancer using MetalLB. By default a container is not allowed to access any devices on the host, but a 'privileged' container is given access to all devices on the host. For more information, see Privileged Policy Reference.
  • Make Master nodes Master + Worker: Opt to schedule workloads onto the master nodes; otherwise, only the necessary control plane services are deployed and the masters are cordoned.

Cluster Add-Ons
  • Enable ETCD Backup: Configures automated etcd backups.
  • Deploy MetalLB + Layer2 Mode: MetalLB is a software load balancer that is deployed and managed by PMK. MetalLB is automatically attached to the cluster and allows services to be deployed using the LoadBalancer service type, simplifying the steps required to make applications accessible outside of the cluster (see the example after this table). Requirement: MetalLB needs a reserved network address range; it manages this IP range to expose Kubernetes services. Example: Starting IP 10.128.159.250 – Ending IP 10.128.159.253.
  • Monitoring: Enable Prometheus monitoring for this cluster. Learn more in In Cluster Monitoring.
  • KubeVirt: Enable the cluster to support running virtual machines.
  • Network Plugin Operator: Enable advanced networking options via the PMK Luigi network operator. Learn more in Luigi Networking Quickstart.

ETCD Backup Configuration
  • Storage Path: The path on the master node where the etcd backup data will be stored. Requirement: the path must be created and available on all master nodes.
  • Backup Interval (Minutes): Controls the backup frequency.

MetalLB Configuration
  • Address Range: The IP address range for MetalLB. MetalLB uses this IP pool when allocating IPs to new instances of service type LoadBalancer for your applications. Example: Starting IP 10.128.159.250 – Ending IP 10.128.159.253.
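Once the cluster is up with MetalLB enabled, a Kubernetes Service of type LoadBalancer receives an external IP from the configured range. As a minimal, hypothetical sketch (the deployment name demo-web and the ports below are illustrative, not part of PMK):

    # Hypothetical example: expose an existing deployment named demo-web through
    # a LoadBalancer service; MetalLB allocates the EXTERNAL-IP from the
    # configured range (e.g. 10.128.159.250-10.128.159.253)
    kubectl expose deployment demo-web --type=LoadBalancer --port=80 --target-port=8080
    kubectl get service demo-web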

  • The 'Master Nodes' section of the wizard will initially be empty. This is because you need to download and install the Platform9 CLI on your nodes, so they can connect to the PMK management plane and show up in your wizard.
  • To achieve this, download and install the PMK CLI on at least one of your nodes by running the following commands on the node.
Download CLI Setup Script
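As an illustrative sketch only; the placeholder URL below is not a real address, so use the CLI setup URL shown in your PMK UI:

    # Download the pf9ctl setup script.
    # <pf9ctl-setup-script-url> is a placeholder for the URL provided in the PMK UI.
    curl -sL -o pf9ctl_setup "<pf9ctl-setup-script-url>"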
CLI Setup
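A minimal sketch of running the downloaded script, assuming it was saved as pf9ctl_setup in the current directory:

    # Run the setup script to install the pf9ctl CLI on this node
    bash pf9ctl_setup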

Running these commands installs the pf9ctl CLI on the node.

Now run the CLI config set command to configure the CLI to connect to and use your PMK deployment.

CLI Configuration
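A minimal sketch; this assumes pf9ctl config set runs interactively and prompts for the values it needs:

    # Configure pf9ctl to connect to your PMK management plane.
    # Assumption: the command prompts for the account URL, username, password,
    # region, and tenant rather than requiring flags.
    pf9ctl config set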

Note: For PMK Enterprise users, specify the correct Region and Tenant within which you are creating your cluster (these are selected from the drop-down selectors at the top right of the PMK UI nav bar).

Now run the CLI prep-node command to prepare your node with the required prerequisites.

Prepare Node
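A minimal sketch of preparing the local node; running with sudo is an assumption based on the command needing to install system packages:

    # Run the prerequisite checks and install the required packages on this node
    sudo pf9ctl prep-node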

The prep-node command performs prerequisite checks on your node. If any checks fail, the command output indicates which pre-requisite check(s) did not pass.


It is highly recommended that you meet all of the optional pre-requisites; otherwise you may experience degraded performance for scheduled pods or other unforeseen issues.

If you encounter the error 'Failure to prepare node', review the pf9ctl log file for additional context.

Enterprise – Please submit a Support Request with the log attached and our team will review and work with you to onboard the node.


Once the prep-node command succeeds, your current node is prepared with the required software packages.


The CLI can also be used to prepare other remote nodes, as long as the node that the CLI runs on can connect to those nodes via SSH. To accomplish this, specify the SSH username and password used to connect to the remote nodes, along with a list of IP addresses for the remote nodes to be prepared, using the -i option. The following example shows running the CLI to prepare the current node and two other remote nodes; in this example, all the remote nodes share the same username and password for SSH access.

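A hedged sketch of that invocation. The -i option for target IPs is referenced above; the -u and -p flags for the SSH username and password, and repeating -i once per address, are assumptions to verify against pf9ctl prep-node --help:

    # Prepare the local node plus two remote worker nodes over SSH.
    # Assumptions: -u/-p carry the SSH username and password, and -i is repeated per IP.
    sudo pf9ctl prep-node -u ubuntu -p '<ssh-password>' \
        -i 10.128.159.246 -i 10.128.159.247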
  • Once you have prepared all nodes using the CLI, these nodes will show up under the Nodes tab in the PMK UI.
  • Switch to your cluster creation wizard and proceed to the next step. You should now see the nodes you just prepared available to select as master nodes.
  • Select at least 3 nodes as master nodes for your cluster, then proceed to the next step.

A multi-master cluster requires at least 3 master nodes. If you have not yet authorized at least 3 nodes, you will be unable to proceed until additional nodes have been authorized.

  • Under the Worker Nodes step of the wizard, select one or more worker nodes for your cluster, then proceed to the next step.
  • Under the Network step, configure networking for your cluster based on the table below, then proceed to the next step.
Cluster API FQDN
  • API FQDN: The FQDN (DNS name) that is used to access the Kubernetes cluster API server from outside of the cluster.

Cluster Virtual IP Configuration
  • Virtual IP Address for Cluster: Required for a multi-master cluster. The reserved IP address (or high availability floating IP address) with which users will access the cluster. See Multi-Master Architecture.
  • Physical Interface for Virtual IP Association: Required for a multi-master cluster. The network interface with which the virtual IP gets associated. Ensure that the virtual IP specified above is accessible on this network interface, and that all master nodes use the same interface name for the interface to be associated with the virtual IP (see the check after this table).

Cluster Networking Range & HTTP Proxy
  • Containers CIDR: Required. The IP range that Kubernetes uses for the pods (containers) it deploys.
  • Services CIDR: Required. The IP range that Kubernetes uses for the services it deploys.
  • HTTP Proxy: If your on-premises network uses an HTTP proxy, specify the details here.

Cluster CNI
  • Network Backend: The CNI networking backend to be used for this cluster.
  • IP in IP Encapsulation Mode (Calico): Encapsulation mode for the Calico CNI. See Calico CNI for a better understanding of the advanced parameters.
  • Interface Detection Method (Calico): Advanced networking options for the Calico CNI.
  • NAT Outgoing (Calico): NAT mode for the Calico CNI.
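As noted in the Physical Interface row above, the interface that carries the virtual IP must have the same name on every master node. A quick way to confirm this with standard Linux tooling (the interface name below is a placeholder):

    # Run on each master: list interfaces and confirm the intended one (e.g. ens160)
    # exists with the same name everywhere and sits on the virtual IP's subnet
    ip -o link show
    ip addr show <interface-name>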
  • Under the Advanced step of the wizard, enter the advanced configuration details for your cluster based on the table below.

You must have in-depth knowledge of the Kubernetes API to correctly use the Advanced API Configuration option. If the advanced APIs are configured incorrectly, the cluster may not work correctly or may become inaccessible.

Advanced API Configuration
  • Default API groups and versions: Select this option to enable on the cluster the default APIs for the Kubernetes version installed in your environment.
  • All API groups and versions: Select this option to enable on the cluster all alpha, beta, and GA versions of the Kubernetes APIs published to date.
  • Custom API groups and versions: Select this option to specify one or more API versions that you wish to enable and/or disable, then enter the API versions in the text area that follows the option. For example, to enable the Kubernetes v1 APIs, enter the expression api/v1=true; to disable the v2 APIs, enter api/v2=false. To enable and/or disable multiple versions, enter comma-separated expressions such as api/v2=false,api/v1=true (see the note after this table).
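These expressions follow the syntax of the Kubernetes API server's --runtime-config flag; assuming PMK passes them through to kube-apiserver unchanged, the comma-separated entry above corresponds to:

    # kube-apiserver equivalent (assumption: PMK forwards the expressions verbatim)
    --runtime-config=api/v2=false,api/v1=true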
  • Optionally add any metadata tags to your cluster.
  • Review the cluster details.
  • Click Complete to finalize and deploy the cluster!

You can now deploy your applications on the highly available multi-master Kubernetes cluster.
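As a quick sanity check, assuming you have downloaded the cluster's kubeconfig from the PMK UI and have kubectl installed:

    # Point kubectl at the downloaded kubeconfig (placeholder path) and confirm
    # that every master and worker has joined and the API FQDN / virtual IP resolves
    export KUBECONFIG=./<cluster-name>-kubeconfig.yaml
    kubectl get nodes -o wide
    kubectl cluster-info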

Create BareOS Cluster Using REST API

For advanced users, you can automate the process of creating a multi-master BareOS Kubernetes cluster by integrating with our Qbert API.
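A rough sketch of what such a call might look like. The endpoint path, API version, and payload field names are assumptions to verify against the Qbert API reference for your release; the token, project ID, and values shown are placeholders or example values taken from this document:

    # Hypothetical request shape only; confirm the path and field names in the Qbert API docs.
    curl -X POST "https://<pmk-management-plane-fqdn>/qbert/v4/<project-uuid>/clusters" \
      -H "X-Auth-Token: <keystone-token>" \
      -H "Content-Type: application/json" \
      -d '{
            "name": "multi-master-demo",
            "masterVipIpv4": "10.128.159.240",
            "masterVipIface": "<interface-name>",
            "containersCidr": "10.20.0.0/16",
            "servicesCidr": "10.21.0.0/16",
            "privileged": true,
            "enableMetallb": true,
            "metallbCidr": "10.128.159.250-10.128.159.253"
          }'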
