IPv6 Support
This article describes how IPv6 can be enabled for a PMK cluster. This is available as an advanced option in the cluster creation wizard.
PMK supports creating Kubernetes clusters with either IPv4 or IPv6 enabled. PMK does not currently support dual-stack installations.
Installation Environment
NAT64, DNS64
The managed service runs over IPv4, while the Kubernetes installation environment will be IPv6 or dual stack. To use the two together, it is recommended to:
- Have a NAT64 setup ready.
- Have a DNS64 setup ready.
Additionally, you also need a working DNS server that can resolve the host names. This is required for commands like kubectl logs and kubectl exec to work.
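One quick way to verify the DNS64 setup is to ask for the AAAA record of a name that only has an IPv4 address, such as ipv4only.arpa; a working DNS64 resolver synthesizes an AAAA answer from its NAT64 prefix (the resolver address below is illustrative):
dig AAAA ipv4only.arpa @fd00::53
# expect a synthesized AAAA, e.g. 64:ff9b::c000:aa when the well-known prefix is used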
HTTP/HTTPS Proxy
As an alternative, an HTTP/HTTPS proxy can be used instead of NAT64. While most Platform9 products accept HTTP/HTTPS proxy settings as configuration parameters, a few components do not have that capability and instead rely on 'system' settings for their proxy configuration.
- Ensure that you have an HTTP proxy server configured that is dual stack (both IPv4 and IPv6).
- If the proxy server URL is specified as a raw IPv6 address, the installer will error out while parsing the proxy URL. To avoid this, make sure there is an appropriate DNS entry for the proxy. Alternatively, update the /etc/hosts file on the host machine and add an entry like:
fc00:1:a:1:f816:3eff:fec5:2ca7 ipv6.proxy.com
- To ensure that all the PF9 services have the proxy setting configured, add the following entries to the /etc/profile file:
http_proxy=http://ipv6.proxy.com:3128
https_proxy=http://ipv6.proxy.com:3128
- If the pf9-hostagent is unable to download packages and you observe errors like the following:
pf9_app_cache.py ERROR - Downloading https://s3-us-west-1.amazonaws.com/package-repo.platform9.com/host-packages/pf9-kube/4.5.0-10274/pf9-kube-4.5.0-10274.x86_64.rpm failed: HTTPSConnectionPool(host='s3-us-west-1.amazonaws.com', port=443): Max retries exceeded with url: /package-repo.platform9.com/host-packages/pf9-kube/4.5.0-10274/pf9-kube-4.5.0-10274.x86_64.rpm (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f0ad3011d30>: Failed to establish a new connection: [Errno 110] Connection timed out',))
Please set http_proxy and https_proxy in the file /opt/pf9/hostagent/pf9-hostagent.env as well, then restart the host agent:
systemctl restart pf9-hostagent
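The env file takes the same key=value entries as /etc/profile (the proxy host and port below are illustrative):
# /opt/pf9/hostagent/pf9-hostagent.env
http_proxy=http://ipv6.proxy.com:3128
https_proxy=http://ipv6.proxy.com:3128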
Installation
Cluster Creation
A PMK cluster can be created using the API or the UI. At the time of writing, UI support for creating an IPv6 cluster is NOT present; it will be available in the next patch of the product.
The API can be used to create a cluster with IPv6 addresses for both the services and container CIDRs. Instead of using tools like curl, the Platform9 API can be invoked via a Python client called 'qbertclient', available on PyPI. The qbert client is supported on Python 3.
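The client can be installed with pip (assuming pip3 points at a Python 3 environment):
pip3 install qbertclient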
Here are some important parameter restrictions:
- ipv6: this is the most important parameter (valid values are 0/1 or false/true). Setting it triggers the cluster components to use IPv6 addressing for Kubernetes components like CoreDNS, kube-proxy, Canal, the API server, etc.
- networkPlugin: calico: While Platform9 supports many network plugins, only calico is supported for IPv6.
- containersCidr & servicesCidr: specify an IPv6 CIDR when setting ipv6=1. Make sure the CIDR prefix length is between /112 and /123. For example, fd00:101::/64 is an invalid value but fd00:101::/112 is acceptable.
- privileged: make sure this is set to true (or 1).
- There are other parameters you can control as well, though setting ipv6 will set default values for them. These include: calicoIPv6, calicoIPv6PoolNatOutgoing, calicoIPv6PoolBlockSize.
- Coming soon: IPv6 support for metallbCidr, masterIp, masterVipIpv6, "calicoIPv4DetectionMethod": "first-found", and "calicoIPv6DetectionMethod": "first-found".
The following example script creates a PMK cluster that uses IPv6.
from qbertclient import qbert
from qbertclient import keystone

# Management-plane endpoint and credentials
du_fqdn = "du-test-kplane-asriraman-3501.platform9.horse"
username = "whoever@test-kplane-asriraman-3501.com"
password = "some password, please substitute"
project_name = "service"

# Authenticate against Keystone and build a qbert client
keystone = keystone.Keystone(du_fqdn, username, password, project_name)
token = keystone.get_token()
project_id = keystone.get_project_id(project_name)
qb = qbert.Qbert(token, "https://{}/qbert/v3/{}".format(du_fqdn, project_id))
print(qb.list_clusters())
print(qb.list_cloud_providers())

# Find the default node pool to place the cluster in
node_pool_uuid = None
for item in qb.list_nodepools():
    print("Nodepool item")
    print(item)
    if item['name'] == 'defaultPool':
        node_pool_uuid = item['uuid']

# IPv6 CIDRs must have a prefix length between /112 and /123
container_cidr = "fd00:101::/112"
svc_cidr = "fd00:102::/112"
req_body = {
    "name": "asr-ipv6",
    "containersCidr": container_cidr,
    "servicesCidr": svc_cidr,
    "privileged": True,
    "appCatalogEnabled": False,
    "allowWorkloadsOnMaster": True,
    "nodePoolUuid": node_pool_uuid,
    "numMasters": 1,
    "isPrivate": False,
    "numWorkers": 1,
    "networkPlugin": "calico",
    "mtuSize": "1400",
    "ipv6": True
}
out = qb.create_cluster(req_body)
print(out)
print(qb.list_clusters())
Here are some other optional parameters to try if you want more control over the way Calico works:
"calicoIPv6PoolCidr": container_cidr,
"calicoIPv6": "autodetect",
"calicoIPv4": "none",
"felixIPv6Support": True,
"calicoIPv6PoolNatOutgoing": True,
"calicoIPv6PoolBlockSize": 116,
Host Side Installation
Once the cluster definition is created using the process above, add nodes to it. This can be done in either of the following two ways. (Remember to go through the proxy configuration if a proxy is needed for outgoing traffic.)
Important Prerequisite
Make sure IPv6 forwarding is turned on on all the nodes:
sudo sysctl net.ipv6.conf.all.forwarding=1
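To persist the setting across reboots, you can also write it to a sysctl drop-in file (the file name below is illustrative):
echo 'net.ipv6.conf.all.forwarding = 1' | sudo tee /etc/sysctl.d/99-ipv6-forwarding.conf
sudo sysctl --system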
1. pf9ctl
Download pf9ctl. The download link in the UI doesn't work across DNS64 due to a redirection issue with the URL shortener, so use the following URL instead:
https://raw.githubusercontent.com/platform9/express-cli/master/cli-setup.sh
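For example, the script can be fetched and run with:
curl -sLO https://raw.githubusercontent.com/platform9/express-cli/master/cli-setup.sh
bash cli-setup.sh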
Fill in the values when prompted:
- Account URL
- Username, Password
- Region Name
- Tenant Name
Then prepare the node:
pf9ctl cluster prep-node
and attach it to your cluster:
Master:
pf9ctl cluster attach-node -m localhost <cluster-name>
Worker:
pf9ctl cluster attach-node -w localhost <cluster-name>
2. Using hostagent installer
To use the hostagent installer, follow these steps.
Download the installer from Platform9
curl -O https://<account_fqdn>/clarity/platform9-install-[debian|redhat].sh
Replace <account_fqdn> with the right account FQDN and pick the OS platform (use redhat for CentOS).
Invoke the certless installer if you have no proxy setup. If you have a proxy setup, pass the proxy configuration to the installer when you run it.
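A sketch of a proxy-aware invocation, assuming the installer honors the standard proxy environment variables (the proxy URL is illustrative and the exact installer options may vary by release):
# proxy URL is illustrative; omit the exports for a no-proxy setup
export http_proxy=http://ipv6.proxy.com:3128
export https_proxy=http://ipv6.proxy.com:3128
sudo -E bash platform9-install-redhat.sh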
Now go to the UI and add this node to the pre-created cluster.
Either of the methods can be used to create the cluster; it will take a few minutes before the node(s) are ready. You can check that the cluster pods are getting IPv6 addresses:
$ ./kubectl get pods -o wide -n kube-system
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
calico-kube-controllers-79cdfb4bb5-mk8dv 1/1 Running 5 6m32s fd00:101::2001 fc00-1-a-1-f816-3eff-fe66-6373 <none> <none>
calico-node-bmw45 1/1 Running 0 6m31s fc00:1:a:1:f816:3eff:fe66:6373 fc00-1-a-1-f816-3eff-fe66-6373 <none> <none>
calico-typha-7f89fb5f8d-6vlws 0/1 Pending 0 6m32s <none> <none> <none> <none>
calico-typha-7f89fb5f8d-b5jw8 1/1 Running 0 6m32s fc00:1:a:1:f816:3eff:fe66:6373 fc00-1-a-1-f816-3eff-fe66-6373 <none> <none>
calico-typha-7f89fb5f8d-l8vls 0/1 Pending 0 6m32s <none> <none> <none> <none>
coredns-5b985c544f-ktn49 1/1 Running 0 7m52s fd00:101::2005 fc00-1-a-1-f816-3eff-fe66-6373 <none> <none>
k8s-master-fc00-1-a-1-f816-3eff-fe66-6373 3/3 Running 0 39m fc00:1:a:1:f816:3eff:fe66:6373 fc00-1-a-1-f816-3eff-fe66-6373 <none> <none>
metrics-server-v0.3.6-75cdf48d5b-t58sh 2/2 Running 0 69s fd00:101::2006 fc00-1-a-1-f816-3eff-fe66-6373 <none> <none>
Persist Hostname in /etc/hosts
Once the host installation is complete, persist the node name, as shown by the hostname command, to /etc/hosts.
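For example (the IPv6 address below is illustrative):
echo "fc00:1:a:1:f816:3eff:fe66:6373 $(hostname)" | sudo tee -a /etc/hosts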
TBD:
- Describe the use of multicast instead of broadcast for API server HA
Known Issues
Proxy configuration: proxy configuration needs to be done in two places, by setting it at the host level as well as passing the configuration to the installers.
DNS entries for all hosts are a must and must be distinct. This is a big departure from IPv4, where Platform9 would use the IPv4 address as the identifier of a host. With IPv6, hostnames are used instead, and correct DNS configuration is critical to cluster operation.
MetalLB does not work in BGP mode for IPv6. This is currently a limitation with MetalLB, and is under active development.
When using the Luigi network operator, the kernel the hosts run on needs to be 4.1+. Please refer to the how-to guide for Luigi for more information.