Platform9 3.3 release notes
Platform9 Managed Kubernetes:
1. Better Access Control
Platform9 Managed Kubernetes now has improved access control. In addition to the existing Keystone authentication, access control has been introduced in two areas:
Kubernetes RBAC: This version enables Kubernetes RBAC on all clusters. Administrators can create roles and use role bindings to grant users access to resources at the cluster or namespace level. More details can be found here.
Self-Service User: Administrators can now assign the Self-Service User role to a user. Such a user can deploy and manage their workloads on a cluster (assuming they also have the appropriate Kubernetes RBAC privileges) but cannot operate on the infrastructure resources, such as nodes, that belong to the cluster. You can read more about this here.
IMPORTANT: To ensure that existing apps and workloads continue to work post cluster upgrade, make sure to add appropriate RBAC permissions for the service accounts used by the apps.
IMPORTANT: Due to a limitation in Helm, the Platform9 Managed Kubernetes App Catalog runs under a service account that has complete access to the cluster. As a result, the workloads started through the App Catalog will not be bound by RBAC rules.
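As an illustration of the new RBAC controls, the following sketch creates a namespaced Role granting read access to Pods and binds it to a service account, which is the kind of permission an existing workload may need after the upgrade. The namespace, role name, and service account here are hypothetical; adjust them to match your applications.

```shell
# Create a Role in the (hypothetical) "myapp" namespace that can read Pods
kubectl apply -f - <<'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: myapp
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
EOF

# Bind the Role to the "default" service account in that namespace
kubectl create rolebinding pod-reader-binding \
  --role=pod-reader \
  --serviceaccount=myapp:default \
  --namespace=myapp
```

The same pattern with ClusterRole and ClusterRoleBinding grants access at the cluster level rather than per namespace.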
2. Kubernetes version upgrade to 1.8.4
In this release, Platform9 Managed Kubernetes has upgraded the Kubernetes version from 1.7.7 to 1.8.4. You can find more info on this version, along with its various features, in the blogs for the Kubernetes 1.8 release and Kubernetes 1.8 features. Clusters may be upgraded to this Kubernetes version by using the “Upgrade Cluster” button in the Clusters view of the Infrastructure page of the Platform9 Clarity UI. We highly recommend that users upgrade their clusters at their earliest convenience, and within 15 days of the release of new Platform9 Managed Kubernetes versions. Users may need to obtain a compatible kubectl for this version if their existing kubectl is not compatible with Kubernetes 1.8.4. See Install and Set Up kubectl for more information.
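A quick way to check whether your existing kubectl is compatible (assuming kubectl is already on your PATH and configured for the cluster):

```shell
# Print client and server versions; kubectl is generally supported
# within one minor version of the cluster's API server, so a 1.7 or
# 1.8 client works against a 1.8.4 server.
kubectl version --short
```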
IMPORTANT: With this version, the Kubernetes nodes are expected not to have any swap configured. This is according to the recommended deployment guidelines from the Kubernetes changelog. If you have swap enabled on your nodes, refer to this support article on how to disable swap on such nodes. Note that the AWS auto-deployed nodes have swap disabled by default.
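The support article covers the details; in outline, disabling swap on a Linux node typically looks like the following. Run as root; the sed expression is illustrative and assumes a conventional /etc/fstab layout.

```shell
# Turn off all swap devices immediately
swapoff -a

# Comment out swap entries so swap stays disabled after a reboot
# (writes a backup of the original file to /etc/fstab.bak)
sed -i.bak '/\sswap\s/s/^/#/' /etc/fstab
```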
3. Update to the AWS IAM policy
In this release, the Kubernetes AWS provider requires updates to the AWS capabilities granted to the AWS cloud provider’s account. Ensure this requirement is satisfied by the IAM policies for this user. You can find the latest IAM policy file in the How To Create a new Amazon AWS Cloud Provider for Managed Kubernetes support article.
4. Remember to clean up LoadBalancer-type Services before cluster teardown
Many users have asked us for guidelines on how to cleanly delete clusters that have active Kubernetes Services running on them. It is highly recommended to use the cluster’s Kubernetes API to delete all LoadBalancer-type Services prior to deleting the cluster itself. When these Services are not deleted using the Kubernetes API, their AWS resources (LoadBalancers and SecurityGroups) can prevent Platform9 from cleanly removing other AWS resources. Any AWS resources that Platform9 cannot remove for this reason will need to be removed by the cluster administrator.
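One way to find and remove the LoadBalancer-type Services before teardown (the jsonpath expression and Service name below are illustrative):

```shell
# List every Service of type LoadBalancer across all namespaces,
# one "namespace/name" pair per line
kubectl get svc --all-namespaces \
  -o jsonpath='{range .items[?(@.spec.type=="LoadBalancer")]}{.metadata.namespace}{"/"}{.metadata.name}{"\n"}{end}'

# Delete each one through the Kubernetes API, for example:
kubectl delete svc my-service --namespace my-namespace
```

Deleting the Service this way lets Kubernetes tear down the associated AWS LoadBalancer and SecurityGroup before the cluster itself is deleted.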
5. Bug fixes and product improvements
This release also contains a number of performance optimizations and bug fixes that should result in a better user experience for your Platform9 cloud platform! Some significant ones are listed below:
- In certain cases, deleting AWS-backed clusters could fail with an error about a missing gateway resource.
- Kubernetes nodes would accumulate many exited containers that were never garbage collected.
- The UI would get stuck loading if any cluster in the deployment was not responsive to Kubernetes API calls.
- An AWS private network on which an existing cluster is deployed could not be used to deploy another Kubernetes cluster.
- There are limitations when using AWS Route 53 private hosted zones with your AWS clusters:
  - Private hosted zones are supported only when deploying into an existing VPC that has been associated with the hosted zone. Before using a private hosted zone, create a VPC and associate it with the hosted zone.
  - Because the hosted zone is private, the API and Service FQDNs can only be resolved from within the associated VPC.
- With the move to CNI for networking, note that CNI does not support hostPort. Application deployments should use NodePort and other Service types in lieu of hostPort unless absolutely necessary (applications that must expose host ports need to use hostNetwork). For more information see Kubernetes best practices.
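For example, a workload that would previously have exposed a hostPort can instead be fronted by a NodePort Service. The names and port numbers here are hypothetical:

```shell
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: web-nodeport
spec:
  type: NodePort
  selector:
    app: web          # must match the Pod labels of your Deployment
  ports:
  - port: 80          # cluster-internal port for the Service
    targetPort: 8080  # port the container listens on
    nodePort: 30080   # exposed on every node; default range is 30000-32767
EOF
```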
Platform9 Managed OpenStack:
This release focuses on bug fixes, performance and stability improvements.
- Fixed Murano issue which allowed non-admin users to mark applications as public.
- Simplified the first-time login experience for new users. Earlier, the process took two steps, requiring users to enter the one-time passcode from the first email and then reset their password. Now, they can set the desired password by simply clicking the link in the activation email.
- Fixed an issue with pf9-ostackhost service startup that rendered some Nova instances running on NFS storage inaccessible after a host reboot.
- Fixed issues that caused RBAC on VMware to not behave as expected.
- Fixed behavior of instance snapshots on vSphere. All snapshots are now private by default.
- Hypervisor role installation now makes sure that libvirt is started on system startup if it was installed as a dependency when activating the role.