Red Hat OpenShift vs. VMware Tanzu Kubernetes Grid

Provisioning of Kubernetes Clusters

Red Hat OpenShift:
- Fully automated provisioning of clusters

VMware Tanzu Kubernetes Grid:
- Fully automated provisioning of clusters (a declarative Cluster API sketch follows)
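
For a flavor of what declarative provisioning looks like, below is a minimal sketch of a Cluster API manifest of the kind Tanzu Kubernetes Grid builds on; all names, namespaces, and CIDRs here are hypothetical.

```yaml
# Hypothetical Cluster API cluster definition (TKG generates equivalents internally).
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: demo-cluster          # hypothetical cluster name
  namespace: default
spec:
  clusterNetwork:
    pods:
      cidrBlocks: ["192.168.0.0/16"]
  controlPlaneRef:
    apiVersion: controlplane.cluster.x-k8s.io/v1beta1
    kind: KubeadmControlPlane
    name: demo-cluster-control-plane
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
    kind: VSphereCluster
    name: demo-cluster
```
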
High Availability and Healing

Red Hat OpenShift:
- The default HAProxy load balancer can be used to create a multi-master, multi-etcd cluster environment, with etcd nodes either forming their own cluster or deployed on the same nodes as the masters

VMware Tanzu Kubernetes Grid:
- Uses the Kubernetes Cluster API to detect and correct failed nodes (see the sketch after this list)
- Supports multiple masters and automated failover between masters
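
For illustration, Cluster API expresses node healing through a MachineHealthCheck resource; the sketch below is a generic example rather than TKG-specific output, and the cluster and worker-pool names are hypothetical.

```yaml
# Hypothetical MachineHealthCheck: remediate workers that stay NotReady for 5 minutes.
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineHealthCheck
metadata:
  name: demo-cluster-unhealthy-nodes
  namespace: default
spec:
  clusterName: demo-cluster
  selector:
    matchLabels:
      cluster.x-k8s.io/deployment-name: demo-cluster-md-0   # hypothetical worker pool
  unhealthyConditions:
    - type: Ready
      status: "False"
      timeout: 300s
    - type: Ready
      status: Unknown
      timeout: 300s
```
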
Deployment Model(s) Supported

Red Hat OpenShift:
- Public cloud (OpenShift Online)
- SaaS-managed (OpenShift Dedicated)
- Hybrid cloud (OpenShift Container Platform)

VMware Tanzu Kubernetes Grid:
- Can be deployed on-premises or in all the major public clouds
- Supports clusters running on public clouds, vSphere, and certain bare-metal infrastructure
- Control plane can be hosted on-premises or in the public cloud
Breadth of Operating Systems Supported

Red Hat OpenShift:
- Works only with Red Hat Enterprise Linux (a RHEL subscription is required and bundled into OpenShift)

VMware Tanzu Kubernetes Grid:
- Supports all popular enterprise Linux distributions, including Red Hat Enterprise Linux, CentOS, Ubuntu, Amazon Linux, and Photon OS
Monitoring and Operations Management

Red Hat OpenShift:
- Command-line diagnostic tools provide health statistics
- Prometheus and Grafana for environment health monitoring and visualization (a ServiceMonitor sketch follows)
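
As a small illustration of the Prometheus-based approach, a ServiceMonitor resource (from the Prometheus Operator underlying OpenShift's monitoring stack) declares which services to scrape; the application name, namespace, and port below are hypothetical.

```yaml
# Hypothetical ServiceMonitor: scrape metrics from a demo application's Service.
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: demo-app
  namespace: demo
spec:
  selector:
    matchLabels:
      app: demo-app          # hypothetical label on the target Service
  endpoints:
    - port: metrics          # named Service port exposing /metrics
      interval: 30s
```
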

VMware Tanzu Kubernetes Grid:
- No built-in monitoring integrations, but compatible with Prometheus, Grafana, and other third-party monitoring tools
- Traditional support ticketing process for issues
Cluster Upgrades

Red Hat OpenShift:
- Can be automated with Ansible playbooks, or performed manually

VMware Tanzu Kubernetes Grid:
- Uses the Kubernetes Cluster API to automate upgrades (illustrated below)
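
With Cluster API, an upgrade is typically requested declaratively: bumping the Kubernetes version on the control-plane object causes the controllers to roll nodes to the new version. A hedged, abbreviated sketch (names and versions are hypothetical):

```yaml
# Hypothetical excerpt: raising spec.version requests a rolling control-plane upgrade.
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: KubeadmControlPlane
metadata:
  name: demo-cluster-control-plane
  namespace: default
spec:
  replicas: 3
  version: v1.27.3            # bump this field to trigger the upgrade
  machineTemplate:
    infrastructureRef:
      apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
      kind: VSphereMachineTemplate
      name: demo-cluster-control-plane
```
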
Multi-cluster Management

Red Hat OpenShift:
- A typical deployment creates a single Kubernetes cluster designed to scale up to 2,000 nodes and 120,000 pods
- All users of that deployment are expected to share the single cluster, achieving isolation through a combination of Kubernetes namespaces and OpenShift's multi-tenancy features (a sketch of namespace-based isolation follows this list)
- Starting with OpenShift 4, multiple clusters can be managed through Red Hat's hybrid cloud console
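
To make the namespace-based isolation concrete, here is a minimal sketch in which each tenant team receives its own namespace with a ResourceQuota capping consumption; the team name and limits are hypothetical.

```yaml
# Hypothetical tenant namespace plus a quota limiting its footprint.
apiVersion: v1
kind: Namespace
metadata:
  name: team-a
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a
spec:
  hard:
    pods: "50"
    requests.cpu: "20"
    requests.memory: 64Gi
```
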

VMware Tanzu Kubernetes Grid:
- Supports multi-cluster management and configuration
- Clusters can span a range of on-premises or multi-cloud infrastructure
Multi-tenancy, Role-based Access Control, and Single Sign-on Support

Red Hat OpenShift:
- Delivers multi-tenancy through projects, OpenShift's extension of Kubernetes namespaces
- Kubernetes RBAC is used to define granular access policies for users (see the sketch after this list)
- There is no cross-cluster multi-tenancy
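
As a brief illustration of namespace-scoped RBAC, the sketch below grants a hypothetical user read-only access to pods within a single project:

```yaml
# Hypothetical Role/RoleBinding: read-only pod access inside one namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: team-a
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: team-a
subjects:
  - kind: User
    name: jane                # hypothetical user
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```
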

VMware Tanzu Kubernetes Grid:
- Extends Kubernetes RBAC with additional roles
- Users and groups can be managed through VMware Cloud Services
- Single sign-on is not available by default but can be set up using a plugin
Private Registry Support and Image Management

Red Hat OpenShift:
- Relies primarily on the built-in OpenShift registry. Can be used with third-party registries such as Docker Hub, but images must be imported manually on the command line (a declarative sketch follows)
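
The manual import can also be captured declaratively. Here is a hedged sketch of an OpenShift ImageStream that tracks an external Docker Hub image, roughly what an `oc import-image` invocation produces; the namespace is hypothetical.

```yaml
# Hypothetical ImageStream mirroring an upstream Docker Hub image.
apiVersion: image.openshift.io/v1
kind: ImageStream
metadata:
  name: nginx
  namespace: demo             # hypothetical namespace
spec:
  tags:
    - name: latest
      from:
        kind: DockerImage
        name: docker.io/library/nginx:latest
      importPolicy:
        scheduled: true       # periodically re-import the upstream tag
```
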

VMware Tanzu Kubernetes Grid:
- No built-in private registry. Primarily designed for integration with private registries through VMware Harbor
- Non-VMware registries are also supported
Hybrid Cloud Integrations and APIs

Red Hat OpenShift:
- OpenShift Container Platform supports deployment in hybrid and multi-cloud environments; however, all infrastructure must be provisioned with RHEL

VMware Tanzu Kubernetes Grid:
- Supports hybrid infrastructure built using a range of public clouds, private data centers, and operating systems
- Strong integration with other VMware products and APIs
- Heat templates available for deploying Tanzu on top of VMware Integrated OpenStack
User Interface and Experience

Red Hat OpenShift:
- Provides a native UI that enables management of your Kubernetes resources and the catalog

VMware Tanzu Kubernetes Grid:
- Provides a native UI (Mission Control) that offers management and monitoring
Support for automated application deployments

Red Hat OpenShift:
- Application lifecycles can be managed through either OpenShift Ansible Broker or application templates (the latter support Rails, Django, Node.js, CakePHP, and Dancer)

VMware Tanzu Kubernetes Grid:
- Built-in application catalog that is populated with public Helm chart applications
- Compatible with the Open Service Broker API for deploying services (see the sketch below)
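
For a sense of how Open Service Broker provisioning looks from the Kubernetes side, the Kubernetes Service Catalog models it as a ServiceInstance; the class and plan names below are hypothetical.

```yaml
# Hypothetical ServiceInstance: ask a broker to provision a database plan.
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceInstance
metadata:
  name: demo-db
  namespace: demo
spec:
  clusterServiceClassExternalName: example-postgresql   # hypothetical class
  clusterServicePlanExternalName: small                 # hypothetical plan
```
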
Production Grade Service Level Agreement

Red Hat OpenShift:
- 99.5% uptime for fully managed clusters (OpenShift Dedicated)
- Troubleshooting is handled via support tickets
- Customers drive manual upgrades and any issues require support team engagement

VMware Tanzu Kubernetes Grid:
- Provides a traditional enterprise-class support model; guaranteed response times depend on incident severity (determined by the customer) and support plan tier
- Troubleshooting is handled via support tickets
- Customers drive manual upgrades and any issues require support team engagement
Ease of Setup, Installation, Continuous Use, Management, and Maintenance

Red Hat OpenShift:
- Installing and configuring OpenShift is a manual, Ansible-based process
- Several Ansible playbooks must be run during the installation (a hypothetical inventory sketch follows)
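
To give a flavor of the Ansible-driven install, here is a hedged sketch of an inventory in Ansible's YAML format with the host groups the openshift-ansible playbooks expect; all hostnames and variable values are hypothetical.

```yaml
# Hypothetical inventory for an openshift-ansible installation (YAML inventory format).
all:
  children:
    masters:
      hosts:
        master1.example.com:
    etcd:
      hosts:
        master1.example.com:
    nodes:
      hosts:
        master1.example.com:
        node1.example.com:
        node2.example.com:
  vars:
    openshift_deployment_type: openshift-enterprise   # hypothetical variable value
```
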

VMware Tanzu Kubernetes Grid:
- Requires setup of multiple tools; the setup and configuration process is manual
Networking Support and Integrations

Red Hat OpenShift:
- OpenShift provides CNI support and can integrate with any CNI-based SDN
- By default, OpenShift SDN is deployed, which configures an overlay network using Open vSwitch (OVS); see the sketch after this list
- Out-of-the-box third-party CNI plugins supported: Flannel, Nuage, and Kuryr
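
On OpenShift 4 the default SDN choice is expressed through the cluster-scoped Network operator configuration; a minimal sketch (the CIDRs are hypothetical):

```yaml
# Hypothetical cluster network configuration selecting the default OpenShift SDN.
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  clusterNetwork:
    - cidr: 10.128.0.0/14
      hostPrefix: 23
  serviceNetwork:
    - 172.30.0.0/16
  defaultNetwork:
    type: OpenShiftSDN
```
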

VMware Tanzu Kubernetes Grid:
- Designed for use with Antrea, a CNI-compatible plugin based on Open vSwitch
Storage Support and Integrations

Red Hat OpenShift:
- Supports integration with network-based persistent storage using the Kubernetes persistent volume framework (a PV/PVC sketch follows this list)
- Supports a wide variety of persistent storage endpoints, such as NFS, GlusterFS, OpenStack Cinder, FlexVolume, and VMware vSphere
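
A minimal sketch of the persistent volume framework with an NFS backend; the server address, path, sizes, and names are hypothetical.

```yaml
# Hypothetical NFS-backed PersistentVolume and a claim that binds to it.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: demo-nfs-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: nfs.example.com   # hypothetical NFS server
    path: /exports/demo
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-nfs-claim
  namespace: demo
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
```
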

VMware Tanzu Kubernetes Grid:
- "Opinionated" storage solution through integration with vSphere via Project Pacific (see the StorageClass sketch below)
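
For illustration, vSphere-backed storage is typically consumed through a StorageClass pointing at the vSphere CSI driver; the class and storage policy names below are hypothetical.

```yaml
# Hypothetical StorageClass using the vSphere CSI provisioner.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: vsphere-fast          # hypothetical class name
provisioner: csi.vsphere.vmware.com
parameters:
  storagepolicyname: "gold"   # hypothetical vSphere storage policy
```
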
Self-Service Provisioning

Red Hat OpenShift:
- Provides a self-service UI (the OpenShift Web Console), separate from the default Kubernetes dashboard, that enables self-service for developers and administrators

VMware Tanzu Kubernetes Grid:
- Provides a self-service UI (Mission Control) for managing workloads and policies across clusters
Support for CI/CD integrations

Red Hat OpenShift:
- Pipelines and build strategies simplify the creation and automation of dev/test and production pipelines
- Ships out of the box with a Jenkins build strategy and client plugin for creating Jenkins pipelines; however, creating and configuring production pipelines remains manual and time-consuming (see the BuildConfig sketch after this list)
- The pipeline build configuration creates a Jenkins master pod (if one does not already exist) and then automatically creates agent (slave) pods to scale jobs, assigning different pods to jobs with different runtimes
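
To illustrate the Jenkins build strategy, here is a hedged sketch of an OpenShift BuildConfig with an inline Jenkinsfile; the pipeline name, namespace, and stages are hypothetical. Applying it and starting a build causes OpenShift to spin up the Jenkins master pod described above and run the pipeline there.

```yaml
# Hypothetical BuildConfig using OpenShift's JenkinsPipeline build strategy.
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: demo-pipeline         # hypothetical name
  namespace: demo
spec:
  strategy:
    type: JenkinsPipeline
    jenkinsPipelineStrategy:
      jenkinsfile: |-
        pipeline {
          agent any
          stages {
            stage('Build') {
              steps { sh 'echo build step here' }    // hypothetical step
            }
            stage('Deploy') {
              steps { sh 'echo deploy step here' }
            }
          }
        }
```
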

VMware Tanzu Kubernetes Grid:
- Designed especially for integration with VMware Concourse CI/CD
- Also compatible with most major third-party CI/CD toolchains (Jenkins, GitLab, etc.)