Kubernetes provides a lot of freedom in terms of workload deployment across development, test, and production environments. It also gives you the flexibility to deploy across data centers, cloud providers (public and private), or colocation services without changing your software. If you’re already running on Kubernetes, you may later decide you need better support or a different vendor-supported platform to expand your business. How do you migrate? How does this impact the freedom of migrating workloads across these environments on-the-fly?
Running Kubernetes on-premises means managing every complexity yourself, as opposed to running Kubernetes on AWS or Azure, where the public cloud provider essentially hides those complexities from you.
Migration may include moving from one public cloud vendor to another; from a private data center to the cloud or vice-versa; from a data center or cloud to a colocation facility; or across private data centers. It could be a wholesale, one-time migration of your application to a new environment or a dynamic and ongoing migration between environments. Regardless of target, strategy, or reason, migration requires careful consideration, and you'll benefit from third-party tools and managed platforms. There are many considerations in terms of data, differences in connectivity, cloud vendors, platform or bare-metal services, and so on.
Reasons to Migrate Workloads
First, let’s explore why you would want to migrate workloads. As usual, the reasons will vary by use case and across products and companies, but common drivers include reducing costs, the need for better support or a vendor-supported platform as your business expands, and gaining more control over where and how workloads are deployed.
Now that we know why, let’s look at how to migrate workloads effectively.
Making intelligent choices when it comes to software is important, and marks the difference between a good manager and an excellent one. This requires education, comparison, and collaboration, which often includes people from within your organization as well as partners and vendors. For workload migration, the considerations are very specific. Let’s explore some now.
Where to Migrate?
Enterprises may shift workloads across public clouds, or to on-premises infrastructure, bare-metal providers, and colocation services to save costs or control deployment in some way. You can realize cost savings or feature benefits, but there are tradeoffs. Colocation services such as Equinix help reduce cloud costs, and they offer other advantages when it comes to energy, cooling, and bandwidth, since it’s more affordable to scale within a hosted data center than on-premises. In short, you benefit from the colocation provider’s scale in these areas. However, you still need to purchase hardware and then stitch together hosts and storage yourself.
Bare-metal providers such as Packet.net remove the need to buy hardware, handle hardware refreshes, or maintain the OS, much like the cloud, but without the add-on platform services. Each of these options provides its own value, and the right choice depends on your application, organizational policies, and IT skill sets.
Cloud vendors need to cater to a large volume of customers with a wide range of use cases across a large geographic span. As a result, they’ve perfected their network operations to ensure maximum uptime, security, and flexibility. This often means they’ve considered edge networking to bring both services and data close to their consumers; they’ve implemented border gateway protocol (BGP) solutions to optimize routing and delivery via automation to reduce overhead; they’ve ensured security for all publicly-facing resources, and have redundancy up and down the network stack to guarantee uptime. Before migrating workloads to a private data center or colocation provider, it’s important to determine how your staff will manage all of these network needs.
Kubernetes helps here by abstracting network addressing, and routing service traffic based on your cluster topology. This means you need to consider cluster-based complexity when you migrate workloads, as it can affect service routing. Kubernetes also helps when it comes to service discovery across varying network implementations.
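As a minimal sketch of this abstraction (the names below are hypothetical), a Service gives consumers a stable, cluster-internal DNS name that stays the same no matter which environment the cluster runs in:

```yaml
# Hypothetical Service: clients resolve a stable DNS name
# (orders.shop.svc.cluster.local) whether the cluster runs
# on-premises, in a colocation facility, or in a public cloud.
apiVersion: v1
kind: Service
metadata:
  name: orders
  namespace: shop
spec:
  selector:
    app: orders        # routes to pods carrying this label
  ports:
    - port: 80         # port clients connect to
      targetPort: 8080 # port the pods listen on
```

Because consumers address the Service name rather than pod or host IPs, the underlying network topology can change during a migration without application changes.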
Data and Database Considerations
Moving data can be costly, time-consuming, and risky, and it can raise security issues. Bandwidth isn’t cheap, and moving large volumes of data will consume it quickly. There are a lot of variables that factor into the cost of data migration, which is why Andrew S. Tanenbaum once said, “Never underestimate the bandwidth of a station wagon full of tapes hurtling down the highway.”
Hosting data in the cloud, however, subjects it to unknowns: which personnel can access it, and how secure the systems are that it’s stored in or travels through. Operational costs can also grow, as cloud database pricing is based not only on usage but also on data size, management, backups, and transfers driven by user activity. Backing up or restoring via the cloud can also affect availability, as it can be more latent and costly than backups resident at the data’s location.
Whether you’re spanning cloud vendors, taking a hybrid approach to workload hosting, hosting in-house within a data center, or using colocation services, a complex mix of resources demands a more complex deployment approach, additional configuration and its ongoing management, and tighter version control.
A mixed application topology also affects infrastructure provisioning for both in-house users – such as developers and QA staff – as well as customers whose applications and data rely on internally-shared infrastructure.
Ease of Deployment
One area of complexity worth calling out is deployment. Variations in environments (spanning private and public infrastructure, for instance) and their controlling technologies (e.g., security, gateways, and network differences) need to be accounted for. Some of these environments are out of your control, adding another layer of complexity to the mix.
Clouds have sites and regions across which data is copied for ease of access and recovery. Migrating workloads on-premises likely means reconsidering your disaster recovery plans. If data backups are no longer in the cloud, replicated offsite, or easily and quickly restorable, disaster recovery may not be feasible. Be sure to consider how your migration plans affect the effectiveness, cost, and risk of disaster recovery, and keep data in multiple locations (i.e., offsite) to minimize the risk of loss.
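Tooling can help keep backups outside the environment you’re leaving. As one hedged example, an open-source tool such as Velero (assuming it’s installed and configured with an offsite object store; the namespace and backup names below are hypothetical) can back up cluster resources and volumes, then restore them into a target cluster:

```shell
# Back up one namespace's resources and volume snapshots
# to the configured offsite object storage location
velero backup create pre-migration-backup \
  --include-namespaces shop \
  --snapshot-volumes

# After migrating (or after a disaster), restore into the
# cluster the velero CLI is currently pointed at
velero restore create --from-backup pre-migration-backup
```

This pattern keeps a recoverable copy of application state in a third location during the migration window, independent of both the source and target environments.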
How to Ensure a Smooth Migration
Workload migration challenges often include the need to drain user sessions and running workloads from live, available Kubernetes nodes, and then remove those nodes from the cluster without spawning new instances on them. This manual overhead introduces the risk of human error, potentially impacting users and risking their data. There are solutions available that automate the migration of live Kubernetes clusters and their running workloads.
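At its simplest, the manual drain-and-remove workflow looks like the following (the node name is hypothetical); automation tooling essentially orchestrates these same steps safely and at scale:

```shell
# Mark the node unschedulable so no new pods land on it
kubectl cordon worker-node-1

# Evict running pods, respecting PodDisruptionBudgets so replicas
# are rescheduled onto remaining nodes before the originals terminate
kubectl drain worker-node-1 --ignore-daemonsets --delete-emptydir-data

# Once the node is empty, remove it from the cluster
kubectl delete node worker-node-1
```

Repeating this node by node lets the scheduler shift workloads gradually, which is what makes live migration possible without a full outage.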
Another approach to ensure easy migration is to minimize dependencies between environments, and between pods and resources (i.e. file systems and databases) within a Kubernetes cluster and across environments. A managed Kubernetes environment helps you iterate more quickly and reliably through automation, while providing the management and monitoring needed to control your environments while reducing cost and risk.
It’s also a good idea to use a load balancer, including software-based solutions such as NGINX, to dynamically migrate workloads according to load and other factors. You can also use traffic management software to configure and control the amount of traffic routed to each deployed Kubernetes cluster. For a more advanced solution, consider a service mesh implementation, where varieties of telemetry data are used to control and dynamically change how requests are routed and workloads are distributed across your deployments.
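As a sketch of weighted traffic control, assuming the NGINX Ingress Controller fronts both the old and new deployments (the host and service names below are hypothetical), canary annotations can shift a fraction of requests toward the migrated workload and be raised as confidence grows:

```yaml
# Hypothetical canary Ingress: routes ~20% of traffic for
# shop.example.com to the service backing the new deployment,
# leaving the remaining 80% on the existing one.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: shop-canary
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "20"
spec:
  ingressClassName: nginx
  rules:
    - host: shop.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: orders-new  # service for the migrated workload
                port:
                  number: 80
```

Incrementing `canary-weight` toward 100 completes the cutover gradually, and setting it back to 0 is an immediate rollback path.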
Operationalize your IT Management
No matter where you’re migrating to, aim to operationalize the deployment of software and software-defined infrastructure, networking, configuration management, and so on, without relying on manual human effort. High levels of automation are helpful regardless of the target workload environment. Consider an implementation or managed service that delivers Kubernetes and other open-source tools as a service, as opposed to something you must build expertise in and manage yourself.
Manage by Workload Type – The Hybrid Cloud Approach
Workloads should also be classified by how critical they are in order to arrive at the right mix of on-premises and public cloud. Consider a hybrid model if your applications and services contain a mixture of mission-critical, business-critical, and non-critical workloads. Mission-critical workloads are those essential to running your business or your users’ businesses, and therefore cannot afford any downtime. Deploying these workloads on-premises lets you control their implementation, deployment, monitoring, security, and IT response when an incident occurs. Bear in mind, however, that your team must be able to respond to application and system events 24×7.
For business-critical and other workloads, some downtime may be acceptable if kept within clearly-defined boundaries. Using public cloud providers and relying on their SLAs for these workloads can reduce operational expenditure without much risk. The end result is a hybrid cloud: a mixed deployment that maximizes your resources while reducing cost, risk, and downtime.
Operational costs can become your largest expense when running your own data center. To manage their own costs, many public cloud vendors use commodity servers with virtualization and containerization underlying their offerings. This allows them to quickly and easily spin instances up and down, move running services across physical infrastructure, and redefine topologies on-the-fly via a software-defined data center approach.
You should plan to mimic this approach within your own data center or private cloud implementation to remain cost-effective and see the same benefits. There are platforms and tools available to help you leverage “free” Linux instances running on affordable commodity servers, with minimal expertise and little-to-no manual labor.
To make a strategic choice between public, on-premises, and hybrid cloud migrations, weigh factors such as security and control, agility and flexibility, application complexity management, and operating cost reduction. In practice, vendors have observed that many enterprises converge on a hybrid cloud implementation regardless of their migration path. Wherever you’re coming from or going to, the intelligent design of workload distribution in a hybrid cloud should account for both the achievable cost savings and the type of workload. Choosing the right platform and vendor, such as a managed Kubernetes platform, is critical to your migration success.