5 Challenges to Going Cloud-Native – and How to Solve Them

We’re living in a cloud-native world. You can barely read a tech blog or go to a conference without hearing about all of the benefits of cloud-native technologies or architectures, such as containers, microservices and serverless functions.

Yet amidst all of the excitement about going cloud-native, it can be easy to overlook the challenges that arise when you migrate from legacy, monolithic applications to a cloud-native architecture. These challenges can be overcome, but only if you address them as part of your cloud-native migration strategy.

To that end, let’s take a look at five of the most common cloud-native challenges, along with strategies for overcoming them.

What is cloud-native?

First, though, a word about what cloud-native actually means.

With the hype around all things ‘cloud’, ‘cloud-native’ is sometimes used to mean any type of technology or strategy that its users consider modern. From that perspective, cloud-native ends up as just another relatively meaningless buzzword.

On the other hand, when invested with specific and limited meaning, cloud-native is a useful term and concept. We like the CNCF’s definition, which emphasizes “loosely coupled systems” and resiliency as characteristics of cloud-native computing. The CNCF definition also points to a specific and limited list of technologies and architectures — “containers, service meshes, microservices, immutable infrastructure, and declarative APIs” — as examples of cloud-native technologies.

For the purposes of this article, we’ll be sticking to the CNCF’s definition of cloud-native. Now, let’s discuss the specific challenges that arise when you use technologies and strategies like those described above.

Challenges to Adopting Cloud Native

1. Persistent data storage

One common challenge with many cloud-native technologies is storing data persistently. Containers, serverless functions and applications deployed using an immutable infrastructure model do not typically have a way to store data permanently inside themselves because all internal data is destroyed when the application shuts down.

Solving this challenge requires rethinking approaches to data storage by decoupling it from applications and host environments. Instead of storing data within the application environment, cloud-native workflows store it externally and expose the data as a service. Workloads that need to access the data then connect to it just as they would connect to any other service.

This approach — which is enabled by various tools, like volumes in Kubernetes — has two benefits. In addition to enabling persistent data storage for applications that are not themselves designed to be persistent, it also makes it easy to share a single storage pool among multiple applications or services.
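As a minimal sketch of what this decoupling can look like in application code, the snippet below persists state to an external object store rather than to the container's local filesystem. The endpoint, bucket name, and environment variable names are hypothetical; in a Kubernetes cluster they would typically be injected via configuration rather than hard-coded.

```python
# Sketch: persist state to an external object store instead of local disk.
# Endpoint, bucket, and variable names are assumptions; the point is that the
# container itself stays disposable while the data lives outside it.
import json
import os

import boto3  # any S3-compatible store works; boto3 is one common client

s3 = boto3.client(
    "s3",
    endpoint_url=os.environ["OBJECT_STORE_ENDPOINT"],  # e.g. an in-cluster MinIO service
)
BUCKET = os.environ.get("STATE_BUCKET", "app-state")  # hypothetical bucket name


def save_state(key: str, state: dict) -> None:
    """Write state to the shared store; it survives container restarts."""
    s3.put_object(Bucket=BUCKET, Key=key, Body=json.dumps(state).encode())


def load_state(key: str) -> dict:
    """Read state back; any replica of the service can call this."""
    obj = s3.get_object(Bucket=BUCKET, Key=key)
    return json.loads(obj["Body"].read())
```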

2. Service integration

Cloud-native applications are typically composed of a set of disparate services. This distributed nature is what helps make them scalable and flexible, as compared to monoliths.

But it also means that cloud-native workloads have many more moving pieces, all of which need to be connected seamlessly for the workload to succeed.

In part, service integration is an issue for developers to address as they build cloud-native apps. They must ensure that each service is properly sized; a best practice is to create a distinct service for each type of functionality within a workload, rather than trying to make a single service do multiple things. It’s also important to avoid adding services just because you can. Before you introduce more complexity to your app in the form of another service, make sure that the service advances a particular goal.

Beyond the architecture of the application itself, effective service integration also depends on choosing the right deployment techniques. Containers are probably the most obvious way to deploy multiple services and unify them into a single workload, but in some cases, serverless functions, or non-containerized apps connected by APIs, might be better methods of service deployment.
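To make the integration concrete, here is a hedged sketch of one service calling another over a plain HTTP API. The service name inventory-service and its endpoint are hypothetical; in Kubernetes the name would resolve through cluster DNS, while in a non-containerized deployment it would simply be another API endpoint.

```python
# Sketch: one microservice calling another over HTTP.
# "inventory-service" is a hypothetical service name resolved by the platform
# (e.g. Kubernetes cluster DNS). The timeout and retry handling acknowledge
# that every additional service is another network hop that can fail.
import time

import requests

INVENTORY_URL = "http://inventory-service:8080/items"  # hypothetical endpoint


def get_item(item_id: str, retries: int = 3) -> dict:
    last_error = None
    for attempt in range(retries):
        try:
            resp = requests.get(f"{INVENTORY_URL}/{item_id}", timeout=2)
            resp.raise_for_status()
            return resp.json()
        except requests.RequestException as err:
            last_error = err
            time.sleep(0.5 * (attempt + 1))  # simple backoff between attempts
    raise RuntimeError(f"inventory-service unavailable: {last_error}")
```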

3. Management and monitoring

The more services you have running as part of an application, the more difficult it becomes to monitor and manage them. This is true not just because of the sheer number of services you have to track, but also because guaranteeing application health requires monitoring the relationships between services, not just the services themselves.

Successfully monitoring and managing services in a cloud-native environment, then, requires an approach that can predict how a failure in one service will impact others, as well as understand which failures are most critical. Dynamic baselining, which replaces static thresholds with ones that continually reassess application environments in order to determine what is normal and what is an anomaly, is also critical.
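Dynamic baselining can be approximated very simply: instead of comparing a metric against a fixed threshold, keep a rolling window of recent measurements and flag values that deviate far from the window's own statistics. The window size and deviation multiplier below are illustrative assumptions, not recommendations.

```python
# Sketch: dynamic baselining for a single metric.
# Rather than a static threshold, the baseline is recomputed from a rolling
# window of recent observations, so "normal" shifts as the environment shifts.
from collections import deque
from statistics import mean, stdev


class DynamicBaseline:
    def __init__(self, window: int = 200, tolerance: float = 3.0):
        self.samples = deque(maxlen=window)   # recent observations only
        self.tolerance = tolerance            # how many std-devs counts as anomalous

    def is_anomaly(self, value: float) -> bool:
        anomalous = False
        if len(self.samples) >= 30:  # wait for a minimal baseline before judging
            mu, sigma = mean(self.samples), stdev(self.samples)
            anomalous = abs(value - mu) > self.tolerance * max(sigma, 1e-9)
        self.samples.append(value)
        return anomalous


# Example: feed per-request latencies and report outliers against the baseline.
baseline = DynamicBaseline()
observations = [12 + (i % 5) for i in range(50)] + [350]  # steady latencies, then a spike
for latency_ms in observations:
    if baseline.is_anomaly(latency_ms):
        print(f"anomalous latency: {latency_ms} ms")
```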

4. Avoiding cloud lock-in

Lock-in risks are not unique to the cloud; they can arise from almost any type of technology, and they have been a threat to agility for decades. In the case of cloud-native applications and architectures, however, the risk of becoming too dependent on a particular cloud provider or service can be especially great, because it is so easy to deploy workloads in a way that ties them to a particular service from a particular cloud.

Fortunately, mitigating cloud lock-in risk is easy enough as long as you plan ahead. Sticking to community-based standards (like those promoted by the OCCI) will do much to ensure that you can move your workloads easily from one cloud to another. Likewise, as you plan which cloud services you will use to go cloud-native, check whether any of them offer features that are truly unique and not available from other clouds. If they do, avoid those features, because they can lock you in.

For example, the specific languages and frameworks supported by the serverless computing platforms of various public clouds vary somewhat. AWS Lambda supports Go, for example, but Azure does not. For that reason, you’d be wise to avoid writing your serverless functions in Go. Even if you plan to use AWS to host them initially, this dependency would make it difficult to migrate to a different cloud in the future. Stick with a language like Java, which you can safely bet will be supported everywhere.
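A complementary safeguard, whichever language you choose, is to keep your business logic provider-agnostic and confine cloud-specific code to a thin adapter. The sketch below assumes an AWS Lambda entry point; only the small wrapper at the bottom would need rewriting to move the function to another platform. The function and field names are illustrative.

```python
# Sketch: keep business logic free of provider-specific types so only the
# thin adapter below changes if the function moves to another cloud.
import json


def process_order(order: dict) -> dict:
    """Provider-agnostic core logic: no cloud SDKs, no provider event formats."""
    total = sum(item["price"] * item["quantity"] for item in order["items"])
    return {"order_id": order["order_id"], "total": total}


def lambda_handler(event, context):
    """AWS Lambda adapter: translate the provider's event into plain data."""
    order = json.loads(event["body"])  # API Gateway-style payload (assumption)
    result = process_order(order)
    return {"statusCode": 200, "body": json.dumps(result)}


# An Azure Functions or Google Cloud Functions adapter would be a similarly
# small wrapper around process_order(); the core logic stays untouched.
```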

5. Building cloud-native delivery pipelines

By definition, cloud-native apps run in the cloud. Whether that cloud is public, or a private, on-prem, or hybrid cloud running in your organization's own environments, it implies immutable infrastructure and cloud management processes. Yet many application delivery pipelines still run largely in traditional on-premises environments that have not been made cloud-ready, or that integrate poorly with applications and services running in public clouds or in containers.

This creates a challenge in several respects. One is that deploying code from a local or on-premises environment to the cloud can introduce delays. Another is that developing and testing locally makes it harder to emulate production conditions, which can lead to unexpected application behavior after deployment.

The most effective way to overcome these hurdles is to move your CI/CD pipeline into a cloud environment, not only to benefit from immutable infrastructure, scalability, and the cloud's other advantages, but also to mimic production conditions and bring your pipeline as close as possible to your apps. That way, code is written closer to where it is deployed, making deployment faster. It also becomes easier to spin up test environments that are identical to production ones.
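As one small, hedged illustration of running the pipeline where the app runs, the snippet below is the kind of smoke test a cloud-hosted pipeline might run against an ephemeral, production-like environment. The environment variable name and endpoint paths are assumptions supplied by the pipeline, not part of any particular CI product.

```python
# Sketch: a pipeline-stage smoke test against an ephemeral, production-like
# environment. TEST_ENV_URL is a hypothetical variable the CI system would
# set after spinning up the environment in the cloud.
import os
import sys

import requests

BASE_URL = os.environ["TEST_ENV_URL"]  # e.g. a per-branch test environment (assumption)


def smoke_test() -> bool:
    checks = {
        "health": f"{BASE_URL}/healthz",
        "readiness": f"{BASE_URL}/readyz",
    }
    ok = True
    for name, url in checks.items():
        resp = requests.get(url, timeout=5)
        print(f"{name}: {resp.status_code}")
        ok = ok and resp.status_code == 200
    return ok


if __name__ == "__main__":
    sys.exit(0 if smoke_test() else 1)
```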

While fully cloud-based development is not for everyone, and some developers prefer the familiarity and responsiveness of local IDEs over cloud-based ones, try to ensure that your CI/CD pipelines run in a cloud environment to the extent possible.

Conclusion

No matter how you spin it, going cloud-native is difficult. Compared to legacy applications, cloud-native applications are more complex and have many more places where things can go wrong. That said, cloud-native computing challenges can be overcome — and implementing strategies that can address the challenges is key to unlocking the agility, reliability, and scalability that only cloud-native architectures can deliver.
