Challenges and Tips for Taking Legacy Systems to the Cloud
Recently, I had the opportunity to chat with Alan Shimel from DevOps.com about some of the challenges of—and tips for—cloud-ifying legacy applications and on-prem infrastructure.
For many modern companies, growth often means growing through acquisition. As organizations become larger, it frequently makes more sense to acquire existing companies that are already creating the solutions they need than to try to build them from scratch. However, this can create a variety of conflicts and challenges—one of which is around infrastructure. For other companies that have grown more organically, it can be difficult to modernize their legacy systems and bring their technology stack into the 21st century.
But cloud comes with big benefits: Some studies estimate a cloud-based infrastructure can reduce IT overhead costs by as much as 40 percent while also giving IT leaders a flexible way to scale up and down as needed. It also lends itself well to automation, helping free valuable development time to add meaningful value to applications. And the cloud isn't just for tech startups. For example, a recent McKinsey & Company study noted that financial services companies have the highest percentage of server images deployed in private or public clouds, at nearly 100 percent (versus a median adoption rate of 19 percent). Cloud is certainly a hot topic, and for good reason.
In an ideal world, the IT landscape would be a greenfield, where every organization could start with a blank slate and build out its optimum infrastructure. But for most businesses, it is more of a brownfield—muddied with old systems and stodgy practices—and the best way to build on top of it is not always clear. As more and more organizations look to a hybrid cloud approach as the infrastructure model of the future, an easier way to migrate legacy systems is becoming a necessity.
One way to approach this is through the concept of normalization. This means building on industry-leading open source technologies to create a hybrid cloud solution that is consistent and works out of the box: Kubernetes for containers, Fission for serverless functions and OpenStack for VMs, for example. As anyone in IT knows, the goal of normalization is nothing new, but achieving it can go a long way toward helping your organization move into the cloud.
The reason for leveraging open source technology is to create a normalized structure that works across different cloud providers. Each of the big three cloud providers has its own special services: Google has TPUs, Amazon has Lambda and Azure has Dynamics CRM. As you move data centers into the public cloud, you still end up with islands of infrastructure that are all built and run differently. How can DevOps teams build and operate software consistently in this sort of atmosphere? Open source technologies are a very powerful way to normalize it, because they plug into any infrastructure you want.
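To make the idea of normalization concrete, here is a minimal sketch in Python. The class and method names (`Workload`, `KubernetesBackend`, `OpenStackBackend`, `roll_out`) are entirely hypothetical—this is not any real provider SDK—but it illustrates the pattern: deployment tooling codes against one normalized interface, while provider-specific details stay behind interchangeable backends.

```python
from abc import ABC, abstractmethod


class Workload(ABC):
    """Normalized interface that deployment tooling codes against."""

    @abstractmethod
    def deploy(self, name: str, image: str) -> str:
        """Deploy a workload and return an identifier for it."""


class KubernetesBackend(Workload):
    """Runs the workload as containers on any conformant Kubernetes cluster."""

    def deploy(self, name: str, image: str) -> str:
        # A real backend would call the Kubernetes API here;
        # this sketch just records the intent.
        return f"k8s://{name}@{image}"


class OpenStackBackend(Workload):
    """Runs the same workload as a VM on OpenStack."""

    def deploy(self, name: str, image: str) -> str:
        return f"openstack://{name}@{image}"


def roll_out(backend: Workload, name: str, image: str) -> str:
    """The rollout logic is identical no matter where the workload lands."""
    return backend.deploy(name, image)


print(roll_out(KubernetesBackend(), "billing", "billing:1.4"))
print(roll_out(OpenStackBackend(), "billing", "billing:1.4"))
```

The payoff is that `roll_out` never changes when a workload moves between on-prem and a public cloud—only the backend passed to it does, which is exactly the island-bridging that normalization aims for.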
Another key to bringing your legacy systems into the cloud is to embrace change as it comes. While this is obviously a less technical recommendation than the above, it is still an important component of success. It is no surprise that this approach is a main tenet of the DevOps community, dating back to the philosophy's beginnings. It really started with the Puppets and Chefs of the world in the late 2000s and continued with the rise of the OpenStack community. Finally, containerization took shape, from Vagrant to today's Kubernetes. Along this route, the IT industry has become much more sophisticated, and there are more opportunities than ever to achieve the normalization necessary to ease the transition to cloud—but only if you're flexible and willing to adapt to change.
A related point is to understand that complexity exists and cannot feasibly be eliminated. Rather, you can subdivide it, working to contain and limit it. The strategy of normalization allows you to do this, keeping the complex parts of your infrastructure as isolated as possible.
While there are many approaches to moving legacy systems into the cloud—from rehosting (a lift-and-shift approach) to refactoring (completely re-imagining how the application is architected and developed)—turning your current infrastructure into a cloud by leveraging existing, powerful open source tools is one of the most effective ways for many organizations to extract value. This hybrid cloud approach not only lets organizations leverage cutting-edge, provider-agnostic cloud solutions, but also gives companies a real way to keep using legacy systems that still add meaningful value to their applications. This flexible, bespoke approach to hybrid cloud enables an entirely new way of thinking about software infrastructure and opens many new doors to sustained success.
Here’s a full discussion on this topic with Alan Shimel, editor in chief of DevOps.com, posted on DevOpsTV:
This article originally appeared on DevOps.com