
Understand how to migrate a VM application to container deployment

As container deployment has become more efficient, users are looking to move their VM applications to containers. Expert Tom Nolle discusses how to make the transition.

Virtualization, to most technologists, started with hypervisors and the virtual machine -- a strategy for dividing a server into multiple independent hardware-like partitions. In fact, an earlier model of "soft partitions" called containers preceded VMs, and with new tools making container deployment efficient, many VM application users are looking to switch. To make the change to containers successfully, you should recognize that, as a VM user, you are probably an early adopter; understand your own application and how VM-versus-container hosting affects what you run; classify what you'll be migrating to guide early decisions; think carefully about decomposing VM applications; and address the possible "gotcha" issues head-on.

The companies most interested in migrating from VMs to containers are early adopters of VMs, and that matters because most early VM applications targeted server consolidation. In that mission, VMs were important because the original server deployments lacked middleware or even operating system version control, and they were almost surely monolithic applications with minimal integration with other applications or components in the IT ecosystem. Where early VM missions didn't focus on server consolidation, they targeted Java (particularly J2EE) applications.

Consider the move away from a VM application

The obvious question is whether either of these situations fits your own container migration mission. If either does, you'll need to be wary of your application's particular pitfalls as you migrate. If your VM application usage is broader than both of these missions, you may have faced those issues already, and all you'll need to do is stay consistent in your migration strategy.

Containers are a middle ground between multitasking operating system capabilities and virtual machines. They insulate applications more than multitasking does, but they still share software platform resources among the applications. That makes them more memory-efficient: you can pack more containers onto a server than you'd typically consider with VMs. It's very common for early VM-to-container migration plans to treat this "fit more in" capability as the mission, but it is not, and should not be. Tasks in a multitasking OS today are usually cooperating components, and that's what you should plan to host in containers. The question is how much that kind of future model should define your early migration strategies.

Comparing VMs and containers

If you have a broad mission to use containers, then simply moving monolithic VM-hosted applications to containers is a mistake. Containers are best when supporting linked components, even microservices, and so your first step in a container-driven future is to decompose monolithic applications to support a linked component model. Your second step would be to define your server pool in terms of clusters to make intercomponent connection efficient where it's needed.
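As a sketch of what the linked-component model can look like after decomposition, consider a minimal docker-compose file with two cooperating services. The service and image names here are hypothetical, not from the article:

```yaml
# Hypothetical docker-compose.yml after decomposing a monolith into two
# linked components; only the front end is reachable from outside.
services:
  web:
    image: example/web-frontend:1.4
    ports:
      - "8080:80"     # the only externally published port
    depends_on:
      - orders        # starts after the back-end component is up
  orders:
    image: example/order-service:1.4
    # no published ports: reachable only by other services on the network
```

The `depends_on` link and the absence of published ports on the back end capture the "linked components" relationship that containers support well.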

The review of your target applications should look at how strongly version control is applied to the platform software, the differences in the VM characteristics you've supplied to run the apps, whether the apps are third party or in house, and whether these applications are mainstream or being phased out. Weak version control, third-party software and sidelined applications are poor candidates for transformation to container hosting in the first place. Set them aside, if possible. If not, you'll need to at least harmonize the platform software versions and tools before you migrate, because you'll share them in a container deployment.
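One concrete form that "harmonizing platform software versions" can take is pinning every containerized app to an exact image tag, so the shared platform is explicit and auditable. The registry, image names and tags below are illustrative:

```yaml
# Hypothetical compose fragment: each image is pinned to an exact version
# tag rather than ":latest", so every app's platform dependency is visible
# and can be aligned before migration.
services:
  billing:
    image: registry.example.com/billing:2.3.1
  reporting:
    image: registry.example.com/reporting:1.8.0
```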

Taking components seriously

The next step is the critical decomposition step. Your goal is to get monolithic applications subdivided so the components can be containerized separately. Some advocate a complete decomposition, but even if that's possible, it makes little sense to take decomposition further than its specific benefits justify. Look for applications that share components, or components of applications that can be independently scaled to improve performance, and target those for decomposition into meaningful and useful components.

It's important to take those qualifications seriously; too much componentization will threaten performance by adding network latency along too many intercomponent paths. It may also hamper your ability to frame efficient clusters for hosting your applications. The goal in container systems is to keep applications, or groups of highly interactive applications, in the same cluster to improve deployment and execution efficiency.

Clusters are also the starting points for managing those nagging issues that always seem to arise. The biggest difference between VMs and containers is in the level of application isolation they provide. Everything that runs on a given container-hosting server has to be harmonized in terms of platform version and features, which is fairly easy if all your applications are clustered. If all applications in a given cluster use the same platform, which they should, then you can deploy the cluster wherever the platform software is compatible.
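If an orchestrator such as Kubernetes manages the clusters, the rule "deploy the cluster wherever the platform software is compatible" can be sketched with a node selector. The label key and names below are hypothetical conventions, not a required scheme:

```yaml
# Kubernetes sketch: a nodeSelector restricts this pod to nodes carrying a
# label that marks them as running the platform version the cluster shares.
apiVersion: v1
kind: Pod
metadata:
  name: order-service
spec:
  nodeSelector:
    platform-version: "java17"   # hypothetical label applied to compatible nodes
  containers:
    - name: order-service
      image: example/order-service:1.4
```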

Security and compliance, another aspect where VMs and containers differ, are also easier to provide with cluster-based planning. All your cluster-hosting points can be networked in a secure and efficient way, leaving your container deployment free to focus on connectivity within the cluster itself.

Making networking easier matters with containers because container networking isn't as flexible as a VM's. You should presume that all of a cluster's applications coexist in a single IP subnet, and that the subnet has a gateway connecting to your VPN and/or the internet. You must explicitly expose any ports you expect to connect to from the outside, and you should manage the exposed IP addresses through your normal VPN domain name system (DNS) process.
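That single-subnet, explicit-exposure model can be sketched in a compose file: all services share one user-defined network, internal names resolve via the network's built-in DNS, and only the gateway-facing service publishes a port. The names and addresses are examples, not recommendations:

```yaml
# Illustrative docker-compose file for the networking model above: one
# subnet per cluster, only chosen ports exposed at the edge.
services:
  web:
    image: example/web-frontend:1.4
    networks: [cluster-net]
    ports:
      - "443:8443"           # explicitly published to the outside
  orders:
    image: example/order-service:1.4
    networks: [cluster-net]  # reachable only inside the subnet, as "orders"
networks:
  cluster-net:
    ipam:
      config:
        - subnet: 10.10.0.0/24
```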

While some of the work associated with a VM-application-to-container migration results from the differences between the two, some results from ignoring important issues in the early VM deployments. Containers demand more explicit network design, but that should have been done for VMs as well, and the same can be said for security and compliance. The big question in justifying a shift is the extent to which monolithic application models can be broken down for container deployment, and that, too, is a trend in VMs. Thus, even those who decide not to take the container step can learn valuable lessons from considering it.

Next Steps

Should you hold off on your container deployment?

Considering the best containers for your business

The pitfalls of microservices and container adoption

This was last published in May 2017



Join the conversation

2 comments

How has your enterprise moved from VM applications to containers?
Two main reasons we moved from VMs to containers:
(1) Our customers wanted an easy way to install and implement our technology on premises, and we didn't want to have to package it for various operating systems. A Docker image and a short run command are all that's needed to install and run our technology now.
(2) We develop and deploy rapidly. There is no easy way to do this with VMs unless you have time to write thousands of lines of Ansible scripts.

How did we do it? We started with something small: a lightweight web app to prove out the project. Once the pipeline was built around this app and we could prove it worked seamlessly (with no impact on engineering), we worked to break our larger monoliths down into small pieces. We made some mistakes along the way, like forklifting the full monolith into a container, but learned it was better to pull out small features into microservices instead.
