Microservices and containers present a new deployment model in 2017

While 2016 proved that microservices are a great fit for the cloud, the deployment part is tricky. Here's how combining microservices and containers will help DevOps teams in 2017.

Applications in the cloud break all the traditional rules: they can be moved to work around failures, scale as workloads change, and have their components mixed and matched to speed development and improve deployment efficiency. None of these benefits are automatic, though. We learned in 2016 that the same old application architectures, run in the cloud, end up running the same old way. To maximize cloud benefits, you need to change your application model and optimize the cloud deployment to suit those changes. Microservices combined with container deployment promise just that for 2017.

An alternative to SOA

For almost two decades now, it's been standard development practice to divide applications into components. SOA advanced that trend by using remote procedure call technology to distribute application components across the network while retaining control over component access. Many believe that SOA has become bloated, though, and while SOA applications can be mixed and matched in the cloud, they are often difficult to move and restart under failure conditions and usually don't scale well with workload changes. The limitations are particularly frustrating given that web applications already have nearly all the properties the cloud rewards.

Microservices seem to offer a solution. A microservice looks much like a web API in that it can be accessed through simple HTTP interfaces, typically exchanging JSON. Best practices for microservice design discourage stateful logic -- code that can't be moved or scaled because it stores data inside the application between requests. Interest in microservices exploded from the beginning, and user trials followed. Not all of those trials were successful, however.
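
To make "simple HTTP interface, no stored state" concrete, here is a minimal sketch of a hypothetical order-lookup microservice in Go. The service name, route and fields are invented for illustration; a real service would keep its data in an external store rather than inside the process.

```go
// Hypothetical order-lookup microservice: one HTTP endpoint, JSON out,
// and no state held in the process between requests.
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

type Order struct {
	ID     string  `json:"id"`
	Status string  `json:"status"`
	Total  float64 `json:"total"`
}

func orderHandler(w http.ResponseWriter, r *http.Request) {
	id := r.URL.Query().Get("id")
	if id == "" {
		http.Error(w, `{"error":"missing id"}`, http.StatusBadRequest)
		return
	}
	// In a real service this lookup would hit a database or another service.
	// The point is that nothing is cached in the process between calls, so
	// any replica can answer and the service can be moved or scaled freely.
	order := Order{ID: id, Status: "shipped", Total: 42.50}
	w.Header().Set("Content-Type", "application/json")
	json.NewEncoder(w).Encode(order)
}

func main() {
	http.HandleFunc("/orders", orderHandler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```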

The problems with microservices

One problem with microservices in traditional cloud deployments is the latency of accessing them. Every microservice call is a request-response exchange, and if a microservice is accessed frequently in the course of doing work, the delays accumulate and can seriously impact user response time and productivity. The problem is even greater if the microservices are brokered through an API management tool, since the tool introduces an extra network hop between the microservice user and the microservice.
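
To put rough, purely illustrative numbers on this: if a user request fans out into 10 microservice calls and each call adds around 20 milliseconds of network round trip, the request accumulates roughly 200 milliseconds of overhead before any real work is done; route every one of those calls through an API manager and the figure roughly doubles. The exact numbers will vary widely by environment, but the compounding effect is the point.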

Another problem that can hurt microservice adoption is resource waste. Microservices are typically small -- far smaller than traditional application components. When they're deployed on virtual machines (VMs), the operating system and middleware needed to run them can make up over 90% of the machine image, even though those images end up smaller than usual. Most companies size their VMs to support typical machine images, so a VM that hosts a single microservice leaves much of its allocated capacity idle.
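
As a rough, illustrative sizing: a guest operating system plus middleware stack can easily occupy a couple of gigabytes of a VM image, while the microservice it hosts may amount to only a few tens of megabytes -- consistent with the 90% overhead figure above. A fleet built one-microservice-per-VM therefore spends most of its memory and storage on duplicated platform software rather than on the services themselves.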

Combining microservices and containers

The union of microservices and containers can break down nearly all the barriers to optimized cloud use. Microservices embody the scalability, reusability and resiliency features encouraged by the cloud, and containers can solve most of the resource efficiency problems. The value of deploying microservices in the cloud was recognized in 2016, but the value of joining microservices and containers is going to become clear for the first time in 2017.

Containers differ from VMs in that the applications running in containers share an operating system and much of the middleware. Eliminating the duplication of these big software elements allows many more microservices to run on a single server -- five to 10 times as many in routine deployments, and some users report running 30 times as many microservices in containers as they could on VMs.
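
As a sketch of why the footprint shrinks, here is what a container image for the hypothetical Go order service above might look like: a statically linked binary dropped onto a minimal base image, with no guest OS or middleware stack of its own. The image and file names are assumptions for the example; the resulting image is typically tens of megabytes rather than the gigabytes a comparable VM image would occupy.

```dockerfile
# Hypothetical image for the order-lookup service sketched earlier.
# The binary is built outside the image, for example:
#   CGO_ENABLED=0 GOOS=linux go build -o orders .
FROM alpine:3.5
COPY orders /usr/local/bin/orders
EXPOSE 8080
ENTRYPOINT ["/usr/local/bin/orders"]
```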

Microservices also deploy faster in containers than on VMs. That is especially useful when scaling a service horizontally under load or when a microservice must be redeployed after a network or server failure. In fact, microservices can even be deployed on demand without an unacceptable performance impact.
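
As a hedged illustration using Docker's swarm-mode CLI (available since Docker 1.12), scaling a containerized microservice is a one-line operation that starts new containers in seconds rather than provisioning new VMs; the service and image names here are hypothetical.

```sh
# Deploy the service with two replicas, scale it out under load, then back in.
docker service create --name orders --replicas 2 -p 8080:8080 example/orders:1.0
docker service scale orders=10
docker service scale orders=2
```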

The power of the swarm

Resource and deployment efficiency aren't the only benefits containers bring to microservices, and they're probably not the ones that will drive adoption in 2017. Container clouds are built from individual hosts, combined into clusters and networked on a larger scale. Docker, for example, defaults to placing cooperative software elements like microservices on the same host as the components that use them, and extends that model across clusters of hosts called swarms -- coordinated groups of hosts that act as a single deployment platform. Docker and other container tools tend to do what microservices need -- colocate them with the rest of the application -- by default. The same kind of optimization can be done with VMs, but it requires more explicit policy management when hosting locations are selected.
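
A minimal sketch of that model with the Docker swarm-mode CLI: individual hosts join a swarm, and cooperating services are attached to a shared overlay network so they can reach each other by service name wherever the scheduler places them. The network name, token placeholder and image names are assumptions for the example, and the orders service from the earlier sketch is simply recreated here alongside a hypothetical checkout service.

```sh
# Form a swarm from individual hosts.
docker swarm init                                            # on the first (manager) host
docker swarm join --token <worker-token> <manager-ip>:2377   # on each worker host

# Give cooperating services a common overlay network.
docker network create --driver overlay shop-net
docker service create --name orders   --network shop-net example/orders:1.0
docker service create --name checkout --network shop-net example/checkout:1.0
```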

Because containers naturally group application components into defined clusters whose network latency can be controlled, users are encouraged to think through hosting policies for their microservices and to define the scope over which those services can deploy, scale and redeploy. Since API managers are also container-hosted elements, they can be deployed close to the microservices or to the applications that use them; either way, this limits the additional network delay that would otherwise accumulate.
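
One hedged example of such a policy, again with swarm-mode primitives: label the nodes that should host a given application tier, then constrain the API gateway that fronts a microservice to that same group (the microservice itself would carry the same constraint), so the extra gateway hop stays between nearby hosts. Node names, labels and images are invented for the illustration.

```sh
# Mark the nodes that host the orders tier.
docker node update --label-add tier=orders node-1
docker node update --label-add tier=orders node-2

# Pin the API gateway to the same group of nodes as the orders microservice,
# so the extra management hop stays on nearby hosts.
docker service create --name gateway --network shop-net -p 80:8080 \
  --constraint 'node.labels.tier == orders' example/gateway:1.0
```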

Thinking beyond microservices

The final benefit of combining microservices and containers is the impact on overall cloud-hosting policies, planning and tools. We're certain to see a growing set of tools designed to identify the best way to cluster containers for microservices. These tools won't limit themselves to microservice deployment. Instead, they'll work to optimize component placement in general, ending current practices where application components tend to be thrown at available resources, regardless of whether the location of those resources raises failure risks or introduces unacceptable levels of delay.

Microservices and containers are both relatively new concepts, which means they can develop symbiotically, with the needs of one shaping the evolution of the other. That relationship will advance both microservice and container adoption significantly, in 2017 and beyond.

Next Steps

Learn how the SOA model was disrupted by REST in 2016

How to ensure you don't suffer from microservices redundancy

Why working with microservices was a challenge in 2016

Advice for tackling microservices performance management problems

This was last published in January 2017
