Every good thing in technology seems to carry a compromising problem or risk. In software design and deployment, that risk is often related to security and governance. The benefits of microservices depend on open connectivity to support access and composition, but such a model can defeat any notion of security or governance, since explicit access control for microservices is complicated. Can proper networking practices help secure microservices and still sustain their flexibility?
Why microservices security is a challenge
Traditional security and governance are based on proven identity: applications have access requirements that users must meet based on who they are. The challenge with microservices is that a service composed into many applications might have many different access requirements. It might even serve as an accidental bridge between applications whose requirements differ.
Suppose a worker wants to access the company payroll records. If she works in that department and has proper credentials, access control tools would grant her entry based on, for example, a user ID and password. But if the payroll application is divided into a dozen microservices, a user authorized for one payroll microservice could potentially pivot from it to others and reach confidential information those services expose. Each microservice could be protected by its own access control, but requiring users to re-enter IDs and passwords for every microservice is impractical. This is where the network can help secure microservices without breaking the chain of free composition.
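One common way to avoid repeated password prompts is to authenticate once and pass a signed token between services. The sketch below is a minimal, hypothetical illustration using Python's standard library; the shared key and function names are assumptions, not part of any specific product.

```python
import hashlib
import hmac

# Hypothetical key shared by the payroll microservices.
SECRET = b"shared-signing-key"

def issue_token(user_id: str) -> str:
    """Sign the user ID once at login; services verify instead of re-prompting."""
    sig = hmac.new(SECRET, user_id.encode(), hashlib.sha256).hexdigest()
    return f"{user_id}:{sig}"

def verify_token(token: str) -> bool:
    """Any payroll microservice can check the token without a password prompt."""
    user_id, _, sig = token.partition(":")
    expected = hmac.new(SECRET, user_id.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)
```

A user who logs in once receives a token; each microservice validates the signature rather than re-collecting credentials.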
The explicit model of networking
It's normal practice to deploy applications within a private subnetwork, exposing only the external-facing components or APIs. This is the explicit model of container networking, and it has become even more prevalent as container use has accelerated. The model is helpful from a microservices security perspective because it hides interfaces that should never be used outside the application; further protection may even be unnecessary.
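A Docker Compose file is one common way to express this explicit model. The following is an illustrative sketch, with assumed image and network names: only the gateway publishes a host port, while internal services sit on a private network Docker will not route externally.

```yaml
# Hypothetical Compose file: only the gateway is published to the host;
# internal services are reachable only on the private network.
services:
  api-gateway:
    image: example/gateway          # assumed image name
    ports:
      - "443:8443"                  # the only externally exposed interface
    networks: [edge, internal]      # bridges the outside world and the app
  inventory-svc:
    image: example/inventory        # assumed image name
    networks: [internal]            # no published ports at all
networks:
  edge: {}
  internal:
    internal: true                  # Docker blocks external routing here
```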
This model breaks down when applications are built from components shared across multiple applications. Isolating a reusable component inside one application's private subnetwork means it can't be reused, so it must be exposed explicitly. The underlying problem is that reusable components aren't really part of any single application; they're part of a pool of features.
It would make sense to deploy all shared microservices independently, assigning them all to a common IP subnetwork. They would then either receive a public IP address so applications can use them or have their private addresses translated, typically into the address space of the company virtual private network (VPN). However, this makes them accessible to everyone unless they're further protected.
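The address-translation step can be sketched with Python's `ipaddress` module. This is a hypothetical one-to-one mapping from a shared-service subnet into VPN address space; the subnet values are illustrative, not prescriptive.

```python
import ipaddress

# Hypothetical one-to-one NAT: map each shared-service address in the
# container subnet onto the same host offset in the company VPN range.
SERVICE_SUBNET = ipaddress.ip_network("10.10.0.0/24")   # shared microservices
VPN_SUBNET = ipaddress.ip_network("172.16.5.0/24")      # company VPN space

def translate(addr: str) -> str:
    """Return the VPN-visible address for a shared-service address."""
    host = ipaddress.ip_address(addr)
    if host not in SERVICE_SUBNET:
        raise ValueError(f"{addr} is not a shared-service address")
    offset = int(host) - int(SERVICE_SUBNET.network_address)
    return str(ipaddress.ip_address(int(VPN_SUBNET.network_address) + offset))
```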
API broker security vs. network-based security
One proven way to ensure microservices are addressable and secure at the same time is to rely on an API manager or broker to provide security. There are a number of strategies for securing microservices through an API broker; however, they generate overhead each time a call to the microservices is brokered. They can also increase the complexity of composing microservices into applications. As a result, teams should test API-based security and assess the consequences before they make a commitment. It's also possible to use the load-balancing and discovery tools associated with microservices access to add security, as the mechanisms and issues those tools address are roughly the same.
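To make the per-call overhead concrete, here is a minimal in-process sketch of a broker check, with assumed key and service names. Every request pays a validation step before it reaches the microservice, which is exactly the brokering cost described above.

```python
# Hypothetical mapping of API keys to the services each may call.
VALID_KEYS = {"app-payroll": {"tax_calc", "net_pay"}}

def broker_call(api_key: str, service: str, payload: dict, registry: dict):
    """Validate the caller, then forward the request; raise if unauthorized."""
    allowed = VALID_KEYS.get(api_key, set())
    if service not in allowed:
        raise PermissionError(f"{api_key!r} may not call {service!r}")
    return registry[service](payload)  # forward only after the check passes

# Usage: register a trivial service and call it through the broker.
registry = {"net_pay": lambda p: p["gross"] - p["deductions"]}
```

Each brokered call adds the lookup and check; multiplied across every microservice interaction, that is the overhead worth measuring before committing.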
A second approach is to use network-based security. The applications authorized to consume microservices almost certainly run in specific IP subnetworks, so the addresses making microservice requests can be controlled and known. If traffic routing rules in the company VPN limit access to microservices to that specific range of authorized application addresses, other traffic cannot reach the microservices.
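The routing rule amounts to a source-address allowlist. A minimal sketch, again with illustrative subnet values:

```python
import ipaddress

# Only requests originating in the authorized application subnets may
# reach the microservices; the subnet values are illustrative.
AUTHORIZED_SUBNETS = [
    ipaddress.ip_network("10.20.1.0/24"),  # payroll application servers
    ipaddress.ip_network("10.20.2.0/24"),  # HR application servers
]

def may_reach_microservices(source_addr: str) -> bool:
    """True if the source address lies inside an authorized subnet."""
    src = ipaddress.ip_address(source_addr)
    return any(src in net for net in AUTHORIZED_SUBNETS)
```

In practice this logic lives in VPN routing rules or firewall ACLs rather than application code; the sketch just shows the check those rules perform.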
Either of these security strategies could limit your ability to dynamically compose microservices into applications, since you'd have to open your security gateway before the services can be connected. Security gateway processes can also make it harder to move changes through application lifecycle management processes, because, whatever you decide to use in production, you'll have to use in testing. Otherwise, your tests won't be reliable gauges of production behavior.
Microservices security and load balancing
You'll also need to examine the impact of microservices security on scaling and redeployment in case of failure. When a microservice scales, the additional instances have to be load balanced, which means your load balancer has to be accessible to client components and the microservice instances have to be registered with the load balancer. This can actually make things easier if the client components always link to a load-balancing function, since the load balancer is effectively a virtual microservice with a constant address. Load balancing also ensures work is divided up enough so that it's manageable and so that service outages are limited.
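The registration-plus-constant-address pattern above can be sketched in a few lines. This is a deliberately simplified round-robin balancer; real tools add health checks and dynamic discovery.

```python
class LoadBalancer:
    """Round-robin over registered instances; clients see one constant
    address (the balancer), so scaling stays invisible to them."""

    def __init__(self):
        self.instances = []
        self._next = 0

    def register(self, addr: str):
        """New instances of a scaled microservice register themselves here."""
        self.instances.append(addr)

    def next_instance(self) -> str:
        """Hand out instance addresses in rotation."""
        if not self.instances:
            raise RuntimeError("no instances registered")
        addr = self.instances[self._next % len(self.instances)]
        self._next += 1
        return addr
```

Because clients always call the balancer, adding a third instance requires only one `register` call and no client changes.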
API brokerage and load balancing can be combined. If the broker supplies a security token that the calling application presents directly, the preferred approach is to place all the API brokers and microservices in a common subnet. If the broker instead makes the call to the microservice itself, place the microservices in their own subnetwork and allow only traffic from the API broker to reach them.
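The broker-calls-the-service variant reduces to a simple admission rule on the microservice subnet: accept traffic only if it comes from the broker. A sketch with illustrative addresses:

```python
import ipaddress

# Only the broker may reach the microservice subnet; all other
# destinations are unaffected. Addresses are illustrative.
BROKER_ADDR = ipaddress.ip_address("10.30.0.10")
MICROSERVICE_SUBNET = ipaddress.ip_network("10.40.0.0/24")

def admit(source: str, destination: str) -> bool:
    """Firewall-style check: broker-only access to the microservice subnet."""
    dst = ipaddress.ip_address(destination)
    if dst in MICROSERVICE_SUBNET:
        return ipaddress.ip_address(source) == BROKER_ADDR
    return True  # traffic to other destinations is unaffected
```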
Security depends on networking
It's not likely that network management practices alone can balance microservices' benefits and risks. But that doesn't mean networking can't help; microservices security should always include a network-based component. If you hope to gain the full benefits of microservices, it's critical to accept that fact and optimize the layers of microservices security that sit outside the network.