
Rethinking modular programming interfaces for microservices

As computing methods change, IT needs to rethink modular programming. Tom Nolle examines Google's gRPC to determine if it can help.

Traditionally, modular programming has been visualized as creating "procedures" as the functional elements of applications, and so the procedure call became the mechanism for linking them into a monolithic structure. When it became necessary to separate components over the network, the logical step was the remote procedure call (RPC). It's reasonable that for APIs related to microservices, we should look to another evolution: Google's gRPC.

RPC-based APIs are somewhat different from those found in the front-end processes of Web-based transactions. Those APIs, which can be simple REST/HTTP or JSON interfaces, typically pass limited information elements in display form. Even applications such as M2M or phone and tablet credit card processing benefit from binary data exchanges, and binary data is the standard for server-to-server connections, including the connection between a microservice and its "store front." RPC APIs often pass binary data and complex structures, and a number of industry players began work on an HTTP-compliant RPC model. However, Google's gRPC seems to be the emerging standard.

A remote procedure call is almost always a kind of micro-transaction: a procedure is called with a specific set of arguments and it returns specific results. Google's gRPC uses what some developers will find a familiar concept: protocol buffers. This term describes a way of communicating both data structure and contents in a shorthand way across a network connection. The term "serialize" is used often; all it means is that the protocol buffer takes binary data as a structure and turns it into a serial bit stream that can be reformed into a structure at the other end. XML offers some of this same capability, but protocol buffers can be up to 100 times faster than XML, and implementations of the encoding and decoding of streams are often a tenth the size.
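To make the serialization idea concrete, here is a minimal Python sketch. The order.proto message, the generated order_pb2 module and its fields are hypothetical stand-ins for whatever messages an application actually defines; SerializeToString and ParseFromString are the standard protocol buffer encode and decode calls.

```python
# A minimal sketch of protocol buffer serialization, assuming a hypothetical
# order.proto compiled with protoc into an order_pb2 module, e.g.:
#
#   message Order {
#     int64  order_id = 1;
#     string sku      = 2;
#     int32  quantity = 3;
#   }

import order_pb2  # hypothetical generated module

# Populate a structured, binary-friendly message.
order = order_pb2.Order(order_id=1001, sku="A-42", quantity=3)

# "Serialize": flatten the structure into a compact byte stream for the wire.
wire_bytes = order.SerializeToString()

# On the receiving end, the same structure is rebuilt from the byte stream.
received = order_pb2.Order()
received.ParseFromString(wire_bytes)
assert received.sku == "A-42"
```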

The key thing about gRPC for developers is that it lets them write an application or component as though all the pieces of code were in a single place -- a monolithic build. As needed, developers can spin pieces of functionality out of the main component, leaving a gRPC stub behind to represent the now-remote piece. A network connection and protocol buffers will then transport a request for that function over the network to the function, wherever it might be, and return its response. The rest of the application still sees its familiar local component -- now the gRPC stub.
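A rough sketch of what that looks like on the client side in Python, assuming a hypothetical Inventory service (an inventory.proto compiled into inventory_pb2 and inventory_pb2_grpc) and a hypothetical host name:

```python
# A sketch of the "stub left behind" idea, assuming a hypothetical inventory.proto
# with: service Inventory { rpc CheckStock(StockRequest) returns (StockReply); }

import grpc
import inventory_pb2, inventory_pb2_grpc  # hypothetical generated modules

# The channel is the network connection to wherever the function now lives.
channel = grpc.insecure_channel("inventory.internal:50051")

# The stub is the "local component" the rest of the application still sees.
stub = inventory_pb2_grpc.InventoryStub(channel)

# To the caller this looks like an ordinary procedure call; gRPC and protocol
# buffers carry the request and response over the network behind the scenes.
reply = stub.CheckStock(inventory_pb2.StockRequest(sku="A-42"))
print(reply.quantity_on_hand)
```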

For applications built to be network-connected, gRPC still offers a benefit. Its mechanism is language-independent and there's a gRPC stub and server logic library available for all the popular programming languages. A single application can be built from a combination of languages, with gRPC acting as the functional glue that melds the pieces into a cohesive application.
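As an illustration of that glue, here is a sketch of the server side of the same hypothetical Inventory service in Python; because the .proto contract is language-neutral, the clients calling it could just as easily be written in Java, Go, C++ or any other supported language.

```python
# A sketch of the server side of the hypothetical Inventory service.

from concurrent import futures
import grpc
import inventory_pb2, inventory_pb2_grpc  # hypothetical generated modules

class InventoryService(inventory_pb2_grpc.InventoryServicer):
    def CheckStock(self, request, context):
        # Real logic would query a data store; a fixed reply keeps the sketch short.
        return inventory_pb2.StockReply(sku=request.sku, quantity_on_hand=7)

server = grpc.server(futures.ThreadPoolExecutor(max_workers=10))
inventory_pb2_grpc.add_InventoryServicer_to_server(InventoryService(), server)
server.add_insecure_port("[::]:50051")
server.start()
server.wait_for_termination()
```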

At a basic level, adopting gRPC is easy: components incorporate the gRPC library elements for the server or client side, and APIs are structured in inquiry-response mode. This process is a bit more like traditional modular programming than it is like Web-oriented get/post functions, but it's also more flexible, and it fits well into the model of microservices, where the "client" for gRPC is the main store-front component and the "servers" are the microservices. The former gets the gRPC stub and the latter gets the remote implementation.
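One way to picture that pairing, again using hypothetical services, hosts and generated modules, is a store-front component that holds a client stub for each microservice it fronts:

```python
# A sketch of the store-front pattern: the front-end component is the gRPC
# client, and each microservice sits behind its own gRPC server.

import grpc
import inventory_pb2, inventory_pb2_grpc  # hypothetical generated modules
import pricing_pb2, pricing_pb2_grpc      # hypothetical generated modules

class StoreFront:
    """Front-end component acting as the gRPC client for each microservice."""

    def __init__(self):
        # One channel and stub per back-end microservice; hosts are hypothetical.
        self.inventory = inventory_pb2_grpc.InventoryStub(
            grpc.insecure_channel("inventory.internal:50051"))
        self.pricing = pricing_pb2_grpc.PricingStub(
            grpc.insecure_channel("pricing.internal:50051"))

    def quote(self, sku):
        # Inquiry-response calls out to the microservices; results combined here.
        stock = self.inventory.CheckStock(inventory_pb2.StockRequest(sku=sku))
        price = self.pricing.GetPrice(pricing_pb2.PriceRequest(sku=sku))
        return stock.quantity_on_hand, price.unit_price
```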

The experiences of Google and others with microservices -- experiences that led to gRPC and the standards initiatives now growing out of it -- demonstrate that microservices will benefit from being designed as homogeneous applications made up of modular procedures. This is a departure from normal multi-component, network-coupled design practices, which require that the componentization of logic be considered first and that APIs be designed for the specific module pairings involved in a workflow. Applying this to microservices can be difficult, because it may be hard to visualize the optimum microservice structure.

With gRPC, developers can spin microservices out of an application or component if they determine the spin-out will improve availability or performance, or assign a module to a different programming team using a different language if that will speed implementation. Preserving this capability is important for microservice transitioning, and a few steps will ensure that the transition can be made when ready.

First, remember that gRPC is an evolution of RPC, which means that it can be applied where a function was expected to be local. If a specific structure of remotely linked components is designed into an application, it may limit flexibility in microservice use. Consider designing simple applications as though they were monolithic collections of local procedures, and where complexity makes that impossible, divide the applications based on workflow (front end, edit/process, update) for easy translation into microservices.
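One way to keep that translation easy is to hide the local-versus-remote decision behind an ordinary function, so callers never change when a workflow stage is spun out. The function names, modules and flag below are hypothetical.

```python
# A sketch of keeping the spin-out decision reversible: validate_order() is a
# hypothetical edit/process-stage function that starts local and can later be
# backed by a gRPC stub without touching any of its callers.

USE_REMOTE_EDIT_STAGE = False  # flipped when the edit/process stage is spun out

def _validate_order_local(order):
    # Original in-process logic from the monolithic build.
    return order.quantity > 0

def _validate_order_remote(order):
    import grpc
    import orders_pb2_grpc  # hypothetical generated module
    stub = orders_pb2_grpc.OrderEditStub(
        grpc.insecure_channel("orders.internal:50051"))
    return stub.Validate(order).ok

def validate_order(order):
    # Callers never change; only the binding between local and remote does.
    if USE_REMOTE_EDIT_STAGE:
        return _validate_order_remote(order)
    return _validate_order_local(order)
```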

Second, all microservice applications will have a store-front or strip-mall structure, and it is critical that this structure not become too deep, with microservices invoking other microservices through gRPC. This kind of cascading workflow will almost always generate performance problems, and it can also make an application more vulnerable to network faults.
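Per-call deadlines make the cost of depth visible: every additional hop has to fit inside the caller's time budget, and a fault anywhere in the chain surfaces as a failed call at the top. The service, host and numbers in this sketch are hypothetical; the timeout argument is the standard one accepted by generated Python stubs.

```python
# A sketch of why cascading gRPC calls hurt: each hop spends part of the
# caller's deadline, and any downstream fault bubbles up as a failed call.

import grpc
import inventory_pb2, inventory_pb2_grpc  # hypothetical generated modules

stub = inventory_pb2_grpc.InventoryStub(
    grpc.insecure_channel("inventory.internal:50051"))

try:
    # If this microservice itself fans out to others, their combined latency
    # must fit inside this 200 ms deadline, or the whole chain fails.
    reply = stub.CheckStock(inventory_pb2.StockRequest(sku="A-42"), timeout=0.2)
except grpc.RpcError as err:
    # A fault or timeout anywhere down the chain surfaces here.
    print("inventory lookup failed:", err.code())
```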

Third, although gRPC is efficient, it's not zero overhead. Even where communications connections are local and fast, a lot of message serialization can affect application performance. Because a gRPC mechanism makes it so easy to translate local procedures into remote ones, it's also easy to overdo componentizing applications into independent services. Testing and validation of performance levels should precede any massive change in the localization and remote-deployment structure of an application.
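A simple measurement before and after a spin-out is often enough to catch serialization overhead early. This sketch times encode/decode round trips for the hypothetical Order message used earlier; the iteration count is arbitrary.

```python
# A rough check that serialization cost stays within budget before spinning
# more functions out into remote services.

import time
import order_pb2  # hypothetical generated module

msg = order_pb2.Order(order_id=1001, sku="A-42", quantity=3)

start = time.perf_counter()
for _ in range(10_000):
    data = msg.SerializeToString()
    order_pb2.Order.FromString(data)
elapsed = time.perf_counter() - start

print(f"{elapsed / 10_000 * 1e6:.1f} microseconds per encode/decode round trip")
```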

Microservices create server-side or "internal" workflows in applications -- flows that are best served with a different and less Web-linked model of APIs. Google's gRPC example is already generating tools and consensus, and even its practices and directions alone can help developers get the most from their cloud microservices.

Next Steps

Discover four reasons you need microservices architecture

Get initiated with microservices architecture 101

This was last published in September 2015



Join the conversation


Has your company begun using APIs specifically aimed toward microservices? Why or why not?

We have started using APIs that support microservice architecture. The main reason is to allow code reuse and to simplify and speed up application development.

The biggest problems with APIs, aside from the potential performance hit you mention, are a) them getting changed along the way and b) broad industry support.

I think APIs being changed along the way is one of the things that gRPC is trying to address. APIs are generally designed for the specific module pairings involved in a workflow, which is hard to get right when the componentization of logic is considered first, and that is what leads to the APIs changing along the way. gRPC helps by taking that factor out, so the API should be less likely to change.
