BOSTON -- As the pace and methodologies of application and service development change, the middleware that keeps those applications and services integrated is undergoing heavy changes as well -- and so is the way that middleware is managed.
In an interview at the 2017 Red Hat Summit, Rich Sharples, senior director of product management for middleware at Red Hat, explained how organizations can keep the middleware layer from introducing unwanted latency and how that concern feeds into decisions about how fine-grained services should be. He also explained what the changing nature of application middleware means for how management responsibilities should be assigned -- assuming an organization needs to manage it on its own.
Distribution can cause latency
Sharples pointed out that when organizations move to more of a microservice-based architecture, they are making a conscious decision to improve agility by delivering isolated changes and having smaller deployable units.
"That naturally leads to distribution, and that's going to add some additional latency," Sharples said. "So, you've really got to think about the architecture of the application and how fine-grained you make those services."
Sharples recommended that when it comes to deployment, software teams need to think about where services "live." This requires making architecture decisions around leveraging cloud providers that will allow teams to colocate services.
Sharples did admit, however, that moving applications to an architecture aimed at providing more agility may force software teams to accept a trade-off in application performance, one that must be addressed.
Monoliths not necessarily an alternative
Although distribution can add unwanted latency, Sharples pointed out that monolithic architectures can create performance bottlenecks of their own.
"One of the problems with monolithic or large-grain services is that they often become bottlenecks as well," Sharples said. "If I'm running the entirety of my application in a single process, at some point I'm going to hit some CPU contention."
Sharples said that by breaking up certain services, possibly across clusters, organizations can gain much more concurrency among those services. To decide which approach will work best, he said, organizations should start by breaking monoliths into more fine-grained services in a nonproduction lab setting. Making this determination also requires establishing key performance metrics.
"You've got to set some performance and latency metrics that are acceptable to your customer," Sharples advised. "And you've got to make sure the new architecture will meet those."
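That advice can be sketched in code. The example below is a minimal, hypothetical lab check, not anything from the interview: the service names, simulated call times and the 200 ms budget are all illustrative assumptions. It issues two "remote" calls concurrently, so end-to-end latency approaches the slowest call rather than the sum of both, then asserts the result against the latency target agreed with the customer.

```java
import java.util.concurrent.CompletableFuture;

public class LatencyBudgetCheck {
    // Simulated remote service calls; in a real lab test these would be
    // network calls to the candidate fine-grained services. The names and
    // delays are illustrative assumptions.
    static String fetchInventory() { sleep(80); return "inventory"; }
    static String fetchPricing()   { sleep(80); return "pricing"; }

    static void sleep(long ms) {
        try { Thread.sleep(ms); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
    }

    public static void main(String[] args) {
        long budgetMs = 200; // latency metric acceptable to the customer (assumed)

        long start = System.nanoTime();
        // Call the two services concurrently rather than sequentially, so the
        // distributed design pays max(call) rather than sum(calls) in latency.
        CompletableFuture<String> inv   = CompletableFuture.supplyAsync(LatencyBudgetCheck::fetchInventory);
        CompletableFuture<String> price = CompletableFuture.supplyAsync(LatencyBudgetCheck::fetchPricing);
        CompletableFuture.allOf(inv, price).join();
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;

        System.out.println("end-to-end latency: " + elapsedMs + " ms (budget " + budgetMs + " ms)");
        if (elapsedMs > budgetMs) {
            throw new AssertionError("new architecture exceeds the latency budget");
        }
    }
}
```

Run in a nonproduction setting, a check like this makes the "will the new architecture meet the metric?" question a pass/fail result rather than a judgment call.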
Changing middleware management responsibilities
Sharples said that the changing nature of the middleware layer has affected not only the architecture decisions that have to be made, but also decisions regarding who is responsible for governing middleware.
In some cases, he explained, middleware capabilities are provided as services in the underlying platform, in which case managing that middleware layer is the responsibility of the platform itself. Other middleware offerings, however, are delivered as Maven artifacts and embedded in an application, and there the application development team takes on responsibility for management.
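As a concrete illustration of the second case, middleware consumed as a Maven artifact is just another entry in the application's pom.xml. The example below uses Infinispan, Red Hat's in-memory data grid, as a plausible stand-in; the specific artifact and version are illustrative, not taken from the interview.

```xml
<!-- Illustrative only: embedding a middleware capability directly in the
     application as a Maven dependency. Middleware delivered this way is
     versioned, upgraded and managed by the application team, not by the
     underlying platform. -->
<dependency>
    <groupId>org.infinispan</groupId>
    <artifactId>infinispan-core</artifactId>
    <version>9.0.0.Final</version> <!-- example version -->
</dependency>
```

The split Sharples describes falls out of this packaging choice: what ships inside the application artifact belongs to the development team, while middleware exposed as a platform service belongs to whoever owns the infrastructure.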
"In many ways, it's a little bit of a cleaner split we can make between what the [application development] folks own versus the people who own the infrastructure," Sharples said, adding that DevOps plays a key role in terms of solidifying management within any organization.
Access the companion video to this piece, where Sharples discusses the changing nature of middleware and what this means for organizations.