Because orchestration is so fundamental to SOA applications, its details have a major impact on application performance. This tip offers guidance on minimizing protocol conversion in message exchanges, using design patterns to aggregate components, adding appliances to SOA applications and tuning the network for best response times.
Service-oriented architecture (SOA) principles have been recognized for almost two decades, but with the expanded interest in cloud and the as-a-service drive of mobile apps, SOA is reaching further than ever. That extended scope naturally brings SOA into applications that are more performance-sensitive, and here SOA principles of componentization and loose coupling collide with real-time principles of efficient interfaces and quick response times.
There are some general rules for balancing SOA performance and SOA principles, but special attention is required for applications whose logic is created by orchestration of distributed components. This is the very model the cloud is most likely to generate, particularly as mobile applications rely on logic offloaded from the device to the cloud.
In SOA applications, "orchestration" is typically used to define the process of building application logic flows by stitching components of the application together through a workflow engine, message bus or service bus. Many enterprise transaction processing applications are built on orchestration, most often through an enterprise service bus (ESB).
Fully featured ESB implementations include message queuing and management to deal with components whose availability may be transient or where there could be multiple instances of a component available. They also provide a process definition language (such as the Web Services Business Process Execution Language, or WS-BPEL) and interface and protocol conversion that allow components to be dynamically linked without having compatible native interfaces. Obviously this contributes to considerable flexibility, particularly when multiple organizations are involved.
The challenge of orchestration in high-performance applications is the overhead this flexibility creates. Some users have reported transaction processing times in the tens of minutes when extensive componentization was combined with the need for protocol and interface conversion and network connections to the application components. Where application performance is critical, this is unlikely to be acceptable, and you'll need to take steps to reduce orchestration overhead.
The first step is to ensure that the workflow message format and the expected component interface formats are fully compatible. This eliminates the need for protocol and format conversion, which is often the most time-consuming activity. Where all application components are provided by the same organization, this is easy, but where packaged software or multiple organizations are involved, some tuning of component interfaces may be required. Try using the Adapter design pattern to harmonize interfaces, but test the result to ensure that it's actually faster than the message-bus conversion available.
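To illustrate the Adapter approach, here is a minimal sketch in Python. All the names (LegacyInventoryService, InventoryPort and so on) are hypothetical stand-ins for a packaged component and the interface a workflow engine expects; the point is that the translation happens once, in code, rather than per-message on the bus.

```python
class LegacyInventoryService:
    """Packaged component with a native interface we can't change."""
    def fetch_stock(self, sku_code: str) -> dict:
        return {"sku": sku_code, "qty": 42}


class InventoryPort:
    """Interface the workflow engine expects components to expose."""
    def query(self, item_id: str) -> int:
        raise NotImplementedError


class LegacyInventoryAdapter(InventoryPort):
    """Adapter: translates the workflow's call into the legacy call,
    removing the need for format conversion on the message bus."""
    def __init__(self, legacy: LegacyInventoryService):
        self._legacy = legacy

    def query(self, item_id: str) -> int:
        return self._legacy.fetch_stock(item_id)["qty"]


adapter = LegacyInventoryAdapter(LegacyInventoryService())
print(adapter.query("SKU-1"))  # 42
```

As the tip notes, the adapter is only a win if its translation is cheaper than the conversion the bus would have performed, so measure both paths.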
The next step is to aggregate some components into a higher-level unit of logic where inter-component interfaces can be direct rather than message-based. Where little flexibility is gained by separating three or four components and passing messages between them, use a simple application to link these components into a single unit and then link that unit to the message service bus. This is particularly useful if the workflow involves several iterations of a group of components.
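A sketch of that aggregation, with three illustrative (hypothetical) components: each was formerly a separate bus endpoint, but here they are chained by direct in-process calls and only the combined unit faces the bus, so no serialization happens between the steps.

```python
def validate(order: dict) -> dict:
    # Formerly a separate bus endpoint
    return {**order, "valid": order["qty"] > 0}


def price(order: dict) -> dict:
    # Formerly a separate bus endpoint
    return {**order, "total": order["qty"] * order["unit_price"]}


def reserve(order: dict) -> dict:
    # Formerly a separate bus endpoint
    return {**order, "reserved": order["valid"]}


def process_order(order: dict) -> dict:
    """Single aggregated endpoint registered with the service bus.
    The three steps run in-process with direct calls, so repeated
    iterations of the group never touch the message bus."""
    return reserve(price(validate(order)))


result = process_order({"qty": 3, "unit_price": 10.0})
print(result["total"], result["reserved"])  # 30.0 True
```

The trade-off is exactly the one the tip describes: the three steps can no longer be recomposed through the bus, so reserve this technique for groups whose relationships are stable.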
The limiting case of this approach is to write a "connecting" application that links all the SOA components directly and eliminates the message service bus completely. This sacrifices the flexibility of a bus, but if the application's inter-component relationships are fairly static it's a way of significantly reducing the orchestration overhead by eliminating the orchestration.
A third strategy is to employ appliances for tasks like database queries and analytics to replace discretely coded components. Often appliances will accept "scripts" that can be compiled and loaded into the appliance for activation on demand. This is normally much faster than developing a component-based analytics or database application, and because the message input or output to an appliance is usually a simple request and response, it may also reduce communications overhead associated with more extensive interactions among discrete components.
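The interaction pattern is worth sketching, because it explains the communications saving: the script is loaded once, and each use is then a single request and response rather than a multi-step exchange among discrete components. The Appliance class below is a generic stand-in, not any real product's API.

```python
class Appliance:
    """Stand-in for a query/analytics appliance that accepts
    preloaded scripts for activation on demand."""
    def __init__(self):
        self._scripts = {}

    def load_script(self, name, fn):
        # "Compile and load" a script once, ahead of time
        self._scripts[name] = fn

    def run(self, name, request):
        # One request in, one response out -- no multi-step chatter
        return self._scripts[name](request)


appliance = Appliance()
appliance.load_script("avg_order", lambda rows: sum(rows) / len(rows))
print(appliance.run("avg_order", [10, 20, 30]))  # 20.0
```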
The next strategy to reduce orchestration overhead is to optimize the choice of message service bus implementation. ESB is an architecture model, not a standard or implementation; it is assembled from underlying message bus elements, and multiple implementations of each element are available. By picking only the elements you really plan to use and by testing implementations for performance, it's possible to cut orchestration overhead by 50% or more.
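Testing implementations for performance needn't be elaborate. A minimal harness like the following, which times a batch of exchanges through two candidate transport paths (here, an in-memory handoff versus one that round-trips every message through a JSON format conversion, as an assumed example), is enough to compare options before committing to one.

```python
import json
import time


def direct_send(msg):
    # In-memory handoff: no format conversion
    return msg


def converting_send(msg):
    # Round-trip format conversion on every message
    return json.loads(json.dumps(msg))


def benchmark(send, n=5000):
    """Time n request/response exchanges through a candidate path."""
    start = time.perf_counter()
    for i in range(n):
        send({"seq": i, "payload": "x" * 64})
    return time.perf_counter() - start


t_direct = benchmark(direct_send)
t_convert = benchmark(converting_send)
print(f"direct: {t_direct:.4f}s  converting: {t_convert:.4f}s")
```

Run the same harness against each bus configuration under consideration, with message sizes and volumes representative of production.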
The final strategy in managing orchestration overhead is to improve the performance of the connecting network. To do this effectively it will be necessary to plot the message flows in an orchestrated application to spot flows that appear multiple times or at critical points in processing. The goal is to identify exchanges that might be contributing to overall delay and then reduce the delay by moving components "closer" in a network sense.
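Plotting the flows can start from nothing more than a log of source/destination pairs. The sketch below (with hypothetical service names) counts how often each flow occurs; the most frequent flows are the first candidates for moving components "closer" together.

```python
from collections import Counter

# Hypothetical message log: one (source, destination) pair per exchange
message_log = [
    ("order-svc", "pricing-svc"),
    ("pricing-svc", "tax-svc"),
    ("order-svc", "pricing-svc"),
    ("order-svc", "pricing-svc"),
    ("pricing-svc", "inventory-svc"),
]

flow_counts = Counter(message_log)

# Flows that occur most often contribute most to cumulative delay,
# so they are the best colocation candidates
hot_flows = flow_counts.most_common(2)
print(hot_flows)
```

In a real application the log would come from bus instrumentation or network traces, and each flow's count would be weighted by its measured latency.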
Remember that colocating components on the same host (where they communicate through a virtual switch) or in the same data center LAN will result in better performance than distributing components over a WAN connection. Be careful with security and access control during these moves; sometimes a new host has a different access and security profile, and compliance problems can occur if this isn't accommodated.
Orchestration of application functionality by passing work among components is a form of abstraction, and all abstraction poses the risk of hiding performance-significant details from view. Any changes made to application componentization or to workflow can have profound impacts on performance, and so it's critical to test every change in advance with sufficient data volumes to make the test a realistic parallel of production behavior. Adding this requirement to application lifecycle management goals is essential in keeping high-performance SOA applications working efficiently.