Over the past few years, ZapThink has participated in and observed quite a few service-oriented architecture (SOA) deployments and roll-outs. While some of these deployments were more Web services integration than truly Service-Oriented, and while some were more successful than others, the act of simply iterating the architecture and putting services into production is forward progress for most companies. What these companies quickly realized is that creating services and deploying infrastructure to run them is actually the easy part. Successfully getting users to consume those services in a reliable manner, all while supporting continuous change, is the hard part.
ZapThink has written numerous times about the challenges of putting services into production with proper Governance, Quality, and Management, but in this ZapFlash, we'd like to focus on a more basic issue: that of service performance. Just like the terms "security" and "quality", performance is an abstract term. It can mean many different things to many different people. And this is where the trouble starts.
To operations and networking individuals, performance is a matter of guaranteeing up-time, managing utilization of resources, and keeping Services secure and running without glitches. To business analysts and project managers, service performance is a matter of making sure that the processes are performing according to the specifications. To developers, service performance is a matter of making sure that the functional requirements are being met. To the business, service performance is a matter of meeting key performance and agility indicators. And so, to properly define service performance, we have to look at the concept from all these perspectives.
The first thing that comes to mind when most think about service performance is the aspect of operational performance. Operational performance is how a service, any service, behaves in the IT environment. Operational behavior includes the uptime and availability of the service as well as resource utilization and distribution of load across multiple service implementations. In this regard, we can use traditional systems management approaches to monitor, measure, and manage service operational performance.
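As a minimal sketch of what monitoring operational performance entails (all names here are hypothetical, not part of any particular management product), the core task is aggregating raw service call records into per-service availability and latency figures:

```python
from dataclasses import dataclass

@dataclass
class CallRecord:
    service: str        # logical service name
    latency_ms: float   # round-trip time observed by the monitor
    succeeded: bool     # did the call complete without a fault?

def operational_metrics(records):
    """Aggregate raw call records into per-service availability and latency."""
    metrics = {}
    for r in records:
        m = metrics.setdefault(r.service, {"calls": 0, "ok": 0, "total_ms": 0.0})
        m["calls"] += 1
        m["ok"] += r.succeeded          # bool counts as 0 or 1
        m["total_ms"] += r.latency_ms
    return {
        name: {
            "availability": m["ok"] / m["calls"],
            "avg_latency_ms": m["total_ms"] / m["calls"],
        }
        for name, m in metrics.items()
    }
```

Traditional systems management tools gather exactly this kind of data per endpoint; the SOA twist, discussed next, is that these figures must be rolled up across multiple distributed implementations of the same logical service.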
However, SOA introduces a big twist in operational performance. As we have discussed numerous times on the topic of SOA Management, service uptime is impacted not only by the availability of a service to respond to requests, but also by the metadata that controls service behavior. In addition, there might be multiple, distributed implementations of a service throughout the network, and so managing service operational performance becomes more like managing a service "grid" than individual, discrete services.
On top of this distributed, loosely-coupled complexity, we have to consider that security in a SOA environment is much more complex and distributed. This means that network administrators have to consider policy management as an aspect of service performance.
High operational-performance services should introduce no additional latency, usage load, or security gaps, to be sure. But thinking about services being consumed in a continuously changing environment requires network administrators and operational staff to rethink the concepts of Service Level Agreement (SLA) and Quality of Service (QoS) as well. In this case, the word "service" in SLA and QoS truly means a service in the SOA context. So, rather than a Service Level Agreement broadly applying to any kind of service, we're looking at a Service Level Agreement as applied to services in the SOA context. Perhaps this could be called a SOA-Level Agreement?
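One way to picture such a "SOA-Level Agreement" is as policy metadata evaluated against observed metrics, rather than thresholds hard-coded into each service. The sketch below assumes hypothetical policy fields and metric names for illustration only:

```python
# Hypothetical SOA-Level Agreement: thresholds expressed as metadata that
# operations staff can change without touching any service implementation.
sla_policy = {
    "max_avg_latency_ms": 200.0,
    "min_availability": 0.999,
}

def sla_violations(metrics, policy):
    """Return the list of agreement clauses a service currently violates."""
    violations = []
    if metrics["avg_latency_ms"] > policy["max_avg_latency_ms"]:
        violations.append("latency")
    if metrics["availability"] < policy["min_availability"]:
        violations.append("availability")
    return violations
```

The design point is that the agreement lives outside the service: changing the policy dictionary changes the performance contract without redeploying anything.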
Furthermore, instead of thinking of Quality of Service in the broad, perhaps we can think about Quality of SOA in the operational context? Indeed, while there's been much discussion about SOA for almost a decade, we still have not fully defined these terms from a SOA perspective, and this is what makes trying to understand and optimize service performance from an operational perspective still a challenge for most IT organizations.
Knowing whether or not a service is responding to requests or overloading infrastructure resources is not enough to know whether a service is performing. From the perspective of developers and implementation-focused architects, a service is performing if it provides the results the consumer expects. This means applying SOA Quality in the way we have defined it in previous ZapFlashes.
But even testing and continuously validating services against continually changing business requirements and the metadata-controlled environment is not enough to guarantee the functional performance of a service. High functional-performance services meet the meta-requirement of agility, meaning that we can find new uses for existing services without requiring new development. Architects need to measure and manage the Agility Model for each service to guarantee that it continues to provide high functional performance. And remember, a service is defined more by its metadata than by its implementation. This means that organizations should measure the functional performance of their services' metadata more than that of the implementations to guarantee continuously high-performing services from a functional perspective.
Yet, the power of SOA comes not from discrete, atomic services, but rather from the composition of those services to satisfy the requirements of business processes. In this light, operational and functional performance matters only if the business process as a whole is performing for the enterprise. So, what do we mean by SOA process performance?
A process is characterized by high performance not only if it operationally and functionally meets the process requirements (as we defined above), but also if the process itself can exhibit high performance characteristics. This means that architects and business analysts should be able to optimize service-oriented business processes and choose among different process definitions to find the one that most optimally meets the business requirements. A high process-performing SOA has business processes that architects can easily reconfigure and recompose as well as measure through process-specific performance indicators.
Clearly, what makes one SOA process higher performance than another is not simply that it consists of services, but rather that architects define and manage it in a way that allows the process to evolve along with continually changing business requirements. This means that architects should create Process-Level Agreements (PLAs) that reinforce, but are separate from, the SLAs defined in the operational and functional contexts above. An example of a PLA might be that users must be able to dynamically reconfigure the process in the case of some exception within a certain amount of time. Other PLAs can define user involvement in parts of the process or the behavior of long-running, asynchronous processes.
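The reconfiguration example above can be expressed as a simple time-bounded check. This is a minimal sketch under assumed names (the PLA field and timestamps are hypothetical), meant only to show that a PLA is as checkable as any SLA:

```python
# Hypothetical Process-Level Agreement: after an exception, the business
# process must be reconfigured within a bounded window (here, in seconds).
pla = {"max_reconfigure_seconds": 3600}

def pla_met(exception_time, reconfigured_time, pla):
    """True if the process was reconfigured within the PLA window.

    Times are seconds since some common epoch; only their difference matters.
    """
    return (reconfigured_time - exception_time) <= pla["max_reconfigure_seconds"]
```

For instance, a reconfiguration completed 30 minutes after the exception satisfies this one-hour PLA, while one completed two hours later does not.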
From the business perspective, the only aspect of performance that really matters is whether the IT ecosystem is meeting current business requirements. At this level of performance granularity, a high business-performing SOA is one that satisfies three business "TLAs" (three-letter acronyms): Key Performance Indicators (KPIs), Key Agility Indicators (KAIs), and Return on Investment (ROI).
Much has been written on the topic of KPIs, so to make this more specific to the SOA story, one should understand that businesses don't really care about architecture; they care about whether their core needs are being met. This might include the number of customer support calls handled per hour or per support rep, the amount of revenue generated per month or per customer, the amount of time it takes to manufacture or distribute products, the number of deals won against competitors, etc. Each of these is a measurable KPI that architects can easily instrument within an SOA. In this vein, a high business-performing SOA is one that not only meets current KPIs, but easily enables the business to create new KPIs and measure those without any extra cost or latency.
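The "calls handled per hour" example above shows why such KPIs are cheap to instrument in an SOA: they can be derived from the same event stream the services already emit. The event names below are hypothetical, chosen only for illustration:

```python
# Hypothetical KPI derived from a service invocation event stream:
# support calls handled per hour over some observation window.
def calls_per_hour(events, window_hours):
    """events: list of (timestamp, event_type) tuples from the service bus."""
    handled = sum(1 for _, kind in events if kind == "call_handled")
    return handled / window_hours
```

Defining a new KPI then means adding another small aggregation over the existing events, not building new data-collection plumbing, which is the "no extra cost or latency" point above.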
The concept of KAIs is newer, and relates to the qualitative capacity of a business to change. Examples of KAIs might be the rate at which a company can introduce new products, the speed at which HR can provision new employees, or the amount of time and cost involved in adding new business partners to the supply chain. A high business-performing SOA must meet the KAIs, and one can reasonably surmise that service-orienting a business should in general make KAIs easier to meet.
Of course, to C-level executives, the only measurement that matters is the business bottom line. The ROI metric measures the business return on any investment, and a high business-performing SOA must be able not only to exhibit a positive ROI itself, but also to measure the ROI of the rest of the activities in the organization. In this way, one can say that a high business-performing SOA is profitable and makes sure the rest of the business is profitable, too.
The ZapThink Take
Dr. Peter F. Drucker, the often-quoted "father of modern management," once said, "you can't manage what you can't measure." This is just as true of SOA performance as it is of every other aspect of the business. What makes SOA performance unique is that, just like other areas of SOA, performance brings together aspects of IT that have previously been considered separate concepts. Service performance is at the same time loosely coupled, in that one performance criterion should not have any impact on any other, and composite, in that we can combine the different aspects of service performance to achieve an overall behavior for the whole system.
In this manner, enterprise architects need to perceive service performance from all the viewpoints defined in this ZapFlash. From this perspective, high performance SOAs are low-latency, high-availability, high-quality, functionally-relevant, continuously configurable, business-relevant, and profitable. If you can bring all the aspects of service performance together and make it all work, you'll make your SOA valuable and successful.