
The principles of service-orientation part 5 of 6: Service autonomy and statelessness

This is the fifth article in a six-part series dedicated to exploring the common principles of service-orientation. Author Thomas Erl shares his insights into the service-orientation design paradigm by providing excerpts from his second SOA book "Service-Oriented Architecture: Concepts, Technology, and Design," supplemented with additional commentary.

As organizations continue to build enterprise automation logic in the form of services, there is a growing need to increase the reliability and efficiency with which those services operate at runtime. This need is amplified when assembling a service inventory with a large number of reusable services.

Reusable services and concurrency

Reuse is a core part of SOA, so much so that several strategic goals associated with enterprise-wide SOA transitions are directly linked to successfully achieving the repeated reuse of automation logic. As a result, we need to ensure that the services we deliver do not just possess reusable logic, but are also capable of being reused once subjected to the real world.

In general, we are encouraged to foster and maximize reuse opportunities. Each service classified as "reusable" is made available to a potentially large number of consumer programs. The results are predictable. Over time, the same service will need to facilitate the automation of different business processes or tasks and may become part of numerous service compositions.

This eventuality translates into high usage volumes, unpredictable usage scenarios and a runtime condition we are especially interested in preparing for: concurrent access. When a service is accessed at the same time by two or more service consumers, instances of the service are spawned. How this happens is, to a large extent, determined by the vendor runtime platform responsible for hosting the service.

However, there are important steps we can take when designing a service to structure its underlying logic so that it best accommodates concurrent access and other reliability-related concerns.

This leads us to the two design principles we'll be briefly explaining in this article:

  • Services are autonomous
  • Services are stateless

If I'm building a program designed to consume a particular service, I may not know or care about how many others are already using this service (and how many more will use it in the future). I will have an expectation as to what the service is capable of doing, based on what is expressed in its service contract and perhaps in a supplementary SLA. Therefore, I will be relying on the service's ability to provide a predictable level of behavior, performance and reliability, regardless of how else it is being utilized. The application of these principles ultimately helps support a stable and, most importantly, a predictable service inventory.

Service autonomy

Service-orientation brings with it a serious attitude when it comes to decomposition. When building an enterprise service inventory, there is an extreme emphasis on positioning each member of the inventory as a standalone building block.

For services to provide reliable, predictable performance, they must exercise a significant degree of control over their underlying resources. Autonomy represents this measure of control, and this principle emphasizes the need for individual services to possess a high level of it.

By increasing the amount of control a service has over its own execution environment, we reduce dependencies it may otherwise require on shared resources within the enterprise. Even though we cannot always provide a service with exclusive ownership of the logic it encapsulates, our primary concern is that it attains a reasonable level of control over whatever logic it represents at the time of its execution.

Because different measures of autonomy can exist, it can be helpful to distinguish between them. Below we single out two common levels:

Service-level autonomy – Service boundaries are distinct from each other, but the service may still share underlying resources. For example, a wrapper service that encapsulates a legacy environment which is also used independently of the service has service-level autonomy: it governs the legacy system, but shares its resources with other legacy clients.

Pure autonomy – The underlying logic is under complete control and ownership of the service. This is typically the case when the logic is built from the ground up in support of the service.
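To make the distinction more concrete, here is a minimal sketch in Java (the service and class names are hypothetical, invented purely for illustration rather than taken from the book). The first implementation wraps a legacy system that other clients also use directly, giving it only service-level autonomy; the second owns its logic and data outright and is purely autonomous:

    import java.util.HashMap;
    import java.util.Map;

    // Hypothetical service contract; all names here are illustrative only.
    interface CustomerService {
        String getCustomerName(String customerId);
    }

    // Stand-in for a legacy environment that other legacy clients also access
    // directly, outside the service's control.
    class LegacyCrmSystem {
        String fetchName(String customerId) {
            return "legacy-record-for-" + customerId;
        }
    }

    // Service-level autonomy: the service boundary is distinct, but the service
    // delegates to a shared legacy system, so it does not fully control the
    // availability or behavior of that underlying resource.
    class LegacyWrapperCustomerService implements CustomerService {
        private final LegacyCrmSystem sharedLegacySystem;

        LegacyWrapperCustomerService(LegacyCrmSystem sharedLegacySystem) {
            this.sharedLegacySystem = sharedLegacySystem;
        }

        public String getCustomerName(String customerId) {
            return sharedLegacySystem.fetchName(customerId);
        }
    }

    // Pure autonomy: the logic and data were built for, and are owned by, the
    // service itself, so its runtime behavior depends only on its own resources.
    class AutonomousCustomerService implements CustomerService {
        private final Map<String, String> ownedCustomerStore = new HashMap<>();

        AutonomousCustomerService() {
            ownedCustomerStore.put("C1", "Acme Corp");
        }

        public String getCustomerName(String customerId) {
            return ownedCustomerStore.getOrDefault(customerId, "unknown");
        }
    }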

Clearly, it is more desirable for a service inventory to contain purely autonomous services. Not only does it help us deal with scalability concerns, such as concurrent access conditions, it also empowers us to position services more reliably to counter the "single point of failure" risk that often comes with leveraging reusable automation logic. However, because it generally requires that new service logic be created and often demands special deployment considerations, it can impose significant additional expense and effort.

Service statelessness

While autonomy is generally a well-understood aspect of IT, there is often less clarity around what constitutes state information. For this reason, we'll take a bit of time to define state management before discussing this principle.

State refers to a particular condition of something. A car that is moving is in a state of motion, whereas a car that is not moving is in a stationary state. In business automation, it is understood that a software program also has two primary states associated with it:

  • active
  • passive

The first represents the software program being invoked or executed and therefore entering an active state. The other is when the program is not in use and therefore exists in a passive or non-active state.

When we design programs, we are very interested in what happens when they are active. We are so interested, in fact, that we have additional states we apply to the program that represent specific types of active conditions. In relation to our discussion of state management, there are two primary conditions:

  • stateless
  • stateful

These terms are used to identify the active or runtime condition of a program as it relates to the processing required to carry out a specific task. When automating a particular task, the program is required to process data specific to that task. We can refer to this data as state information.

A program can be active but may not be engaged in the processing of state information. In this idle condition, the program is considered to be stateless. As you may have guessed, a program that is actively processing or retaining state information is classified as being stateful.
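As a simple illustration (again using hypothetical names, not examples from the book), the first service below retains state information across several invocations and is therefore stateful for the duration of the activity, while the second receives everything it needs in each request and retains nothing once the response is produced:

    import java.util.ArrayList;
    import java.util.List;

    // Stateful design: the service retains state information (the running order)
    // across several invocations, so it remains stateful for the whole activity.
    class StatefulOrderService {
        private final List<String> currentOrderItems = new ArrayList<>();

        void addItem(String item) {
            currentOrderItems.add(item);      // state held between calls
        }

        double checkout() {
            double total = currentOrderItems.size() * 9.99;
            currentOrderItems.clear();        // state released only at the end
            return total;
        }
    }

    // Stateless design: each invocation carries all of the state information it
    // needs; nothing about the activity is retained once the response is produced.
    class StatelessOrderService {
        double priceOrder(List<String> items) {
            return items.size() * 9.99;       // processed and then discarded
        }
    }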

As the processing demands on services that are reused, composed and concurrently accessed continue to increase, so does the need to optimize service processing logic. When designing service-oriented architectures, state management therefore requires extra attention. The focus on streamlining the management of state information within the architecture is emphasized to the extent that we now have a principle dedicated to this aspect of service design.

This principle states that services should minimize the amount of state information they manage, as well as the duration for which they remain stateful. In a service-oriented solution, state information usually represents data specific to a current service activity. While a service is processing a message, for example, it is temporarily stateful. If a service is responsible for retaining state for longer periods of time, its ability to remain available to other concurrent consumers will be impeded.

As with autonomy, statelessness is a preferred condition for services and one that promotes reusability and scalability. For a service to retain as little state as possible, its underlying logic needs to be designed with stateless processing in mind. Furthermore, the architecture itself needs to be equipped with state deferral extensions that support the application of this principle across a wide range of services.
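The article does not prescribe a particular state deferral mechanism, but one common approach can be sketched as follows (hypothetical names throughout): activity state is handed off to a dedicated store between messages, so the service itself is stateful only while a message is actually being processed:

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Stand-in for a state deferral extension (for example a database, a
    // dedicated state service, or the message itself). Names are illustrative.
    interface ActivityStateStore {
        void save(String activityId, String state);
        String load(String activityId);
    }

    class InMemoryStateStore implements ActivityStateStore {
        private final Map<String, String> store = new ConcurrentHashMap<>();

        public void save(String activityId, String state) {
            store.put(activityId, state);
        }

        public String load(String activityId) {
            return store.get(activityId);
        }
    }

    // The service becomes stateful only for the duration of a single message;
    // between messages, activity state lives in the deferral extension.
    class QuoteService {
        private final ActivityStateStore stateStore;

        QuoteService(ActivityStateStore stateStore) {
            this.stateStore = stateStore;
        }

        void beginQuote(String activityId, String customerId) {
            stateStore.save(activityId, "customer=" + customerId);
        }

        String completeQuote(String activityId) {
            String state = stateStore.load(activityId);  // re-hydrate, process, release
            return "quote for [" + state + "]";
        }
    }

Whether the deferral target is a database, a dedicated state service or the messages themselves is an architectural decision; the point is that the service releases the activity state rather than holding it between invocations, which keeps it available to other concurrent consumers.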

What's next

Statelessness and autonomy go hand-in-hand in service design. Each supports the goals of the other and both ultimately support fundamental goals of SOA. In our final installment of this series, we will highlight some key relationships between service-orientation design principles and explore how the application of this paradigm can be further elevated through the use of service abstraction layers.

This article contains excerpts from "Service-Oriented Architecture: Concepts, Technology, and Design" by Thomas Erl (792 pages, Hardcover, ISBN: 0131858580, Prentice Hall/Pearson PTR, Copyright 2006). For more information, visit www.soabooks.com.

About the author

Thomas Erl is the world's top-selling SOA author and Series Editor of the "Prentice Hall Service-Oriented Computing Series from Thomas Erl" (www.soabooks.com). Thomas is also the founder of SOA Systems Inc., a firm specializing in strategic SOA consulting, planning, and training services (www.soatraining.com). Thomas has made significant contributions to the SOA industry in the areas of service-orientation research and the development of a mainstream SOA methodology. Thomas is involved with a number of technical committees and research efforts, and travels frequently for speaking, training, and consulting engagements. To learn more, visit www.thomaserl.com.


This was last published in June 2006
