The advent of cloud computing has not only expanded the service-oriented architecture (SOA) vision geographically, it has also driven all of SOA toward a cloud-based application-to-resource relationship. SOA applications, then, are the easiest of all to migrate to the cloud.
Easiest, however, does not mean automatic. By exploring SOA's basic elements, componentization and resource classes, this tip can help put cloud application architects, developers and DevOps managers on the path to a simpler SOA-based cloud migration.
The concept of SOA has been around for more than a decade, driven by a combination of developer desire to create software from reusable components and a business need to customize application behavior to optimize worker productivity.
SOA infrastructure consists of four basic elements: the processor server system; storage; the OS and middleware; and user-to-application mapping and load balancing. Obviously these are the same four elements that make up any IT infrastructure, but SOA changes the way businesses should balance their capabilities in each of these areas to optimize availability and performance while controlling costs. Often the way this is done depends on the SOA architecture model and the way applications are componentized and deployed.
The Open Compute Project offers guidance on hardware design that could be used as a general reference to compare commercial devices, but this may not create the optimum platform for SOA.
The most significant difference between SOA and "atomic" applications is the componentization. Good SOA applications are divided into functional components orchestrated to create applications, and this creates important infrastructure impacts:
- Components may use more specialized resources than entire applications would. The SOA application that does analytics on a database might separate the analysis and the database functions into separate components, one of which would be highly compute-bound and the other highly disk-bound. The separation could allow purpose-built hardware at lower cost.
- Componentized applications create "horizontal" traffic among the components in addition to "vertical" traffic between application and user. This change in traffic patterns affects data center network design, promoting fabrics over switch hierarchies, for example.
- Components can be replicated to increase their overall capacity to do work, which requires tools that assign each unit of work to one of the component copies based on cost/performance policies. This introduces inter-component load balancing.
- Components "close to the user," meaning those related to the user GUI, may be moved geographically closer to users and sited adjacent to the point of activity.
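The componentization effects above can be sketched in code. The following is a minimal illustration (all class and function names are hypothetical, not from any particular SOA framework): a disk-bound "database" component and a compute-bound "analysis" component are separated so that each could be placed on purpose-built hardware, and a single "vertical" user request fans out into "horizontal" calls between them.

```python
# Hypothetical sketch: two SOA components with different resource profiles,
# orchestrated to serve one user request.

class DatabaseComponent:
    """Disk-bound component: would run on storage-optimized hardware."""
    def __init__(self, rows):
        self.rows = rows

    def query(self, predicate):
        # In a real deployment this call crosses the network
        # (horizontal, component-to-component traffic).
        return [r for r in self.rows if predicate(r)]


class AnalysisComponent:
    """Compute-bound component: would run on CPU/GPU-optimized hardware."""
    def average(self, values):
        return sum(values) / len(values) if values else 0.0


def handle_user_request(db, analyzer, threshold):
    """One vertical (user-to-application) request fans out horizontally."""
    rows = db.query(lambda r: r >= threshold)   # call into the database component
    return analyzer.average(rows)               # call into the analysis component


db = DatabaseComponent([5, 12, 7, 30, 22])
print(handle_user_request(db, AnalysisComponent(), 10))  # averages 12, 30, 22
```

Because the two components interact only through narrow interfaces, either one can later be re-hosted, replicated or moved closer to its data without changing the other.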
The net of these factors is that enterprises can best view SOA infrastructure in terms of "resource classes": a small number of system or storage configurations that efficiently support the majority of their SOA components. The set of resource classes will depend on the range of SOA component needs, but it is likely to include the following:
- Database and query servers, designed to support large databases using principles of hierarchical storage. These will likely have very large RAM complements, flash storage capability, fast disk storage I/O interfaces, high-performance network connections and modest compute capabilities.
- Compute and analytic servers, designed to perform complex calculations. These would typically require both ample RAM and a large number of fast processor cores, and possibly even GPUs for accelerated compute functions.
- Distributed servers, to support local processing that is either related to user GUI creation or to event processing. These might be low-cost microservers based on ARM technology rather than traditional x86 devices.
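The resource-class idea above can be made concrete with a small placement sketch. All class names and hardware numbers below are illustrative assumptions, not recommendations: each resource class is a coarse hardware profile, ordered cheapest first, and placement simply picks the first class that satisfies a component's declared needs.

```python
# Hypothetical resource classes, ordered from cheapest to most capable.
RESOURCE_CLASSES = {
    "distributed": {"ram_gb": 16,  "cores": 8,  "gpu": False, "fast_storage": False},
    "compute":     {"ram_gb": 256, "cores": 64, "gpu": True,  "fast_storage": False},
    "db-query":    {"ram_gb": 512, "cores": 16, "gpu": False, "fast_storage": True},
}


def place_component(needs):
    """Return the first (cheapest) resource class satisfying a component's needs."""
    for name, spec in RESOURCE_CLASSES.items():
        if (spec["ram_gb"] >= needs.get("ram_gb", 0)
                and spec["cores"] >= needs.get("cores", 1)
                and (spec["gpu"] or not needs.get("gpu", False))
                and (spec["fast_storage"] or not needs.get("fast_storage", False))):
            return name
    raise ValueError("no resource class satisfies these needs")


print(place_component({"cores": 4}))                  # a GUI component fits "distributed"
print(place_component({"gpu": True, "cores": 32}))    # analytics lands on "compute"
print(place_component({"fast_storage": True}))        # query work lands on "db-query"
```

A real scheduler would weigh cost, locality and current load rather than first fit, but the principle is the same: a handful of classes covers the whole component population.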
The distribution of components, and in particular the replication of SOA components to create more execution paths and improve performance, means that SOA infrastructure will need some form of gateway. There are two options: appliances and "virtual gateways." An appliance, often called a "load balancer" or "Layer 3 switch," directs traffic to an available component based on some scheduling policy. Virtual gateways use SOA directory functions to assign components as needed. The best approach will depend on the nature of the inter-component relationships, and in particular on whether components will in fact be replicated to increase capacity. Gateway appliances are most popular as the link between application users and SOA components; virtual gateways may be a better approach for back-end inter-component flow management, including workflow engine message or service bus flows.
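The gateway behavior described above can be sketched briefly. This is a toy model with hypothetical names, not any vendor's API: work addressed to a replicated component is assigned to one of its copies by a pluggable scheduling policy (round-robin here; a cost/performance policy would slot into the same place).

```python
# Hypothetical "virtual gateway" assigning work among replicated components.
import itertools


class VirtualGateway:
    def __init__(self):
        self.replicas = {}  # component name -> cycling iterator over its copies

    def register(self, component, endpoints):
        """Record the available copies of a component (from an SOA directory)."""
        self.replicas[component] = itertools.cycle(endpoints)

    def dispatch(self, component, request):
        """Scheduling policy: plain round-robin over the registered copies."""
        endpoint = next(self.replicas[component])
        return endpoint(request)


gw = VirtualGateway()
gw.register("analytics", [lambda r: ("copy-1", r), lambda r: ("copy-2", r)])
print(gw.dispatch("analytics", "q1"))  # handled by copy-1
print(gw.dispatch("analytics", "q2"))  # handled by copy-2
print(gw.dispatch("analytics", "q3"))  # wraps back to copy-1
```

A gateway appliance does the same assignment in dedicated hardware at the user-facing edge; the virtual form shown here is better suited to back-end, component-to-component flows.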
On the database side of SOA infrastructure there are also "virtual" and "physical" issues to be considered. Data grid technologies that allow applications to access data that is highly distributed geographically, such as Hadoop, will often distribute applications to multiple nodes to run in parallel, then collect and correlate the results. This creates a need for a hybrid compute and storage node. On the other hand, many "big data" and analytics applications today are based on appliances or special-purpose nodes accessed via a query language. This allows SOA application components to be divorced from the data analytics function, which reduces the need for special data interfaces or CPU and GPU analytics features.
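The distribute-and-collect pattern described above can be shown in miniature. This is a deliberately simplified sketch, not the API of Hadoop or any data grid product: each list stands in for one node's local data partition, the same analysis function is shipped to every partition in parallel, and the partial results are then collected and correlated.

```python
# Toy distribute/collect (scatter-gather) sketch of data grid processing.
from concurrent.futures import ThreadPoolExecutor

PARTITIONS = [      # each sub-list stands in for one node's local data
    [3, 9, 4],
    [12, 1],
    [7, 7, 7],
]


def local_sum(partition):
    """Runs 'next to the data' on each node (the distributed step)."""
    return sum(partition)


def distributed_total(partitions):
    """Fan the work out in parallel, then correlate the partial results."""
    with ThreadPoolExecutor(max_workers=len(partitions)) as pool:
        partials = list(pool.map(local_sum, partitions))
    return sum(partials)


print(distributed_total(PARTITIONS))  # 16 + 13 + 21 = 50
```

The appliance alternative mentioned above hides all of this behind a query language, so the SOA components themselves never need the hybrid compute-and-storage nodes this pattern implies.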
The gateway and database examples here illustrate that SOA infrastructure has to be considered at a virtual or logical level first and at a hardware, software and middleware level second. Effective SOA application design will depend first and foremost on the way in which applications are componentized and orchestrated into orderly workflows. This process is what creates the software elements that will have to be hosted on resources, the requirements for middleware to stitch applications together and the optimum hardware for hosting each component or component class.
Looked at this way, it becomes clear that all SOA applications are evolving toward a cloud-modeled application-to-resource relationship, regardless of whether there is any explicit plan to host the application on a public, private or hybrid cloud. In fact, "cloud SOA" differs from the modern form of SOA primarily in the assumptions it makes about how widely hardware resources are distributed. Most SOA applications today run within a single data center; cloud applications must presume that the resource pool spans multiple data centers and may even be globally distributed. The larger the resource pool supporting the SOA application set, the more important it is to create efficient network connections to carry interprocess (inter-SOA-component) traffic. If the traffic patterns that SOA workflows create are planned and analyzed carefully, the network requirements for SOA can be easily upgraded to support a cloud migration, making well-planned SOA applications the easiest of all applications to migrate to the cloud.