Most enterprises and their software architects realize that not all of their applications will move to the public cloud, so hybrid cloud design is of critical importance. Fewer realize that API design can have a major impact on hybrid cloud deployment options and efficiencies. API design always has to address the questions of optimizing the component model and workflow, deciding what parameters to pass, and managing state and workload. To build APIs that are optimal for hybridization, pick a hybrid model that meets your goals, inventory application workflows, and design APIs to facilitate state management and load balancing.
Selecting a hybrid cloud model
While everyone is embracing hybrid clouds, there are multiple hybridization models to consider, each with its own special API issues. Most enterprises that plan hybrid clouds have adopted the Web front-end model of hybridization, where the user access portion of the application is migrated to the public cloud, but accepted transactions are passed to the data center for processing.
The second-most-popular model is the cloud bursting model, where public cloud resources are expected to supplement data center resources during heavy load or in time of failure. The third most common hybrid model is the offloaded analytics model, where cloud applications are used to analyze historical data, including big data.
It's important to know which models need to be supported and how a company prioritizes its interest among them. While all the hybrid models being targeted should be kept in mind, it's best to start with the most important one to set your overall API strategy.
Reviewing API design choices
As is often the case, the first step in analyzing hybrid API options is to assess the workflows of the applications involved. Avoid being trapped in current componentization models; API design should always start with the business process flows, preferably from enterprise architect modeling of those processes. To do this, add in database access flows associated with each of the business processes. This combination will let you decide how information movement and your hybridization model interact.
Experience says that cloud applications are most efficient and consistent in performance when a single component is responsible for accessing database information, or at least when database access is performed by a small, collected group of components rather than scattered throughout the workflow. This focus limits the times when moving a component into the cloud will require crossing the hybrid cloud border for database access, something that's often problematic in terms of performance. Concentrating database access this way creates a virtual record that sums up an application's database needs.
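As a minimal sketch of this pattern (all class, method, and table names here are hypothetical), a single data-access component can own every query and return one "virtual record" that sums up what a transaction needs, so no other component ever has to cross the hybrid border to reach the DBMS:

```python
# Hypothetical sketch: one component owns all database access, so the
# rest of the workflow never touches the DBMS directly.

class VirtualRecord:
    """Aggregates everything a transaction needs from the database."""
    def __init__(self, customer, orders, inventory):
        self.customer = customer
        self.orders = orders
        self.inventory = inventory

class DataAccessComponent:
    """The only component allowed to touch the core DBMS."""
    def __init__(self, db):
        self.db = db  # any object exposing a query(sql, params) method

    def load_virtual_record(self, customer_id):
        # All database access happens here, in one place, in the data center.
        customer = self.db.query("SELECT * FROM customers WHERE id=?", (customer_id,))
        orders = self.db.query("SELECT * FROM orders WHERE customer_id=?", (customer_id,))
        inventory = self.db.query("SELECT * FROM inventory")
        return VirtualRecord(customer, orders, inventory)
```

Components elsewhere in the workflow receive the virtual record as a parameter instead of issuing their own queries, which is what keeps them free to move into the cloud.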
Taking workflow into account
Architects looking to support the Web front-end model should now frame their application with the assumption that the user interactions and GUI will be supported in the cloud. The Web front end will pass data to an application on-ramp component that will then establish the database context. Most applications that use mission-critical core databases will run this component in-house.
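A rough sketch of that handoff, with all function and field names hypothetical, might look like this: the cloud-hosted front end handles the user interaction and forwards the complete request, and the in-house on-ramp component is the first point where database context is established.

```python
# Hypothetical sketch of the Web front-end model: the cloud piece validates
# and forwards the full request; the in-house on-ramp establishes the
# database context for the rest of the workflow.

def cloud_front_end(raw_request):
    """Runs in the public cloud: supports the GUI, then hands off."""
    request = {"user": raw_request["user"],
               "action": raw_request["action"],
               "payload": raw_request.get("payload", {})}
    # In practice this would be a RESTful POST across the hybrid border.
    return on_ramp(request)

def on_ramp(request):
    """Runs in the data center: the first component with database access."""
    context = {"db_session": "core-dbms", "user": request["user"]}
    return {"status": "accepted", "context": context, "action": request["action"]}
```

The key design point is that everything in front of the on-ramp carries no database context at all, so it can live in the cloud without any cross-border data traffic.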
This same approach can be used for the analytics model of hybridization. In most cases, the cloud portion of the application will accept and validate the query, then dispatch it to the application on-ramp where actual database access will take place. If it's possible to host an abstracted or summarized database in the cloud for some inquiries, the separation of queries between that cloud DBMS and the core DBMS is done by the cloud portion of the analytics application.
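The query-separation step can be sketched as a simple routing decision (the table names and the routing rule here are assumptions for illustration): queries the cloud-hosted summary DBMS can answer stay in the cloud, and everything else is dispatched to the on-ramp.

```python
# Hypothetical sketch: the cloud side of the analytics application decides
# whether a validated query can be served from an abstracted cloud copy of
# the data or must be dispatched in-house for core DBMS access.

SUMMARY_TABLES = {"daily_sales", "monthly_trends"}  # assumed cloud-hosted summaries

def route_query(query):
    """Return which DBMS tier should handle the query."""
    if query["table"] in SUMMARY_TABLES:
        return "cloud-dbms"   # answered without crossing the hybrid border
    return "on-ramp"          # dispatched to the data center
```

Every query routed to the cloud summary is one less trip across the hybrid border, which is exactly the performance benefit this model targets.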
Choosing between strategies
The APIs supporting the components hosted persistently in the cloud, as well as the on-ramp component itself, should ideally be RESTful, with the full user request sent to the on-ramp component and the result returned. Once the on-ramp component has been reached, there are two possible strategies -- pass the transaction data model intact among the other components, or select portions of the model to pass.
If only selected portions are passed, a cache DBMS will be needed to store the full model so it's available later. If the model is passed intact, RESTful APIs can be used throughout; where each component is to receive only its specific data elements, it may be beneficial to consider the SOA model for better documentation of each component's data needs.
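The two strategies can be contrasted in a short sketch (the dict-based cache and all names here are stand-ins, not a real cache DBMS): passing the model intact keeps every call self-contained, while passing selected portions requires caching the full model for later steps.

```python
# Hypothetical sketch of the two parameter-passing strategies.

cache_dbms = {}  # stand-in for a real cache DBMS

def call_component_full(component, model):
    """Strategy 1: the whole transaction data model travels with the call."""
    return component(model)

def call_component_partial(component, model, fields):
    """Strategy 2: pass only what the component needs; cache the rest."""
    cache_dbms[model["txn_id"]] = model          # full model kept for later steps
    subset = {k: model[k] for k in fields}
    subset["txn_id"] = model["txn_id"]           # key to retrieve the rest
    return component(subset)
```

The second strategy sends less data per call but introduces a shared dependency on the cache, which matters when components move across the hybrid border.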
The cloud bursting model of hybridization is the most complicated because it presumes that components would move into and out of the cloud based on the current workload and the state of data center resources. This is the area where state control is critical, and the transaction data model previously discussed can be used to manage state and also to serve as a means of distributing work among multiple component instances.
Most architects would agree that it's easier to dynamically move instances of a component or scale horizontally if the API is RESTful. DNS-based load balancing can also accommodate failovers between the data center and the cloud. If state is controlled by the transaction data model, that's all that has to be passed to a component instance for it to operate. Here it may be wise to pass the entire model rather than selected parameters unless the data model is too large. A component that's been moved or newly instantiated may not have ready access to the data model if it was stored by a different component.
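A brief sketch makes the point (instance names and fields are illustrative): because all state rides in the transaction data model, any instance -- in the data center or burst into the cloud -- can handle the next step without access to another instance's local storage.

```python
# Hypothetical sketch: stateless workers for the cloud bursting model.
# All context arrives in the transaction data model itself, so steps can
# be handled by any instance, wherever it happens to be running.

def process_step(instance_name, model):
    """A stateless worker: reads and extends the model, keeps nothing local."""
    model = dict(model)  # no shared mutable state between instances
    model["steps_done"] = model.get("steps_done", 0) + 1
    model["last_instance"] = instance_name
    return model

# A step handled in-house can be followed by one handled by a burst instance:
m = {"txn_id": "t1"}
m = process_step("datacenter-1", m)
m = process_step("cloud-burst-7", m)
```

Because no step depends on where the previous one ran, new instances can be spun up in the cloud under load and retired afterward without any state migration.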
A good point to keep in mind when designing hybrid cloud APIs is that application agility and resource efficiency are both driving changes in this area. That means highly flexible strategies are likely to be best. A single transaction data model may be a good way to achieve that flexibility and reduce the risk that new issues will force API changes that are expensive and time-consuming to make.
About the author:
Tom Nolle is president of CIMI Corp., a strategic consulting firm specializing in telecommunications and data communications since 1982.