Application lifecycle management (ALM) has never been easy, but every enterprise and software architect knows how fast security, compliance and simple utility can deteriorate if software development and change cycles aren't controlled. Modern trends toward multi-component applications and component reuse have complicated ALM by pulling application components in different directions at the same time. This makes testing and coordinated deployment almost impossible using conventional means.
Service virtualization is a way to simulate component functionality so ALM isn't stymied by unresolved dependencies and unexpected collisions, but it's not foolproof. Architects need to:
- Understand what service virtualization solves
- Plan ALM and componentization to optimize service virtualization utility
- Think in terms of evolution
- Beware of the "never-convergence" problem
Virtually all current ALM tools and practices are designed for linear, non-colliding, application change processes. As component reuse becomes commonplace and applications make more use of virtual resources, legacy approaches will fail.
ALM and testing scenarios
Even application development is affected by this problem -- it's often impossible to test components because their dependencies can't be satisfied yet. Pure component unit testing is rarely adequate because it tends to reflect the developer's own view of interfaces and functionality, and testing a developer's work against that same developer's assumptions proves little.
Service virtualization effectively wraps service-oriented component development in a virtual environment that can satisfy dependencies without requiring partner components to be present. A component inside a service virtualization shell can be tested without constructing a complete, functional and running implementation of the application(s) that use that component.
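The idea can be illustrated with a minimal sketch. Here a hypothetical order component depends on an inventory service that doesn't exist yet; a stub that implements the same interface satisfies the dependency so the component can be tested on its own. All names (`InventoryServiceStub`, `OrderComponent`) are illustrative, not from any particular product.

```python
# Minimal service virtualization sketch: a simulated inventory service
# satisfies the order component's dependency, so the component can be
# tested without the real partner service running. Names are hypothetical.

class InventoryServiceStub:
    """Simulates the inventory service's interface with canned test data."""
    def __init__(self, stock):
        self._stock = stock  # e.g. {"widget": 5}

    def available(self, sku, qty):
        return self._stock.get(sku, 0) >= qty

class OrderComponent:
    """The component under test; it sees only the service interface."""
    def __init__(self, inventory):
        self._inventory = inventory  # stub now, live module later

    def place_order(self, sku, qty):
        if not self._inventory.available(sku, qty):
            return "rejected"
        return "accepted"

# Exercised entirely inside the virtual shell:
orders = OrderComponent(InventoryServiceStub({"widget": 5}))
print(orders.place_order("widget", 3))   # accepted
print(orders.place_order("widget", 99))  # rejected
```

Because the stub is injected through the constructor, swapping in the real inventory service later requires no change to the component under test.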
If service virtualization scenarios are constructed independently of component development, there is far less risk of self-fulfilling test practices hiding problems. Furthermore, testing can be more thorough and even proceed in parallel on multiple applications that share components.
In implementation terms, service virtualization means building a simulation framework that can resolve dependencies at the component level; generate test data at target volumes; and support result analysis, load measurement and response timing. If any of these capabilities is missing, the service virtualization shell won't properly test the application.
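Those three capabilities can be sketched in a few lines: synthetic data generation at a chosen volume, a driver that records per-request response times, and a simple result summary. The request schema and the stand-in component are assumptions for illustration only.

```python
import random
import time

# Sketch of the harness capabilities listed above: test data at a
# target volume, load driving, and response-timing analysis. The
# request fields and the component are hypothetical stand-ins.

def generate_requests(volume, seed=42):
    """Produce `volume` synthetic requests (illustrative schema)."""
    rng = random.Random(seed)
    return [{"sku": rng.choice(["a", "b", "c"]), "qty": rng.randint(1, 10)}
            for _ in range(volume)]

def run_load(component, requests):
    """Drive the component and capture per-request response times."""
    timings = []
    for req in requests:
        start = time.perf_counter()
        component(req)
        timings.append(time.perf_counter() - start)
    return timings

def summarize(timings):
    """Result analysis: a simple latency summary."""
    return {"count": len(timings),
            "avg_s": sum(timings) / len(timings),
            "max_s": max(timings)}

reqs = generate_requests(volume=1000)
stats = summarize(run_load(lambda r: r["qty"] * 2, reqs))
print(stats["count"])  # 1000
```

A real framework would add concurrency and richer metrics, but the shape is the same: data generation, execution, measurement.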
The traditional way to ensure simulation accuracy in service virtualization is to derive the prototype framework from enterprise architecture (EA) process outputs, independent of development. Service virtualization architects should represent the user or EA activities, and derive workflow content and volume information from the business processes coupled to the components.
The information content of these flows is then tracked through component interfaces along the workflow. This provides the data relationships needed to create an accurate simulation without having component developers bias the process with their own expectations.
Using a transformation diagram
Some architects already recognize the value of a transformation diagram that illustrates the relationships between primary data supplied by business processes and primary output returned to them. These diagrams show how a source field relates to destination fields created by reference to, or processing of, that primary data, and they identify the process that creates each link or transformation in the diagram.
Transformation diagrams are valuable when creating consistent simulations. When a diagram shows a single component responsible for many transformations, with information flowing out of and back into the same component(s), it represents a situation that will be hard to simulate effectively. Heavy recursion in workflows is usually a warning to rethink componentization into a more logical, step-wise transformation from input to output.
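A transformation diagram can be modeled as a directed graph whose edges map a source field to a destination field, labeled with the process that performs the transformation. The recursion warning above then becomes a cycle check. This is a sketch under assumed field and process names, not a prescribed notation.

```python
# Sketch: a transformation diagram as (source_field, dest_field, process)
# edges. A cycle means data flows out of and back into the same fields,
# which is the hard-to-simulate recursion the text warns about.
# All field and process names are illustrative.

def has_cycle(edges):
    """Detect recursion in the field-to-field transformation graph."""
    graph = {}
    for src, dst, _process in edges:
        graph.setdefault(src, []).append(dst)
    visited, in_stack = set(), set()

    def visit(node):
        if node in in_stack:
            return True          # back edge: recursion found
        if node in visited:
            return False
        visited.add(node)
        in_stack.add(node)
        if any(visit(nxt) for nxt in graph.get(node, [])):
            return True
        in_stack.discard(node)
        return False

    return any(visit(n) for n in list(graph))

# Step-wise input-to-output flow: no recursion.
linear = [("order.qty", "invoice.qty", "billing"),
          ("invoice.qty", "ledger.amount", "posting")]

# Output feeding back into its own input: rethink the componentization.
recursive = linear + [("ledger.amount", "order.qty", "reconciliation")]

print(has_cycle(linear))     # False
print(has_cycle(recursive))  # True
```

Running the check early, against the diagram rather than the code, flags workflows that will be hard to simulate before any stubs are built.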
Service virtualization components have to be treated as peers of the real ones. It should be not only possible, but easy, to replace the service virtualization simulation of a component or interface with a live module. Component testing using service virtualization then evolves into multi-component testing as simulated components and interfaces are replaced by their real equivalents. Thinking this way also simplifies creating and maintaining service virtualization, because it practically forces the development of a complete, yet modular, framework.
The complete/modular combination is also the best defense against the most insidious of all service virtualization problems -- never-convergence, where service virtualization testing never converges on system testing. If an error occurs in creating the service virtualization, the problem may be caught only when pilot testing of the complete application or set of applications is underway.
The whole value of service virtualization is to virtualize the component and user elements of workflows. Any virtual or simulated component should be replaceable by the real thing when it becomes available. That makes service virtualization testing naturally convergent on the pilot test process, and allows it to transition into normal ALM control.
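One way to make that replacement easy is a small registry that resolves each dependency by name: the harness binds a stub first, and the live module rebinds the same name later without touching the consumers. The registry and component names below are assumptions for the sketch, not a specific product's API.

```python
# Sketch of convergence: simulated and real components share one
# interface, so a registry can swap a stub for a live module one
# binding at a time. Names are hypothetical.

class ComponentRegistry:
    """Resolves dependencies to either a stub or a live module."""
    def __init__(self):
        self._bindings = {}

    def bind(self, name, component):
        self._bindings[name] = component  # a later bind replaces the stub

    def resolve(self, name):
        return self._bindings[name]

registry = ComponentRegistry()
registry.bind("pricing", lambda sku: 9.99)  # simulated pricing service

# ...later, the real module replaces the simulation transparently:
registry.bind("pricing", lambda sku: {"w1": 12.5}.get(sku, 0.0))

print(registry.resolve("pricing")("w1"))  # 12.5
```

Consumers always call `resolve("pricing")`, so testing converges on the pilot configuration simply by rebinding names as real modules come online.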
Any simulation is useful only to the extent that it's accurate and functional. The former depends on relentless coordination between EA business-process input/output elements and the service virtualization simulations; the latter depends on representing each element properly in the transformation diagram's data-driven view of the application. Get all of this right, and service virtualization will contribute not only to effective ALM, but to effective application design.
About the author:
Tom Nolle is president of CIMI Corporation, a strategic consulting firm specializing in telecommunications and data communications since 1982.