Cloud computing has a lot to offer a service-oriented architecture. Provisioning resources on demand can cut hardware costs and increase scalability, multi-tenancy can make better use of shared infrastructure and, on some platforms, you never even have to own the hardware you run on.
But to get the most bang for your buck out of cloud computing, your application infrastructure needs to be optimized for these sorts of architectures, said Paul Fremantle, CTO at open source middleware provider WSO2. In other words, the application infrastructure needs to be designed for a cloud environment rather than packaged up and ported onto it.
In a recent blog post, Fremantle laid out some of the qualities he feels applications and middleware need to be considered “cloud native.” First on the list was that they need to be distributed:
It must be able to have multiple nodes running concurrently that share a configuration and share any session state, as well as logging to a central log, not just dumping log files onto a local disk. Another way of putting this is that it is clusterable.
In addition, he wrote that cloud-based systems should scale based on load, handle multiple tenants and be incrementally deployed and updated.
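The "clusterable" property Fremantle describes can be made concrete with a minimal sketch: session state lives in a shared store rather than in any one process's memory, and all nodes log through a common handler instead of dumping files to local disk. This is an illustrative assumption, not code from the article; the in-memory `SharedSessionStore` stands in for whatever external store (a database, a distributed cache) a real cluster would use.

```python
import json
import logging

# Central logging: every node logs through the same configured handler
# rather than writing to a node-local file.
logging.basicConfig(format="%(asctime)s node=%(name)s %(message)s")

class SharedSessionStore:
    """Stand-in for an external session store shared by all nodes.

    In a real deployment this would be backed by a network service;
    here a dict keeps the sketch self-contained.
    """
    def __init__(self):
        self._data = {}

    def get(self, session_id):
        raw = self._data.get(session_id)
        return json.loads(raw) if raw else {}

    def put(self, session_id, state):
        # Serialize state so it never depends on one process's memory.
        self._data[session_id] = json.dumps(state)

class AppNode:
    """One of many identical nodes sharing configuration and session state."""
    def __init__(self, name, store):
        self.store = store
        self.log = logging.getLogger(name)

    def handle(self, session_id):
        # Any node can serve any request, because state is external.
        state = self.store.get(session_id)
        state["hits"] = state.get("hits", 0) + 1
        self.store.put(session_id, state)
        self.log.info("session %s hits=%d", session_id, state["hits"])
        return state["hits"]

store = SharedSessionStore()
node_a, node_b = AppNode("a", store), AppNode("b", store)
node_a.handle("s1")          # first request lands on node a
count = node_b.handle("s1")  # next request lands on node b; state carries over
print(count)  # 2
```

Because neither node holds state locally, nodes can be added or removed under load without losing sessions, which is what makes scaling on demand and incremental deployment practical.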
This is something I’ve been hearing about for some time and from a number of sources. If you take your existing applications and put them on Amazon EC2 virtual machines, doing only the rework required to get them operational, all you’ve done is eliminate hardware costs. The applications themselves keep running much as they always have.
Cloud computing offers an opportunity to write modular applications for a system that never needs to go offline and that can grow and shrink as needed. While anyone can virtualize their hardware, the real business advantage may lie in rewriting applications optimized for these new distributed cloud environments.