The term serverless generates a lot of discussion. What exactly does it mean, and how can it help developers move from a monolithic architecture to a distributed one? There is also confusion about the respective benefits of containers and serverless architectures. Both are modern approaches to application management, and each has specific benefits.
The best way to understand the difference between containers and serverless architecture is to look at the developer communities around each. Most documentation for Docker's container approach addresses issues that surround how to manage your infrastructure. The tools are designed to help more easily manage underlying hardware or virtual machines, and spread containers across multiple servers or instances in AWS. Documentation that addresses serverless frameworks and activity within the serverless community tends to focus on building serverless applications.
Fundamentally, serverless lets developers focus on writing code. There are still servers somewhere in the stack, but the developer doesn't need to worry about managing those underlying resources. While services like Amazon Elastic Compute Cloud (EC2) require you to provision resources for the OS and the application, a serverless architecture simply asks how many resources a single invocation of your function requires. For example, a web testing suite might require 128 MB of RAM to test any single website. Even if you deploy 10 million copies of that function, each individual one needs only 128 MB, and they can all run at the same time. Serverless focuses on what each individual request requires and then scales automatically.
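To make the per-request model concrete, here is a minimal sketch of the kind of Lambda-style handler described above. The event shape and the check performed are illustrative assumptions, not a real API; a real test suite would fetch and inspect the page.

```python
def lambda_handler(event, context):
    """Test a single website; each invocation needs only its own ~128 MB.

    The event shape ({"url": ...}) is an illustrative assumption. A real
    implementation would issue an HTTP request here; this sketch only
    validates the input so it stays self-contained and offline.
    """
    url = event.get("url")
    if not url:
        return {"statusCode": 400, "body": "missing url"}
    return {"statusCode": 200, "body": f"queued test for {url}"}
```

Because each invocation is independent, the platform can run one copy or millions of copies of this handler without any capacity planning by the developer.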
Approaches to serverless development
There are several different approaches to serverless development. Developers transitioning from a traditional framework, such as Flask, Rails or Express, often choose a serverless framework, such as Chalice for Python or Serverless for Node.js. These frameworks resemble the traditional ones, which eases the transition for those developers.
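Frameworks such as Chalice keep the familiar Flask-style routing while deploying every route behind a Lambda function. The decorator-based dispatch they provide can be sketched in plain Python; this toy stand-in is illustrative and is not the Chalice API itself.

```python
class MiniApp:
    """Toy stand-in for a Chalice/Flask-style app: routes map paths to handlers."""

    def __init__(self):
        self.routes = {}

    def route(self, path):
        # Decorator that registers a handler for a path, Flask/Chalice style.
        def register(handler):
            self.routes[path] = handler
            return handler
        return register

    def dispatch(self, path):
        # In a real serverless framework, this dispatch runs inside the
        # Lambda handler that API Gateway invokes for every route.
        handler = self.routes.get(path)
        if handler is None:
            return {"statusCode": 404}
        return {"statusCode": 200, "body": handler()}


app = MiniApp()

@app.route("/hello")
def hello():
    return "hello, serverless"
```

The appeal is that application code looks unchanged; what differs is that the dispatcher, not a long-running web server, receives each request.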
Unfortunately, there are size and complexity limits to the single-framework approach, so developers who build an old-style, monolithic application will quickly run into issues when they try to migrate that app to serverless. For example, a single AWS Lambda deployment package is limited to about 50 MB zipped. This might seem large, but it also includes all third-party dependencies, as those must be bundled at deployment time.
Additionally, developers who use AWS CloudFormation will discover there is a limit to how complex the APIs can be, so they will need to split an API apart once it has too many endpoints or operations. Furthermore, all of the usual pitfalls of a monolithic service apply: it becomes harder to upgrade, harder to maintain and a single point of failure for the environment. However, with a single function, cold starts are easier to manage.
Microservices are a different approach to serverless development. In this case, you can still use a framework, but split the API into multiple microservices. This approach lets you share code between services via private packages, while the services themselves communicate via AWS Lambda invocations. Consider a company that operates an email marketing system composed of several different microservices. Its application is hosted out of Amazon CloudFront, which uses one service to let users build a template, another to let them pick the recipients and a third that does the actual emailing. Each of those services is itself split into separate microservices. The emailing service first builds a list of recipients in one function. Then, it passes the email template and recipient list to another function, which splits that list and passes each recipient, plus the email, to a third function to do the emailing.
Serverless functions are often chained together, which is a common pattern that can help mitigate the five-minute runtime limit, as well as the 50 MB size limit. In the email marketing system example, the first function, which handles building the recipient list, needs to have access to Amazon DynamoDB to pull down recipients. But it doesn't need to have the code installed to process the email template or send the actual email messages.
The last function, which does the actual emailing, doesn't need access to DynamoDB, but it does need to know how to build the template from the input data. But most importantly, none of these functions need to be exposed via Amazon API Gateway. Instead, that's handled through a separate service which simply takes a user request, authenticates it and then passes it directly along through an AWS Lambda call to the emailer stack.
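The fan-out step in the emailing service described above can be sketched as follows. The function name and payload fields are assumptions, and the boto3 call is shown in a comment because it requires AWS credentials; it illustrates how one function would hand each recipient to the next using an asynchronous invocation.

```python
import json

def fan_out(template, recipients):
    """Split a recipient list into one payload per downstream invocation."""
    payloads = [json.dumps({"template": template, "recipient": r})
                for r in recipients]
    # In AWS, each payload would be passed along to the emailer function, e.g.:
    #   boto3.client("lambda").invoke(
    #       FunctionName="send-email",   # hypothetical function name
    #       InvocationType="Event",      # asynchronous, fire-and-forget
    #       Payload=payload)
    return payloads
```

Keeping each stage this narrow is what lets the recipient-list function skip the emailing code entirely, and vice versa.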
For complex interconnected services, such as the aforementioned email example, developers can choose to use AWS Step Functions instead of connecting Lambda functions together by hand. This builds in additional support for error conditions, adds more retry logic and can automatically handle state and data transfer between functions. Each function is still completely isolated, but the state and transitions are all handled by AWS.
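A Step Functions state machine for the email pipeline might look like the sketch below. The state names and Lambda ARNs are placeholders; the structure follows the Amazon States Language (`StartAt`, `States`, `Next`, `End`), with a `Retry` block standing in for the hand-written retry logic it replaces.

```python
import json

# Placeholder Amazon States Language definition; the ARNs are not real.
definition = {
    "StartAt": "BuildRecipientList",
    "States": {
        "BuildRecipientList": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:build-list",
            "Retry": [{"ErrorEquals": ["States.TaskFailed"], "MaxAttempts": 2}],
            "Next": "SplitRecipients",
        },
        "SplitRecipients": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:split-list",
            "Next": "SendEmail",
        },
        "SendEmail": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:send-email",
            "End": True,
        },
    },
}

# This JSON string is what would be passed to Step Functions when
# creating the state machine, alongside a name and an IAM role.
asl_json = json.dumps(definition)
```

Each Lambda function stays isolated; AWS handles the transitions, retries and data passed between states.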
Tools for debugging serverless architectures
Traditionally, developers could simply log into a server, run the application, tail logs and test input to debug. In a serverless architecture, there is no server to log into, and running locally can be much more complicated. Some AWS plug-ins, such as Serverless Offline and SAM Local, support running a majority of applications offline. However, they don't work as well when an authorization step happens in another repository or when multiple functions need to be chained together. In many cases, developers must run their own stack for development and test, and then push changes to a development AWS account.
There are several tools that can help developers and operations teams identify application problems and track down performance issues. AWS X-Ray can automatically trace issues with calls to other AWS services. For most applications, it only takes a few lines of code to enable X-Ray. It can then show network graphs and point out issues with, for example, provisioned throughput on DynamoDB tables or Lambda concurrency limits. Console logs from both standard error and standard output within a Lambda application are directed to Amazon CloudWatch Logs, which can be ingested into an Amazon Elasticsearch Service domain or accessed directly through the API or the AWS Management Console.
There are also third-party tools and services that help with serverless tracking, such as IOpipe and New Relic. Also, there might be logs from other AWS services, such as API Gateway, that include valuable information when debugging issues. It's more complicated to actively monitor services because traditional monitoring tools, such as Pingdom, don't offer a way to test functions, just API endpoints. As a result, developers must build tools to run tests and expose those via APIs in order to use traditional infrastructure monitoring systems.
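The workaround described above, exposing internal function tests via an API so a traditional monitor like Pingdom can poll them, can be sketched as a health-check handler. The checks themselves are stubbed illustrative assumptions; a real version would exercise actual dependencies.

```python
def healthcheck_handler(event, context):
    """Run internal self-tests and expose the result as an HTTP response
    via API Gateway, so uptime monitors can poll a plain URL."""
    checks = {
        # Each entry would run a real dependency check (DynamoDB reachable,
        # downstream Lambda invocable, ...); stubbed to True in this sketch.
        "dynamodb": True,
        "emailer": True,
    }
    healthy = all(checks.values())
    return {
        "statusCode": 200 if healthy else 503,
        "body": str(checks),
    }
```

Returning 503 when any check fails lets an endpoint-only monitor surface function-level failures it could not otherwise see.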
The serverless tradeoff
Overall, serverless lets development teams focus more on the product and the output of an organization, but it does require more planning to handle testing and monitoring. Organizations that plan to use serverless should first build out a project map that helps them decide whether to use a microservices architecture or rely on a single-function router to handle API requests. If executed correctly, a serverless architecture can save development teams time when they push out new features and can scale nearly infinitely. If developers skip that advance planning and its precautions, they can run into problems down the road.