Recently, I was able to be part of a fully fledged AWS serverless project implementation at 1 Billion Tech. In this project, we used the Serverless Framework as the primary deployment framework. Here are some of the best practices we followed while working on the project.
- Use serverless plugins. Do not reinvent the wheel.
While developing your serverless application, you may come across features that the Serverless Framework does not provide natively. For example, let’s say you want to run your serverless application in your local environment (your development PC). Since this is not supported natively by the Serverless Framework, you may think you need to write scripts to emulate and set up the serverless application in your local development environment.
This task can take a significant amount of your development time. But if you search the Serverless Framework plugin library, you can find a plugin called serverless-offline. This plugin emulates AWS Lambda and API Gateway on your local machine. In addition, there is a plugin called serverless-dynamodb-local, which allows you to run DynamoDB locally. If you combine these two plugins, you can set up most of your serverless app components locally.
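As a sketch, the two plugins might be wired into serverless.yml like this (the stage and port values are illustrative, and both plugins need to be installed via npm first):

```yaml
# npm install --save-dev serverless-offline serverless-dynamodb-local
plugins:
  - serverless-dynamodb-local
  - serverless-offline        # keep serverless-offline last in the plugin list
custom:
  dynamodb:
    stages:
      - dev                   # only start local DynamoDB for the dev stage
    start:
      port: 8000
      inMemory: true          # data is discarded when the local process stops
```

With this in place, `serverless offline start` brings up the local API Gateway/Lambda emulator together with the local DynamoDB instance.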
There are hundreds of other plugins in the serverless plugin library. Therefore, whenever you come across a task or requirement that the Serverless Framework does not support natively, always check the plugin library first, because there is a good chance that someone has faced the same issue and developed a plugin for it. If you can’t find a suitable plugin for your task or requirement and decide to develop your own, don’t forget to publish it in the plugin library to help other people who face the same issue in the future.
A few serverless plugins that I found useful:
- Serverless Webpack – allows you to use webpack with your serverless code
- Warmup – Reduces Lambda cold starts by warming up lambda functions
- Prune – Purges previous versions of the Lambda function
- Step Functions – Adds Step Functions support to the Serverless Framework
- Dotenv – Preloads environment variables into serverless from .env files
- Pay attention to the resource limit of the stack.
A serverless application is defined by a serverless.yml file, and every AWS resource that the Serverless Framework should create must be declared in it. When you deploy the application with the serverless deploy command, the framework creates an AWS CloudFormation stack based on the configuration in serverless.yml. AWS CloudFormation stacks have a hard limit of 200 resources per stack.
You may think this 200-resource limit is hard to hit, but if you are building a microservice-based application and creating a serverless service for each microservice, it is quite easy to reach. The simple solution to this problem is to create nested stacks. A nested stack is a child stack of a CloudFormation stack, and it counts as only a single resource in its parent, so one stack can contain up to 200 nested stacks. In my experience, the best strategy is to split the stack into nested stacks by resource type. For example, let’s say you have a stack that contains an API Gateway, Lambda functions, and DynamoDB tables. You can group the resources by type and create an API Gateway nested stack, a Lambda nested stack, and a DynamoDB nested stack.
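If splitting the stack by hand feels tedious, the community serverless-plugin-split-stacks plugin can migrate resources into nested stacks automatically. A minimal sketch of a per-type split:

```yaml
plugins:
  - serverless-plugin-split-stacks
custom:
  splitStacks:
    perType: true        # group resources into nested stacks by resource type
    perFunction: false   # alternative strategy: one nested stack per function
```

On the next deploy, the plugin rewrites the generated CloudFormation template so each resource type lands in its own nested stack, keeping the root stack well under the limit.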
- Warm up your Lambda functions if you want consistent, predictable performance.
AWS Lambda functions are super cheap compared to other AWS compute services like EC2 or ECS: you only pay for the running time of the Lambda function. AWS makes this possible by shutting down the Lambda container after a certain idle time (roughly 15–40 minutes). If a new invocation arrives after the container has been shut down, AWS has to deploy a fresh container first. This process is known as a cold start. In my experience, a cold start can add anywhere between 400ms and 700ms.
The code package size of the Lambda function is a big factor in cold start time, because AWS stores your Lambda function code in an encrypted S3 bucket. When deploying the Lambda container, AWS copies your code from that S3 bucket, so the larger the code package, the more time AWS takes to deploy the Lambda function.
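One way to keep the package small is to bundle each function individually, for example with the serverless-webpack plugin mentioned earlier (a sketch; the webpack configuration itself is assumed to exist separately):

```yaml
plugins:
  - serverless-webpack
package:
  individually: true    # each function ships only its own bundled code
```

Per-function packaging means a function’s zip contains only the code and dependencies it actually uses, rather than the whole service.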
Cold starts can be an issue if you want consistent, predictable performance from your Lambda function. For example, let’s say you have a Lambda function that handles the login process, and you want logins to complete within 600ms. Assume the cold start is around 400ms and it takes 500ms to complete the login process. With a cold start, the complete login process will take 900ms.
There are a few ways to solve this:
- Create a scheduled Lambda function that periodically invokes the Lambda functions you want to keep warm. Inside the functions that get invoked, add a small piece of code to ignore invocations coming from the scheduled Lambda function.
- If you are using the Serverless Framework, you can use the serverless-plugin-warmup plugin to achieve the same thing with less code.
- If you do not like the previous option because it feels a bit hackish, you can use the recently introduced provisioned concurrency feature, which keeps Lambda functions initialized and ready to invoke. The downside is that, although this option is easier than the first, it is also more expensive, since you pay for the provisioned capacity whether or not it is used.
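To make the options concrete, here is a serverless.yml sketch (the function names and the `warmer` flag are illustrative): a schedule event that pings a function every 5 minutes, and the provisioned concurrency alternative:

```yaml
functions:
  login:
    handler: src/login.handler
    events:
      - schedule:
          rate: rate(5 minutes)
          input:
            warmer: true        # handler should return early when it sees this flag
  checkout:
    handler: src/checkout.handler
    provisionedConcurrency: 2   # AWS keeps 2 instances initialized (billed extra)
```

In the first case, the handler checks the incoming event for the `warmer` flag and skips its real work, so the warm-up invocations stay cheap and fast.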
- Do not use wildcards for IAM Role configuration.
This is a very common mistake that most developers make. Most of the time, developers use wildcards for the ‘Action’, the ‘Resource’, or both. This violates the principle of least privilege: always grant permission only for the required actions on the required resources. Wildcard IAM role configurations also get flagged in cloud security assessments, usually categorized as a high security risk. Practice the principle of least privilege from the beginning of the project; otherwise, it will be significantly time-consuming to go back and tighten your old IAM role configurations at the end of the project.
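For example, instead of a wildcard statement, scope permissions to the exact actions and resources a service needs. A sketch using the classic `iamRoleStatements` syntax (the table ARN is a placeholder):

```yaml
provider:
  name: aws
  iamRoleStatements:
    # Bad: grants every action on every resource
    # - Effect: Allow
    #   Action: "*"
    #   Resource: "*"
    # Good: only the actions this service actually performs, on one table
    - Effect: Allow
      Action:
        - dynamodb:GetItem
        - dynamodb:PutItem
      Resource: arn:aws:dynamodb:us-east-1:123456789012:table/UsersTable
```

Starting from an empty statement list and adding actions one at a time as functions need them keeps the role minimal by construction.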
I hope you learned a few valuable practices from this article. Happy coding! :)