Amazon Aurora
Amazon Aurora [1] is a fully managed relational database engine introduced in October 2014. It was designed for speed, reliability, ease of management, and cost-effectiveness. Initially it was compatible only with MySQL 5.6; PostgreSQL compatibility was added later. AWS states that Aurora delivers up to five times the throughput of standard MySQL and three times that of standard PostgreSQL.
Amazon Aurora Serverless
Aurora Serverless was released in 2018. With it, developers can configure Aurora to automatically start up, shut down, and scale capacity up or down based on application needs. There are two versions of Aurora Serverless: v1 and v2.
Aurora Serverless v2 is the latest version and, at the time of writing, is still in preview. Currently, Aurora Serverless v2 supports only MySQL.
Benefits of Aurora Serverless
Aurora Capacity Units (ACUs)
When you create a provisioned RDS/Aurora database, you must provision instances up front regardless of actual usage. With Aurora Serverless you skip that step: you only set minimum and maximum Aurora Capacity Units (ACUs), where each ACU corresponds to a specific combination of compute and memory. You are charged only for the ACUs actively in use plus the allocated storage capacity; if no ACUs are in use, no capacity charge applies.
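The pay-per-use billing model described above can be sketched as a small calculation. This is an illustrative example, not AWS's actual metering logic; the rate and usage figures are assumptions.

```python
# Illustration of pay-per-use ACU billing: you pay only for ACU-seconds
# actually consumed, not for provisioned instances sitting idle.
# The rate and usage numbers below are assumptions for the example.

def compute_capacity_cost(acu_seconds_used, price_per_acu_hour):
    """Cost for the ACU-seconds actually consumed."""
    return acu_seconds_used / 3600 * price_per_acu_hour

# Example: a cluster ran at 2 ACUs for 3 hours and was paused the rest
# of the day, so only those 21,600 ACU-seconds are billed.
usage_acu_seconds = 2 * 3 * 3600
cost = compute_capacity_cost(usage_acu_seconds, price_per_acu_hour=0.06)
print(f"${cost:.2f}")  # $0.36 of capacity cost for the day
```

Note how a paused cluster (zero ACU-seconds) contributes nothing to the capacity portion of the bill; only storage would still be charged.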
Use Cases
Aurora Serverless is suitable for applications with unpredictable workloads, thanks to its auto scaling. It is also a good choice for applications with infrequent workloads (used only a few times a day) because of the cost savings.
Aurora Serverless is also a natural fit for serverless applications, which until now have lacked a good relational (SQL) option that scales with demand.
Pricing
The Amazon Aurora Serverless capacity cost is calculated from Aurora Capacity Unit (ACU) running time. One ACU provides 2 GB of memory and a corresponding amount of CPU; AWS does not disclose the exact CPU allocation per ACU.
| Serverless version | MySQL | PostgreSQL |
| --- | --- | --- |
| V1 | $0.06 per ACU-hour | $0.06 per ACU-hour |
| V2 | $0.12 per ACU-hour | N/A |
Beyond ACU charges, there are other cost factors to consider, such as storage, I/O, backup storage, and snapshot export. Please refer to the detailed Aurora pricing reference material [3] for more details.
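Combining the ACU rate from the table with the other cost factors gives a rough monthly estimate. The storage and I/O rates below are illustrative assumptions; check the Aurora pricing page [3] for current figures in your region.

```python
# Rough monthly bill estimate combining the cost factors mentioned above.
# storage_rate ($/GB-month) and io_rate ($/million requests) are
# illustrative assumptions, not quoted AWS prices.

def estimate_monthly_cost(acu_hours, gb_stored, million_ios,
                          acu_rate=0.06, storage_rate=0.10, io_rate=0.20):
    return (acu_hours * acu_rate          # capacity
            + gb_stored * storage_rate    # allocated storage
            + million_ios * io_rate)      # I/O requests

# e.g. 100 ACU-hours, 20 GB stored, 5 million I/O requests in a month
total = estimate_monthly_cost(acu_hours=100, gb_stored=20, million_ios=5)
print(f"${total:.2f}")  # $9.00 under these assumed rates
```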
How Auto Scaling Works
Aurora Serverless is designed to scale ACUs up or down based on the current load, and to scale storage based on the amount of data in the Aurora database cluster. You can set a maximum capacity of 256 ACUs, which corresponds to around 488 GB of memory. Database storage automatically scales from 10 GB up to 128 TB.
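The relationship between ACUs and memory can be sketched as follows. The list of valid capacity values and the ~2 GB-per-ACU approximation reflect the Aurora Serverless v1 documentation, used here as assumptions for illustration.

```python
# Sketch of ACU capacity steps and their approximate memory.
# Aurora Serverless v1 capacity moves between these power-of-two values;
# memory is roughly 2 GB per ACU (AWS quotes ~488 GB at the 256-ACU max,
# so the approximation overshoots slightly at the top end).

VALID_CAPACITIES = [1, 2, 4, 8, 16, 32, 64, 128, 256]  # ACUs

def approx_memory_gb(acus):
    """Approximate memory for a given ACU count (2 GB per ACU)."""
    return acus * 2

for c in (2, 16, 256):
    print(f"{c} ACUs -> ~{approx_memory_gb(c)} GB memory")
```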
An Aurora Serverless cluster will scale up when it detects capacity pressure; per the AWS documentation, this happens when, for example:

- CPU utilization rises above roughly 70%, or
- more than about 90% of the maximum connections are in use.

There is no cooldown period for scaling up.

The cluster will scale down when load stays low, for example when:

- CPU utilization falls below roughly 30%, and
- fewer than about 40% of the maximum connections are in use.

There is a 15-minute cooldown period for scaling down.
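As a sketch, the scale-up/scale-down decision and its cooldown can be modeled like this. The CPU and connection thresholds are the ones commonly cited in the AWS documentation for Aurora Serverless v1, used here as illustrative assumptions rather than the actual internal algorithm.

```python
# Simplified model of Aurora Serverless scaling decisions.
# Thresholds are illustrative assumptions based on AWS documentation.

SCALE_DOWN_COOLDOWN_SECONDS = 15 * 60  # 15-minute cooldown for scale-down

def scaling_decision(cpu_pct, conn_pct, seconds_since_last_scale):
    """Return 'up', 'down', or 'hold' for the current metrics."""
    if cpu_pct > 70 or conn_pct > 90:
        return "up"                      # no cooldown applies to scale-up
    if cpu_pct < 30 and conn_pct < 40:
        if seconds_since_last_scale >= SCALE_DOWN_COOLDOWN_SECONDS:
            return "down"
        return "hold"                    # still inside the cooldown window
    return "hold"

print(scaling_decision(85, 50, 0))       # up
print(scaling_decision(10, 10, 300))     # hold (cooldown not elapsed)
print(scaling_decision(10, 10, 901))     # down
```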
Using Aurora Serverless with Lambda Functions
You can only create an Aurora Serverless cluster inside a Virtual Private Cloud (VPC), and the DB cluster can only be accessed from within that VPC. This is a problem if developers don't want to put their Lambda functions inside a VPC. Another issue is that because a Lambda function has no persistent place to store a database connection for reuse, the cluster's maximum connection limit can be reached very quickly.
Fortunately, AWS provides a mechanism that solves both of these issues: the Data API. You can enable the Data API when you create the database cluster. Instead of connecting to the database cluster directly, you use the Data API endpoint, an HTTPS endpoint backed by a connection pool; the Data API manages establishing connections to the cluster and pooling them. The Data API also lets you connect to the Aurora Serverless cluster even when your code is deployed outside the DB cluster's VPC. To use it, you only need appropriate permissions for the DB cluster and for the DB cluster secret stored in AWS Secrets Manager.
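A Data API call can be sketched with boto3's `rds-data` client. `execute_statement` is the real boto3 operation, but the helper function, ARNs, database name, and query below are placeholder assumptions for illustration.

```python
# Sketch of calling the Data API via boto3's "rds-data" client.
# The ARNs and query are placeholders; the helper just assembles the
# keyword arguments that execute_statement expects.

def build_execute_statement_params(cluster_arn, secret_arn, database, sql,
                                   parameters=None):
    """Assemble keyword arguments for rds-data execute_statement."""
    params = {
        "resourceArn": cluster_arn,   # the Aurora Serverless cluster
        "secretArn": secret_arn,      # credentials in AWS Secrets Manager
        "database": database,
        "sql": sql,
    }
    if parameters:
        params["parameters"] = parameters
    return params

params = build_execute_statement_params(
    cluster_arn="arn:aws:rds:us-east-1:123456789012:cluster:example",
    secret_arn="arn:aws:secretsmanager:us-east-1:123456789012:secret:example",
    database="mydb",
    sql="SELECT id, name FROM users WHERE id = :id",
    parameters=[{"name": "id", "value": {"longValue": 1}}],
)

# With AWS credentials configured, the actual call would be:
#   import boto3
#   client = boto3.client("rds-data")
#   result = client.execute_statement(**params)
print(sorted(params))
```

Note that the caller never opens a database connection itself; the HTTPS request carries the SQL, and the Data API handles connection pooling behind the endpoint.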
If you are using Node.js, there is an npm library called Data API Client that makes the Data API even easier to use by converting input parameters and response data to and from native JavaScript types [4].
Limitations