Key Takeaways from «Designing a Highly Scalable System on AWS: From Concept to Execution»
About the Author: Jhon Robert Quintero H.
https://youtu.be/eRVGL_H95xI
Introduction:
I participated in Encora DevWeek 2023, a public virtual event attended by leading software engineers from around the world. My presentation, «Designing a Highly Scalable System on AWS: From Concept to Execution», shed light on essential lessons learned and best practices for designing scalable systems on Amazon Web Services (AWS). This blog post summarizes the key takeaways from that presentation, focusing on EC2 Auto Scaling and database best practices.
EC2 Auto Scaling Best Practices:
1. EC2 Instance Metrics Frequency:
Use the one-minute metrics frequency (detailed monitoring) for AWS monitoring of EC2 instances in your Auto Scaling setup. While the five-minute frequency is available at no extra cost, it may cause scaling decisions to be based on outdated metric data. Opting for the one-minute frequency, although it incurs additional charges, ensures more timely scaling. It is crucial to configure this frequency at the EC2 instance level rather than on the Auto Scaling Group.
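As a minimal sketch of what this looks like in code, the snippet below enables detailed (one-minute) monitoring on a single instance using boto3; the instance ID is a hypothetical placeholder.

```python
import boto3

# Hypothetical instance ID; replace with your own.
INSTANCE_ID = "i-0123456789abcdef0"

ec2 = boto3.client("ec2")

# Enable detailed (one-minute) monitoring at the instance level.
# Note: detailed monitoring incurs additional CloudWatch charges.
response = ec2.monitor_instances(InstanceIds=[INSTANCE_ID])
print(response["InstanceMonitorings"])
```

If you launch instances through a launch template, you can achieve the same effect by enabling detailed monitoring in the template so every instance the Auto Scaling group starts inherits it.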
2. Auto Scaling Group Health Check:
By default, the health check type for an Auto Scaling group only verifies that the EC2 instance is running and checks for hardware or software issues. However, when a load balancer is attached, it is recommended to enable Elastic Load Balancing health checks as well. This check confirms whether the instance is healthy according to the load balancer's report, ensuring it is actually able to handle requests. Additionally, reducing the health check grace period for new instances to one minute can further improve responsiveness.
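A minimal sketch of this configuration with boto3 is shown below; the group name my-web-asg is a hypothetical placeholder, and the one-minute grace period follows the suggestion above.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Switch the group to ELB health checks and shorten the grace period.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="my-web-asg",   # hypothetical group name
    HealthCheckType="ELB",               # use the load balancer's health verdict
    HealthCheckGracePeriod=60,           # one minute for new instances
)
```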
3. Scaling Strategies for Auto Scaling Groups:
To scale your resources effectively, there are several strategies to consider:
- Scale manually: This basic method involves you directly changing the maximum, minimum, or desired capacity of your Auto Scaling group.
- Scale based on a schedule: With this approach, scaling actions are automated according to predefined time and date parameters.
- Scale based on demand: Dynamic scaling enables you to define policies that automatically adjust your Auto Scaling group’s capacity in response to changes in demand, such as CPU utilization (a sketch of such a policy follows this list).
- Use predictive scaling: By combining predictive scaling and dynamic scaling, you can proactively and reactively scale your EC2 capacity to accommodate anticipated traffic patterns. This approach is especially useful when demand exhibits recurring patterns or when EC2 instances require additional time to start. Predictive scaling requires 24 hours of data for accurate forecasting.
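To illustrate the demand-based option, here is a minimal boto3 sketch that creates a target tracking policy which keeps average CPU utilization near a target value. The group name my-web-asg and the 50% target are hypothetical placeholders.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Target tracking policy: the group adds or removes instances to keep
# average CPU utilization close to the target value.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="my-web-asg",   # hypothetical group name
    PolicyName="cpu-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,             # hypothetical target: ~50% average CPU
    },
)
```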
4. Predictive Scaling Forecast:
Predictive scaling, a feature in AWS Auto Scaling, leverages historical data to forecast traffic patterns and scales the capacity of your Auto Scaling group in advance. This approach is particularly valuable when demand changes rapidly with recurring patterns or when instances need more time to start. Remember to turn on the predictive scaling toggle to initiate the forecasting process. It is recommended to allow 24 hours for predictive scaling to learn and make accurate predictions before scaling operations commence.
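Below is a minimal sketch of a predictive scaling policy with boto3, started in forecast-only mode so you can review the forecasts during the initial learning period before letting it act. The group name, policy name, and target value are hypothetical placeholders.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Predictive scaling needs roughly 24 hours of metric history before its
# forecasts are reliable; ForecastOnly lets you inspect them without scaling.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="my-web-asg",   # hypothetical group name
    PolicyName="predictive-cpu",
    PolicyType="PredictiveScaling",
    PredictiveScalingConfiguration={
        "MetricSpecifications": [
            {
                "TargetValue": 50.0,     # hypothetical CPU utilization target
                "PredefinedMetricPairSpecification": {
                    "PredefinedMetricType": "ASGCPUUtilization"
                },
            }
        ],
        "Mode": "ForecastOnly",          # switch to ForecastAndScale once satisfied
    },
)
```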
5. Amazon SNS Notifications for Auto Scaling Groups:
Configuring Amazon Simple Notification Service (SNS) notifications for your Auto Scaling groups is vital to stay informed about critical events and changes in your environment. Without notifications, you risk missing essential information such as security threats or sudden increases in instance count. By setting up real-time alerts for changes in instance capacity, failures, or scaling events, you can promptly respond to anomalies and ensure optimal system performance and security.
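A minimal sketch of this setup with boto3 is shown below; the topic name, group name, and email address are hypothetical placeholders.

```python
import boto3

sns = boto3.client("sns")
autoscaling = boto3.client("autoscaling")

# Create (or reuse) an SNS topic for scaling events.
topic_arn = sns.create_topic(Name="asg-scaling-events")["TopicArn"]

# Notify the topic on launches, terminations, and their failures.
autoscaling.put_notification_configuration(
    AutoScalingGroupName="my-web-asg",   # hypothetical group name
    TopicARN=topic_arn,
    NotificationTypes=[
        "autoscaling:EC2_INSTANCE_LAUNCH",
        "autoscaling:EC2_INSTANCE_TERMINATE",
        "autoscaling:EC2_INSTANCE_LAUNCH_ERROR",
        "autoscaling:EC2_INSTANCE_TERMINATE_ERROR",
    ],
)

# Subscribe a hypothetical on-call address; confirm the subscription by email.
sns.subscribe(TopicArn=topic_arn, Protocol="email", Endpoint="ops-team@example.com")
```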
Database Best Practices:
1. RDS vs. Aurora:
When considering database choices on AWS, it is essential to evaluate the trade-offs between Amazon RDS and Amazon Aurora. RDS allows provisioning of up to 5 read replicas, but its replication process is comparatively slower than Aurora's. Aurora, on the other hand, supports up to 15 replicas and achieves near-instantaneous replication. This faster replication capability of Aurora enables rapid scaling of the database infrastructure.
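As a small illustration, the boto3 sketch below adds a reader instance to an existing Aurora MySQL cluster; the cluster identifier, instance identifier, and instance class are hypothetical placeholders.

```python
import boto3

rds = boto3.client("rds")

# Add one more reader to an existing Aurora cluster (up to 15 per cluster).
rds.create_db_instance(
    DBInstanceIdentifier="my-aurora-reader-1",   # hypothetical instance name
    DBClusterIdentifier="my-aurora-cluster",     # hypothetical cluster name
    Engine="aurora-mysql",
    DBInstanceClass="db.r6g.large",              # hypothetical instance class
)
```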
2. Replication + Sharding:
When dealing with workloads and data sizes that exceed the performance limits of a single MySQL table (16 TiB on AWS), implementing database sharding can be a viable solution. Sharding involves dividing the database into smaller fragments, called shards, and distributing the data across them. Each shard functions as an independent database residing on a different server.
By distributing the workload among multiple shards, sharding enhances scalability and performance. Each shard handles a specific portion of the workload, allowing for faster and more efficient execution of queries and transactions. However, it’s crucial to note that sharding introduces complexity to the application and requires careful design and coordination between shards.
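To make the coordination concern concrete, here is a minimal, assumption-laden sketch of application-level shard routing: a stable hash of the shard key maps each customer to one of several hypothetical shard endpoints, so all of that customer's reads and writes land on the same shard.

```python
import hashlib

# Hypothetical shard map: each entry is an independent MySQL endpoint.
SHARDS = [
    "shard-0.example.internal",
    "shard-1.example.internal",
    "shard-2.example.internal",
    "shard-3.example.internal",
]

def shard_for(customer_id: str) -> str:
    """Route a customer to a shard using a stable hash of the shard key."""
    digest = hashlib.sha256(customer_id.encode("utf-8")).hexdigest()
    index = int(digest, 16) % len(SHARDS)
    return SHARDS[index]

# Example: every query for this customer is sent to the same shard.
print(shard_for("customer-42"))
```

Note that a simple modulo scheme like this makes adding shards later expensive (most keys remap), which is part of the design and coordination cost mentioned above; consistent hashing or a lookup table are common alternatives.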
Conclusion:
Designing a highly scalable system on AWS requires understanding best practices for EC2 Auto Scaling and database management. The presentation offered valuable insights into optimizing EC2 Auto Scaling through metrics frequency settings, health checks, scaling strategies, predictive scaling, and Amazon SNS notifications. Additionally, we explored the trade-offs between RDS and Aurora and learned how replication and sharding can overcome performance limitations for large workloads and data sizes. Implementing these lessons learned and best practices will help software engineers design and execute highly scalable systems on AWS.
https://docs.aws.amazon.com/autoscaling/ec2/userguide/ec2-auto-scaling-health-checks.html