AWS US-EAST-1 region is having another bad day
Briefly

"At 3:36 PM PDT on October 28 (10:36PM UTC), the cloud colossus advised customers that "Earlier today some EC2 launches within the use1-az2 Availability Zone (AZ) experienced increased latencies for EC2 instance launches." Amazon throttled some requests for EC2 resources, but said retrying a request should resolve the issue. Another impact of the incident created "task launch failure rates for [Elastic Container Service] ECS tasks for both EC2 and Fargate for a subset of customers in the US-EAST-1 Region.""
""EMR Serverless maintains a warm pool of ECS clusters to support customer requests, and some of these clusters are operating in the impacted ECS cells." Amazon said it was "actively working on refreshing these warm pools with healthy clusters" and that it had made progress "on recovering impacted ECS cells, but progress is not visible externally." "ECS has stopped new launches and tasks on the affected clusters. Some services (such as Glue) are observing recovery for error rates, but may still be experiencing increased latency,""
AWS's US-EAST-1 region experienced increased latencies for EC2 instance launches beginning October 28, with some EC2 requests throttled and retries advised. A subset of customers saw elevated ECS task launch failure rates for both EC2 and Fargate, and some container instances disconnected, causing tasks to stop. EMR Serverless was impacted because the warm pools of ECS clusters it relies on included affected cells. AWS worked on refreshing those warm pools with healthy clusters and recovering the impacted ECS cells, and some services such as Glue showed error-rate recovery while still possibly experiencing increased latency.
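Amazon's advice was simply to retry throttled requests. As a minimal sketch of what that looks like in practice (not from the article), boto3's built-in retry configuration can back off and retry throttled EC2 launch calls automatically; the AMI ID and instance type below are illustrative placeholders.

```python
# Sketch: let boto3 retry throttled EC2 RunInstances calls with backoff.
# Assumes valid AWS credentials; AMI ID and instance type are placeholders.
import boto3
from botocore.config import Config

# "adaptive" retry mode backs off on throttling errors before raising.
retry_config = Config(
    region_name="us-east-1",
    retries={"max_attempts": 10, "mode": "adaptive"},
)

ec2 = boto3.client("ec2", config=retry_config)

# A throttled launch request is retried automatically under this config.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI ID
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])
```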
Read at The Register