AWS outage explained

Amazon Web Services (AWS) has explained the cause of last Wednesday’s widespread outage, which impacted thousands of third-party online services for several hours.

While dozens of AWS services were affected, AWS says the outage originated in its Northern Virginia (US-East-1) region, after a “small addition of capacity” to the front-end fleet of Kinesis servers.

Kinesis is used by developers, as well as by other AWS services such as CloudWatch and the Cognito authentication service, to capture data and video streams and run them through AWS machine-learning platforms.
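
For context, writing a record into a Kinesis data stream looks roughly like this (a minimal sketch using the boto3 SDK; the stream name and payload are placeholders, not anything from the outage):

```python
import json

import boto3

# Kinesis client for the affected region (US-East-1, Northern Virginia).
kinesis = boto3.client("kinesis", region_name="us-east-1")

# Put one record onto a stream. "example-stream" is a placeholder name;
# the partition key determines which shard the record is routed to.
response = kinesis.put_record(
    StreamName="example-stream",
    Data=json.dumps({"event": "page_view", "user": "u-123"}).encode("utf-8"),
    PartitionKey="u-123",
)
print(response["ShardId"], response["SequenceNumber"])
```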

The Kinesis service’s front-end handles authentication and throttling, and distributes workloads to its back-end “workhorse” cluster via a data-partitioning mechanism called sharding.
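
The idea behind that routing: each record’s partition key is hashed into a key space that is divided among the shards. A minimal sketch of the scheme (Kinesis hashes partition keys with MD5 into a 128-bit range; the shard count and keys here are made up for illustration):

```python
import hashlib

NUM_SHARDS = 4  # illustrative shard count, not AWS's


def shard_for(partition_key: str, num_shards: int = NUM_SHARDS) -> int:
    """Map a partition key to a shard by hashing it into a 128-bit
    space divided evenly among the shards."""
    digest = int.from_bytes(hashlib.md5(partition_key.encode()).digest(), "big")
    return min(digest * num_shards // 2**128, num_shards - 1)


for key in ("u-123", "u-456", "u-789"):
    print(key, "-> shard", shard_for(key))
```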

As AWS notes in a lengthy summary of the outage, the capacity addition triggered the outage but wasn’t its root cause. AWS added capacity for about an hour starting at 2:44am PST, after which all the servers in the Kinesis front-end fleet began to exceed the maximum number of threads allowed by their operating system configuration. Per AWS’s summary, each front-end server maintains an operating-system thread for every other server in the fleet, so enlarging the fleet pushed per-server thread counts past that limit.
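
In a thread-per-peer design like that, per-server thread counts grow linearly with fleet size, so a capacity addition can tip the whole fleet over the OS ceiling at once. AWS hasn’t published the exact limit involved, but on a Linux host the relevant ceilings can be inspected like this (a Linux-only sketch, not AWS’s actual configuration):

```python
import resource

# Per-user cap on processes/threads -- what `ulimit -u` reports.
soft, hard = resource.getrlimit(resource.RLIMIT_NPROC)
print("RLIMIT_NPROC (soft/hard):", soft, hard)

# System-wide ceiling on the total number of threads.
with open("/proc/sys/kernel/threads-max") as f:
    print("kernel.threads-max:", f.read().strip())
```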

The first alarm was triggered at 5:15am PST, and AWS engineers spent the next five hours working to resolve the issue; Kinesis was not fully restored until 10:23pm PST.

By Liam Tung | November 30, 2020

Read the entire article on ZDNet.
