
When Amazon Makes a Mistake, Expect a Prompt, Overly Detailed Response

The internet outage on Tuesday was caused by a programmer's input error.

By Stephen J. Bronner

Philippe Huguen | Getty Images

On Tuesday, many websites and services, including Medium, Slack and Business Insider, were either down or working very slowly. It turns out these sites, and many others, rely on Amazon's cloud storage service S3, which suffered "high error rates" that day.

As Wired points out, "the internet is actually pretty brittle."

To explain the outage, Amazon today issued a dense, technical statement. In it, the company says the issue stemmed from a programmer mistyping a single input to a command. It's scary to think that's all it takes to bring down part of the internet. If you have the technical chops to understand it, here's the full statement:

We'd like to give you some additional information about the service disruption that occurred in the Northern Virginia (US-EAST-1) Region on the morning of February 28th. The Amazon Simple Storage Service (S3) team was debugging an issue causing the S3 billing system to progress more slowly than expected. At 9:37AM PST, an authorized S3 team member using an established playbook executed a command which was intended to remove a small number of servers for one of the S3 subsystems that is used by the S3 billing process. Unfortunately, one of the inputs to the command was entered incorrectly and a larger set of servers was removed than intended. The servers that were inadvertently removed supported two other S3 subsystems. One of these subsystems, the index subsystem, manages the metadata and location information of all S3 objects in the region. This subsystem is necessary to serve all GET, LIST, PUT, and DELETE requests. The second subsystem, the placement subsystem, manages allocation of new storage and requires the index subsystem to be functioning properly to correctly operate. The placement subsystem is used during PUT requests to allocate storage for new objects. Removing a significant portion of the capacity caused each of these systems to require a full restart. While these subsystems were being restarted, S3 was unable to service requests. Other AWS services in the US-EAST-1 Region that rely on S3 for storage, including the S3 console, Amazon Elastic Compute Cloud (EC2) new instance launches, Amazon Elastic Block Store (EBS) volumes (when data was needed from a S3 snapshot), and AWS Lambda were also impacted while the S3 APIs were unavailable.

S3 subsystems are designed to support the removal or failure of significant capacity with little or no customer impact. We build our systems with the assumption that things will occasionally fail, and we rely on the ability to remove and replace capacity as one of our core operational processes. While this is an operation that we have relied on to maintain our systems since the launch of S3, we have not completely restarted the index subsystem or the placement subsystem in our larger regions for many years. S3 has experienced massive growth over the last several years and the process of restarting these services and running the necessary safety checks to validate the integrity of the metadata took longer than expected. The index subsystem was the first of the two affected subsystems that needed to be restarted. By 12:26PM PST, the index subsystem had activated enough capacity to begin servicing S3 GET, LIST, and DELETE requests. By 1:18PM PST, the index subsystem was fully recovered and GET, LIST, and DELETE APIs were functioning normally. The S3 PUT API also required the placement subsystem. The placement subsystem began recovery when the index subsystem was functional and finished recovery at 1:54PM PST. At this point, S3 was operating normally. Other AWS services that were impacted by this event began recovering. Some of these services had accumulated a backlog of work during the S3 disruption and required additional time to fully recover.

We are making several changes as a result of this operational event. While removal of capacity is a key operational practice, in this instance, the tool used allowed too much capacity to be removed too quickly. We have modified this tool to remove capacity more slowly and added safeguards to prevent capacity from being removed when it will take any subsystem below its minimum required capacity level. This will prevent an incorrect input from triggering a similar event in the future. We are also auditing our other operational tools to ensure we have similar safety checks. We will also make changes to improve the recovery time of key S3 subsystems. We employ multiple techniques to allow our services to recover from any failure quickly. One of the most important involves breaking services into small partitions which we call cells. By factoring services into cells, engineering teams can assess and thoroughly test recovery processes of even the largest service or subsystem. As S3 has scaled, the team has done considerable work to refactor parts of the service into smaller cells to reduce blast radius and improve recovery. During this event, the recovery time of the index subsystem still took longer than we expected. The S3 team had planned further partitioning of the index subsystem later this year. We are reprioritizing that work to begin immediately.

From the beginning of this event until 11:37AM PST, we were unable to update the individual services' status on the AWS Service Health Dashboard (SHD) because of a dependency the SHD administration console has on Amazon S3. Instead, we used the AWS Twitter feed (@AWSCloud) and SHD banner text to communicate status until we were able to update the individual services' status on the SHD. We understand that the SHD provides important visibility to our customers during operational events and we have changed the SHD administration console to run across multiple AWS regions.

Finally, we want to apologize for the impact this event caused for our customers. While we are proud of our long track record of availability with Amazon S3, we know how critical this service is to our customers, their applications and end users, and their businesses. We will do everything we can to learn from this event and use it to improve our availability even further.
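For readers who want to picture how the safeguard Amazon describes might work, here is a rough, purely illustrative sketch in Python. It is not Amazon's tooling, and every name in it (Subsystem, remove_capacity, the srv-N hostnames) is invented for the example. It models the two protections the statement mentions: a minimum-capacity floor that rejects a mistyped request outright, and removal of servers in small, slow batches rather than all at once.

```python
# Purely illustrative sketch, not Amazon's actual tooling. It models the two
# safeguards described in the statement: a minimum-capacity floor and slow,
# batched removal. All names (Subsystem, remove_capacity, srv-N) are invented.
import time
from dataclasses import dataclass, field


@dataclass
class Subsystem:
    name: str
    servers: set = field(default_factory=set)
    min_required: int = 0  # capacity floor the tool must never breach

    def healthy_after_removing(self, doomed: set) -> bool:
        # Would this subsystem still have enough servers after the removal?
        return len(self.servers - doomed) >= self.min_required


def remove_capacity(subsystems, doomed, batch_size=2, pause_seconds=1.0):
    """Remove servers slowly, refusing any request that would take a
    subsystem below its minimum required capacity."""
    # Safeguard 1: validate the whole request up front, before touching anything.
    for sub in subsystems:
        if not sub.healthy_after_removing(doomed):
            raise ValueError(
                f"refusing removal: {sub.name} would drop below "
                f"{sub.min_required} servers"
            )
    # Safeguard 2: drain in small batches instead of all at once.
    doomed = sorted(doomed)
    for i in range(0, len(doomed), batch_size):
        batch = doomed[i:i + batch_size]
        for sub in subsystems:
            sub.servers -= set(batch)
        print(f"removed {batch}")
        time.sleep(pause_seconds)


if __name__ == "__main__":
    index = Subsystem("index", {f"srv-{n}" for n in range(10)}, min_required=6)
    placement = Subsystem("placement", {f"srv-{n}" for n in range(10)}, min_required=6)

    # Intended: remove two servers. That request passes the capacity check.
    remove_capacity([index, placement], {"srv-0", "srv-1"})

    # A mistyped input that selects eight servers instead is rejected by the
    # capacity floor rather than taking both subsystems down.
    try:
        remove_capacity([index, placement], {f"srv-{n}" for n in range(2, 10)})
    except ValueError as err:
        print(err)
```

The point of the sketch is simply that the tool, not the operator, enforces the floor, so a single bad input cannot take a subsystem below the capacity it needs to keep serving requests.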

Stephen J. Bronner

Entrepreneur Staff

News Director

Stephen J. Bronner writes mostly about packaged foods. His weekly column is The Digest. He is very much on top of his email.
