by Jay Judkowitz
Real cloud storage lessons from the AWS outage
It’s been very interesting watching the online firestorm over Amazon’s EBS outage. I was not at the recent Interop show, but apparently there was an entire panel discussion about it, followed by a Twitter flame war between representatives from VMware and Amazon. Then there were countless articles and blogs, all of which focused on questions of mild interest with obvious answers:
- Is EBS a good or bad service?
- Will this affect people’s move to public cloud?
The answers to these are pretty uncontroversial, in my opinion.
- EBS is a very well-executed solution to one of the hardest computer science problems out there: how to construct an infinitely scalable storage service for read/write transactional data with strict consistency out of commodity disks. The fact that EBS has gone this long without an outage of this magnitude is a tribute to the AWS team. They clearly made some poor choices, but they will surely fix those over time. However, few seem to be paying attention to what AWS has done correctly. The naysayers would be hard-pressed to name a production service with EBS’ characteristics deployed at EBS’ scale.
- The best online comment on the impact to public cloud adoption likened this to an airline crash. Airplanes crash from time to time, and those crashes always make for sensational news. But in terms of cost and safety, air travel remains the best way to travel long distances, so people forget the crash and keep flying. Some people will remain nervous for a while about putting certain data and apps in the public cloud, but the trend toward public cloud will continue as before.
While all of the uproar is quite entertaining, it is not useful. As an industry, we can be a bit more thoughtful than this. This blog post is an attempt to get at the real lessons of this incident, which center on the industry’s transition to scale-out storage models for transactional storage in the cloud.
The scale-out model is undoubtedly the right architecture for cloud storage in general, especially where eventual consistency is sufficient. It provides:
- A highly virtualized interface – it’s one big pool of storage where placement across even thousands of nodes is completely automated
- Great aggregate performance
- Flexibility in the face of arbitrary failures
- The ability to grow steadily in small increments of commodity parts as the cloud itself grows, rather than in massive chunks of proprietary equipment
But when applied to transactional workloads, scale-out storage has issues that (a) the vendor community still needs to work out and (b) customers must be aware of and plan around when they take the leap.
So, why is scale-out storage for transactional workloads so hard? To be useful to clouds, a scale-out transactional storage system needs the following qualities relative to traditional enterprise storage.
- Reliability: Cloud storage needs to be almost as reliable as enterprise storage – four or five 9’s are called for. In this post, I’ll speak of local availability only, not cross-site DR – that’s a topic for another day.
- Cost: The cost of scale-out storage is expected to be considerably lower than that of enterprise storage. Keep in mind that a lot of the enterprise storage price is in software, services, and margin. For big cloud deals, storage vendors will negotiate down closer to the real cost of the system and/or provide leasing plans. So a good scale-out deployment needs to be genuinely cost-conscious and can’t just rely on the promise of commodity parts.
- Consistency: When using read/write transactional storage, a committed write must really be committed. Eventual consistency does not cut it. Every write must be truly safe, as even a minute’s worth of data loss can be fatal.
- Performance: The transactional performance for both reads and writes must be usable, even if somewhat lower than more expensive enterprise storage systems.
With a requirement for strict consistency, protection against data loss and inaccessibility comes from either RAID or synchronous replication across multiple enclosures. When you really trust an enclosure, as people (reasonably or unreasonably) trust traditional enterprise storage, you can use RAID and minimize disk proliferation – something like 20% extra disk is a reasonable price to pay for your five 9’s. When you don’t trust the enclosure because it’s a cheaper commodity system, you start looking at RAID over the network or erasure codes. The challenge with this strategy is that performance can be abysmal, especially in degraded mode, where each read requires too many network accesses and parity calculations.

In order to (a) make sure your data is never lost because a critical set of commodity disks and/or enclosures fails before rebuilds can complete, while (b) maintaining adequate performance, you start mirroring the data over the network multiple times – usually 3x in scale-out systems targeted at the enterprise. This drives up the actual cost – think TCO: disks, enclosures, power, cooling, footprint, and so on. It is notable that this mirroring mechanism is what brought EBS to its knees during the outage.

So the scale-out vendor is always balancing cost, consistency, and reliability – you can get any two, but not all three at once.
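To make the cost side of that tradeoff concrete, here is a minimal back-of-the-envelope sketch. The dollar figure is a made-up placeholder, not a real quote, and real TCO adds power, cooling, and footprint costs that also scale with raw capacity:

```python
# Illustrative only: compare cost per usable GB for RAID-style parity
# (~20% extra disk) vs. 3x network mirroring. The $/raw-GB figure is a
# hypothetical placeholder, not a real price.

RAW_COST_PER_GB = 0.05  # assumed commodity cost, $/raw GB


def cost_per_usable_gb(raw_cost_per_gb: float, raw_per_usable: float) -> float:
    """Raw GB consumed per usable GB, times the raw cost."""
    return raw_cost_per_gb * raw_per_usable


raid = cost_per_usable_gb(RAW_COST_PER_GB, 1.2)     # parity: 1.2 raw GB per usable GB
mirror3 = cost_per_usable_gb(RAW_COST_PER_GB, 3.0)  # mirroring: 3 raw GB per usable GB

print(f"RAID-style parity: ${raid:.3f}/usable GB")
print(f"3x mirroring:      ${mirror3:.3f}/usable GB")
print(f"Mirroring costs {mirror3 / raid:.1f}x more per usable GB")
```

Under these assumptions, 3x mirroring is 2.5x the media cost of parity per usable GB before you even count the extra enclosures, power, and floor space – which is exactly why vendors are tempted to claw the cost back elsewhere.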
Even when you achieve an acceptable balance of the first three considerations, performance can still be an issue for economic reasons. To keep costs low in the face of the relatively expensive replication scheme, there is a temptation to pack high-density storage very tightly, reducing the effective IOPS per GB and making contention a significant problem. For all that is good about EBS, you often hear customers complaining about its performance – both maximum throughput and variability over time – even when there is no news-making outage.
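A quick sketch of the density math shows why. The numbers below are rough assumptions (a commodity 7,200 RPM spindle delivers on the order of 75 random IOPS no matter how big it is), and the sketch ignores caching and the read fan-out that replicas can provide:

```python
# Illustrative only: why dense packing starves transactional workloads.
# Assumption: one commodity spindle delivers ~75 random IOPS regardless
# of capacity, and 3x mirroring divides usable capacity by three.

SPINDLE_IOPS = 75        # rough random-IOPS budget of one commodity disk
REPLICATION_FACTOR = 3   # usable GB = raw GB / 3


def iops_per_usable_gb(disk_gb: int) -> float:
    usable_gb = disk_gb / REPLICATION_FACTOR
    return SPINDLE_IOPS / usable_gb


for disk_gb in (300, 1000, 3000):
    print(f"{disk_gb:>5} GB disks -> "
          f"{iops_per_usable_gb(disk_gb):.3f} IOPS per usable GB")
```

The spindle's IOPS budget stays flat while its capacity grows, so every step up in density divides the IOPS available per usable GB – and contention follows.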
All of this concerns the characteristics of the storage itself and does not address operational issues, which is where EBS really ran into trouble. And we’re not talking about terabytes or petabytes of storage; we’re talking about operations approaching exabyte scale.
All operations need to be completely automated – provisioning, placement, and failure response. This automation can live in one of two places:
- Independently on each storage node
- Through centralized controllers
The storage nodes in most scale-out systems generally just store and serve data. When data is sent to them, they store it. When they get a read request, they serve it. When they are given a replication partner, they send their data over. Designers try to avoid putting too much logic in the storage nodes because (a) they want storage nodes to focus on streaming data and (b) if storage nodes did too much thinking, they would all need to coordinate, creating a potentially unsolvable peer-to-peer coordination problem. Therefore, most instruction comes from one or more control systems.
In general, storage provisioning and placement operations (for primary copies, initial replicas, and new replicas after a failure), as well as data lookups, are handled by more centralized controllers; a minimal sketch of this division of labor follows the list below. Some requirements here are as follows:
- With thousands and thousands of users, you can’t have a single control node. The control system itself needs to scale out to many, many nodes (though certainly fewer than the number of storage nodes).
- The control system needs to be even more available than the data. You never want a situation where the control nodes all die, get confused, lose metadata, or are simply starved for resources; when that happens, administrators and end users alike lose the ability to interact with the storage system as a whole.
- The algorithms of the control system need to be very clever since they are controlling thousands and thousands of individual storage nodes, which generally obey even inappropriate and/or heavyweight commands faithfully.
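Here is a minimal sketch of the division of labor described above: deliberately dumb storage nodes that store and serve what they are told, and a controller that holds the metadata and makes the placement decisions. All class and method names here are hypothetical, not any real system’s API:

```python
# Sketch of the data-plane / control-plane split in a scale-out store.
# Names and structure are illustrative assumptions only.
import random


class StorageNode:
    """Deliberately dumb: stores what it is told, serves what it is asked."""

    def __init__(self, node_id: str):
        self.node_id = node_id
        self.blocks: dict[str, bytes] = {}

    def write(self, block_id: str, data: bytes) -> None:
        self.blocks[block_id] = data

    def read(self, block_id: str) -> bytes:
        return self.blocks[block_id]


class Controller:
    """Holds the metadata and makes all placement decisions."""

    def __init__(self, nodes: list[StorageNode], replicas: int = 3):
        self.nodes = nodes
        self.replicas = replicas
        self.placement: dict[str, list[StorageNode]] = {}

    def place(self, block_id: str, data: bytes) -> None:
        # The "clever algorithm" lives here: this toy version just picks
        # replica targets at random and fans the write out to them.
        targets = random.sample(self.nodes, self.replicas)
        for node in targets:
            node.write(block_id, data)
        self.placement[block_id] = targets

    def lookup(self, block_id: str) -> bytes:
        return self.placement[block_id][0].read(block_id)


if __name__ == "__main__":
    cluster = [StorageNode(f"node-{i}") for i in range(10)]
    ctrl = Controller(cluster)
    ctrl.place("block-42", b"hello")
    assert ctrl.lookup("block-42") == b"hello"
```

Notice that everything that can go wrong at scale – bad placement, runaway re-replication, metadata loss – lives in the controller, which is exactly why the requirements in the list above all target the control system.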
If you read Amazon’s well-written, open, and frank EBS post-mortem, you know how and where these guidelines were violated and where the EBS team will undoubtedly focus its efforts to improve the service over time. But for you, the cloud builder, here is what you need to talk about when you talk to a scale-out storage provider.
1) What tradeoffs were made between reliability, cost, and consistency? If you need strong consistency on transactional data, find out what the uptime guarantees are and what the implications are for overall system cost. Dig deep into those guarantees: make sure you understand the assumptions about the probability of individual failures, and adjust them if they do not apply exactly in your datacenter.
2) What is the price per usable GB and per IOPS? If you are building a big enough cloud to negotiate a great price from an enterprise storage vendor, make sure the scale-out system is cost-competitive even though it will use many more disks. Think about TCO – don’t forget the power, cooling, and footprint costs that come along. This is not to say that scale-out is more expensive than traditional storage, or that you should pass on it if you don’t get the savings you hoped for. But double-check the math and make sure the TCO is what you expect.
3) What is the performance of the transactional storage – both in normal mode and in degraded mode? Make sure the vendor is not assuming an unreasonably low spindle-to-IOPS ratio (as EBS did) to paint a rosy picture on price. Assume your transactional storage will actually be accessed, forcing you to add spindles, use less dense storage, and/or have a really good caching/tiering/ILM story.
4) What assumptions does the storage system make? The EBS design assumed that its redundant network would always be available and that there would never be a general loss of all-to-all connectivity. Has your scale-out vendor designed for this eventuality and tested it at scale? What other datacenter assumptions are they making?
5) What happens with split-brain at scale? Traditional enterprise storage is very simple in this area: local availability is handled inside a single chassis, and DR is done with dedicated replication partnerships. It is inflexible and not responsive to changing conditions. Scale-out storage is far better in this regard, but if not done right, its flexibility can backfire, just as in the EBS case, where all nodes tried to re-establish replicas of all data at once.
6) Does the storage distinguish temporary from permanent outages? And what if something that appeared permanent turns out to be temporary? Can the storage system react to the return of service in a reasonable way, especially when the permanent-failure response is very heavyweight? EBS, unlike traditional enterprise storage, kept re-mirroring to new nodes rather than simply syncing back up with the old mirrors when they became accessible again.
7) Can the control system guarantee access to users and administrators? In the EBS outage, the automated failure response overloaded the control service, and that is what actually affected all users, even those who had properly replicated their data between availability zones.
8) Are your availability zones really isolated? In EBS, there was a shared resource between availability zones, which is what made the impact of the failure response described in #7 so bad.
9) Does the automation know when to stop trying? Once it was clear that no more space was available and that the control systems were unresponsive, the automated re-protection kept going. Sometimes, like people, software needs to stop, take a breath, and let the situation cool down. And even though this is cloud, when the storage is in this state, it’s best to have it ask for administrator intervention rather than keep attempting the impossible (a sketch of what that back-off logic might look like follows this list).
10) Are failures graceful, even the unlikely ones? The EBS system had a corner case that crashed nodes rather than failing an operation gracefully. In most software, you can get away with letting such corner cases go; at near-exabyte scale, you can’t. Make sure your vendor has good software engineering practices here.
11) Are there good fail-safes? The EBS outage began to improve only when the EBS admins were able to stop some of the communication and break the vicious cycle. Does your scale-out vendor provide similar controls so you can manually stop heavyweight operations that you, as the cloud operator, determine need stopping for the sake of the cloud as a whole?
12) Are the requirements for the end customer documented? After the outage, Amazon put out some excellent documentation on building cloud applications that everyone should read. Does your scale-out system, due to performance or reliability tradeoffs, require end users to use the storage system in any specific and non-obvious ways? If so, make sure those are clearly documented so you can educate your end users.
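To make item 9 (and the temporary-vs-permanent question in item 6) concrete, here is a minimal sketch of bounded, self-limiting failure handling. The thresholds, function names, and escalation path are all illustrative assumptions, not a description of how EBS or any real system behaves:

```python
# Sketch: grace period before declaring a failure permanent, exponential
# back-off between re-replication attempts, and escalation to a human
# instead of retrying forever. All thresholds are illustrative.
import time

GRACE_PERIOD_S = 300    # how long a node may be unreachable before we
                        # treat the outage as permanent (item 6)
MAX_ATTEMPTS = 5        # give up on automation after this many tries (item 9)
INITIAL_BACKOFF_S = 2.0


def handle_node_failure(node, reachable, re_replicate, page_admin):
    """reachable, re_replicate, and page_admin are injected callables."""
    # Phase 1: wait out the grace period in case the outage is temporary.
    deadline = time.monotonic() + GRACE_PERIOD_S
    while time.monotonic() < deadline:
        if reachable(node):
            return "recovered"   # temporary blip: resync the old mirror,
        time.sleep(5)            # don't build a new one

    # Phase 2: the outage looks permanent; re-replicate with back-off so
    # retries don't storm the control plane.
    backoff = INITIAL_BACKOFF_S
    for _attempt in range(MAX_ATTEMPTS):
        if re_replicate(node):
            return "re-replicated"
        time.sleep(backoff)
        backoff *= 2

    # Phase 3: stop trying the impossible and ask a human to decide.
    page_admin(node)
    return "escalated"
```

The point of the sketch is the shape, not the numbers: every automated response has a budget, and exhausting the budget hands control to an operator rather than looping.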
While items 4-10 in this list derive from the EBS problems, this blog post should not be seen as anti-EBS. With EBS, Amazon has created something unique in the industry: a massive read/write transactional storage system with strong consistency that can be operated by a reasonably sized IT staff. This major outage was its first of such seriousness in years, and the long-term effects have been quite minimal. The success of EBS has fueled the rise of a plethora of scale-out storage startups that want to give you something EBS-like in your own datacenter. It has also scared the traditional storage vendors on technology and pricing and pushed them to innovate in a way they have not in a long time – see their recent product announcements and M&A activity. EBS is a great service that will only get better.
While EBS’ failure in this case was spectacular, in a way it was fortunate for the cloud industry, because it teaches us what to look for in storage vendors. Hopefully, the scale-out storage vendors have been paying attention as well: they can learn important lessons about operations at massive scale without running a very expensive real-world QA exercise and without causing an outage for a paying customer. These lessons, not the drama, should be the focus of our attention.