
Coming Soon – Storage Policies in OpenStack Swift

Recently a few members of the Swift community gathered in Oakland to talk about the ongoing storage policy and erasure code (EC) work. We had a good time and made some good progress. I want to take this opportunity to give the community an update on the storage policy and erasure code work in Swift.

What are we working on?

OpenStack Swift provides very high durability, availability, and concurrency across the entire data set. These benefits are perfect for modern web and mobile use cases. Another place where Swift is commonly used is for backups. Backups are typically large, compressed objects, and they are infrequently read once they have been written to the storage system. (At least you hope they are infrequently read!) Although Swift can be used for very cost-effective storage today, technologies like erasure codes enable even lower storage costs for users looking to store larger objects.

When the Swift community started this erasure code work, SwiftStack blogged about it, and I recently presented on this topic at Linux Conf Australia.

In order to build support for erasure codes into Swift, we realized that we needed a way to support general storage policies in a single, logical Swift cluster. A storage policy allows deployers and users to choose three things: what hardware the data is on, how the data is stored across that hardware, and how Swift actually talks to the storage volume.

Let’s take each of those three parts in turn.

First, given the global set of hardware available in a single Swift cluster, choose the subset of hardware on which to store data. This can be done by geography (e.g. US-East vs EU vs APAC vs global) or by hardware properties (e.g. SATA vs SSDs). And obviously, combining the two gives a lot of flexibility.

Second, given the subset of hardware being used to store the data, choose how to encode the data across that set of hardware. For example, perhaps you have 2-replica, 3-replica, or erasure code policies. Combining this with the hardware possibilities, you get policies like US-East reduced redundancy, global triple replicas, and EU erasure coded.

Third, given the subset of hardware and how the data is stored across it, control how Swift talks to a particular storage volume. This may be optimized local file systems. This may be Gluster volumes. This may be non-POSIX volumes like Seagate's new Kinetic drives. This may even be a volume driver paired with additional functionality, as ZeroVM is doing.
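
To make that concrete, here is a rough sketch of how multiple policies might be defined in a cluster's configuration once this work lands. The file layout, section names, and options shown here are illustrative assumptions, not the final syntax:

    # Hypothetical policy definitions -- names and options are placeholders
    [storage-policy:0]
    name = global-3-replica
    default = yes

    [storage-policy:1]
    name = us-east-reduced-redundancy

    [storage-policy:2]
    name = eu-erasure-coded

Each policy would then be backed by its own object ring, and the devices in that ring determine which subset of the cluster's hardware the policy actually uses.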

As a community, we’ve been working on storage policies (with a goal of supporting erasure codes as an option for deployment) for many months. SwiftStack, Intel, Box, and Red Hat have all participated, and in order to accelerate the work, we met up in Oakland for a couple of days of hacking and design discussion.

How’s progress?

So how’s progress going on this set of work? I’m glad you asked.

First, we’ve already released the cleaned-up DiskFile abstraction in Swift 1.11. This allows deployers and vendors to implement custom functionality, and we’ve already seen this in use with GlusterFS and Seagate’s Kinetic platform. Work is underway on providing a similar abstraction for Swift’s account and container on-disk representation.
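
To give a feel for what the abstraction enables, here is a deliberately simplified sketch of the idea. It is not the real interface in swift.obj.diskfile (which has more methods and different signatures); it just illustrates an object server talking to storage through a pluggable class rather than writing directly to a local filesystem:

    # Schematic sketch only -- the class name, methods, and 'store' client are
    # hypothetical, standing in for a real DiskFile implementation.
    class KeyValueDiskFile(object):
        """Stores objects in a key/value backend instead of a POSIX filesystem."""

        def __init__(self, store, account, container, obj):
            self.store = store  # e.g. a Kinetic or other key/value client
            self.key = '/'.join((account, container, obj))

        def write(self, body, metadata):
            # Persist the object body and its metadata under a single key.
            self.store.put(self.key, {'body': body, 'meta': metadata})

        def read(self):
            # Return the object body, or fail if it was never written.
            record = self.store.get(self.key)
            if record is None:
                raise KeyError('object not found: %s' % self.key)
            return record['body']

    if __name__ == '__main__':
        # A dict-backed stand-in for a real storage client, just to show usage.
        class DictStore(dict):
            def put(self, key, value):
                self[key] = value
            def get(self, key):
                return dict.get(self, key)

        df = KeyValueDiskFile(DictStore(), 'AUTH_test', 'photos', 'cat.jpg')
        df.write(b'...jpeg bytes...', {'Content-Type': 'image/jpeg'})
        print(df.read())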

Second, Kevin (at Box) and Tushar (at Intel) have been working on PyECLib and ensuring that all necessary functionality and interfaces are there to support any erasure code algorithm that is desired. This library provides the standard interface for EC data in Swift. Intel has also released their own library to help accelerate erasure code operations on Intel hardware.
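
For a sense of what that interface looks like, here is a small example of encoding and decoding data through PyECLib. It assumes PyECLib and a Reed-Solomon backend (e.g. jerasure) are installed; the exact ec_type string depends on which backends are built on your system:

    from pyeclib.ec_iface import ECDriver

    # 10 data fragments plus 4 parity fragments; any 10 of the 14 are
    # enough to reconstruct the original object.
    driver = ECDriver(k=10, m=4, ec_type='jerasure_rs_vand')

    data = b'a large, compressed backup object' * 1000
    fragments = driver.encode(data)

    # Decode from a subset of fragments, as you would after losing some disks.
    recovered = driver.decode(fragments[:10])
    assert recovered == data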

Finally, we’re nearly done with getting full multi-ring support into mainline Swift. We’ve been doing all of the multi-ring work on a feature branch in the Swift source repo. You can take this code today and run a test Swift cluster with multiple, replicated storage policies. We’ve got one last component to include in the multi-ring feature before it can be merged into mainline and used in production, but expect to see rapid development on this soon.
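
As a taste of how this looks from the client side, on the feature branch a container can be created against a specific policy, and every object written into it is then stored according to that policy. The header name, policy name, endpoint, and credentials below are placeholders and could still change before the merge:

    from swiftclient.client import Connection

    # Hypothetical SAIO-style credentials; substitute your own cluster's.
    conn = Connection(authurl='http://saio:8080/auth/v1.0',
                      user='test:tester', key='testing')

    # The policy is chosen once, at container creation time.
    conn.put_container('backups',
                       headers={'X-Storage-Policy': 'us-east-reduced-redundancy'})

    # Objects written to the container follow its storage policy.
    conn.put_object('backups', 'db-dump.tar.gz', contents=b'...compressed bytes...')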

We’ve been using a couple of different tools to track the storage policy and erasure code work in Swift. First, our primary task tracker has been a Trello board we set up for this feature. We also have several high-level LaunchPad blueprints to track this in the wider OpenStack release process.

What’s next?

Onward and upward, of course. The first task is to get the multi-ring feature into Swift. This will allow deployers to create Swift clusters with multiple replication policies, and in and of itself will enable many new use cases for Swift. We’re targeting OpenStack’s Icehouse release for this feature. When the multi-ring support is done, we’ll be able to add support for erasure code policies into Swift clusters. I’m expecting to see production-ready erasure code support in Swift a few months after the OpenStack Juno summit.

My vision for Swift is that everyone will use it, every day. The new use cases enabled by storage policies and erasure codes help us fulfill that vision. I’m excited by what’s coming. If you’d like to get involved in this work, we’d love to have your help. We’re in #openstack-swift on Freenode IRC every day. Stop by and get involved!

John Dickinson

Director of Technology, SwiftStack
OpenStack Swift Project Technical Lead

