Containers are a key ingredient for building an Agile, DevOps-oriented infrastructure, but reliable and scalable storage is just as important. To date, persistent storage for containers has been a challenge, and those limitations have constrained the adoption of containers in the enterprise.
New storage solutions that adapt distributed storage systems for containerized infrastructure are helping to solve that challenge. And at the same time, older container storage tools like Docker Data Volumes can still be leveraged as an effective persistent storage solution, if used the right way.
Wondering how to identify and deploy the best persistent storage solution for your containerized environment? This article provides an overview of the current state of container storage, a description of the solutions available and an explanation of the advantages and limitations of each one.
The Persistent Storage Conundrum
Delivering persistent storage for containers is difficult because, by design, most containers themselves are both ephemeral and isolated. "Ephemeral," in this case, means the apps within the containers will spin up and shut down at unpredictable rates, rather than remaining constantly on. The isolation of containers makes it more difficult to transfer data between different containerized apps or between the host system and a container.
These characteristics are what help make containerized infrastructure Agile and modular. They also present challenges for persistent data storage, since it's hard to store and share data persistently inside environments that are themselves intermittent and isolated.
Container Storage Solutions
Existing solutions for the persistent storage conundrum fall into two main categories: data volumes and cluster file systems.
The first, data volumes, involves using containers themselves to store data. Docker Data Volumes, for example, allow developers to create a special directory inside a Docker container that is dedicated to data storage. Because the volume can be mapped to a directory on the container host's file system, which remains in place after an individual container spins down, the data storage can be persistent. Data volumes can also be shared between containers.
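As a minimal sketch of how this works in practice (the volume name, mount path, and image here are illustrative, not from the original article), a named volume can be created with the Docker CLI, mounted into a container, and read back after that container is gone:

```shell
# Create a named volume managed by Docker; it lives on the host,
# independent of any single container's lifecycle.
docker volume create app-data

# Mount the volume at /var/lib/app inside a container and write to it.
docker run --rm -v app-data:/var/lib/app alpine \
    sh -c 'echo "state" > /var/lib/app/state.txt'

# The first container has been removed, but a new container
# mounting the same volume still sees the data.
docker run --rm -v app-data:/var/lib/app alpine \
    cat /var/lib/app/state.txt
```

The same `-v` flag can be used to mount one volume into several running containers at once, which is how data volumes support sharing between containers.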
The second major approach is to use a cluster file system for data storage. Under this model, file systems are made available over the network and shared with containers. This is the approach behind CoreOS' new Torus file system, which uses etcd to expose storage to hosts as a network block device. It's also the model Red Hat adopted for OpenShift, its container deployment platform, which uses Gluster, a cluster file system Red Hat has run for years in noncontainerized environments, to provide persistent container storage.
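To make the cluster-file-system model concrete, here is a hypothetical sketch of how a Gluster volume might be exposed to containers on a Kubernetes-based platform such as OpenShift (the resource names, capacity, and Gluster volume path are illustrative assumptions, not taken from the article):

```yaml
# A PersistentVolume backed by a Gluster volume, plus a claim that
# containers can mount. Names and sizes here are placeholders.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: gluster-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany          # pods on any host can mount it read-write
  glusterfs:
    endpoints: glusterfs-cluster   # Endpoints object listing Gluster nodes
    path: app-volume               # name of the Gluster volume
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gluster-claim
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
```

Because the storage lives on the Gluster cluster rather than on any single container host, a container can be rescheduled to a different host and still reach the same data.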
There are other ways to deliver persistent storage for containers, but apart from the two approaches outlined above, most strategies are too simplistic to be useful in real-world settings.
Which Persistent Storage Solution is Right for You?
So which of these two options is the best fit for your needs?
The best way to answer that question is to weigh the pros and cons of each approach. Data volumes are simple to set up and add little overhead, but because they map to directories on a specific host's file system, the data is tied to that host and does not follow containers rescheduled elsewhere. Cluster file systems make storage reachable from any host on the network, which suits multi-host deployments, but they introduce more operational complexity and a network hop that can affect performance.
Should You Containerize?
Of course, you might also come to the conclusion that the existing storage options for containers won't work for you at all. If that's the case, then you're not yet ready to migrate to containerized infrastructure.
That doesn't mean you'll be forever stuck in the world of legacy virtual servers. Persistent storage technology for containers remains in rapid development, and better solutions are on the way. There's nothing wrong with waiting for them to arrive if the existing options don't meet your needs.