While Docker Swarm is great for keeping containers running (and restarting those that fail), it does nothing for persistent storage. This means if you actually want your containers to keep any data persistent across restarts (hint: you do!), you need to provide shared storage to every docker node.
Warning
This recipe is deprecated. It didn't work well in 2017, and it's not likely to work any better now. It remains here as a reference. I now recommend the use of Ceph for shared storage instead. - 2019 Chef
Design
Why GlusterFS?
This GlusterFS recipe was my original design for shared storage, but I found it to be flawed, and I replaced it with a design which employs Ceph instead. This recipe remains as an alternative to the Ceph design, if you happen to prefer GlusterFS.
Ingredients
3 x Virtual Machines (configured earlier), each with:
CentOS/Fedora Atomic
At least 1GB RAM
At least 20GB disk space (but it'll be tight)
Connectivity to each other within the same subnet, and on a low-latency link (i.e., no WAN links)
A second disk, or adequate space on the primary disk for a dedicated data partition
Preparation
Create Gluster "bricks"
To build our Gluster volume, we need 2 of the 3 VMs to each provide one "brick". The bricks will be combined into the replicated volume. Assuming a replica count of 2 (i.e., 2 copies of the data are kept in gluster), our total number of bricks must be divisible by our replica count, so you can't have 3 bricks if you want 2 replicas (you could have 4, though). We need a minimum of 3 swarm manager nodes for fault-tolerance, but only 2 of those nodes need to run as gluster servers.
On each host, run a variation of the following to create your bricks, adjusted for the path to your disk.
The example below assumes /dev/vdb is dedicated to the gluster volume
(
echo o # Create a new empty DOS partition table
echo n # Add a new partition
echo p # Primary partition
echo 1 # Partition number
echo   # First sector (Accept default: 1)
echo   # Last sector (Accept default: varies)
echo w # Write changes
) | sudo fdisk /dev/vdb
mkfs.xfs -i size=512 /dev/vdb1
mkdir -p /var/no-direct-write-here/brick1
echo '' >> /etc/fstab
echo '# Mount /dev/vdb1 so that it can be used as a glusterfs volume' >> /etc/fstab
echo '/dev/vdb1 /var/no-direct-write-here/brick1 xfs defaults 1 2' >> /etc/fstab
mount -a && mount
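Before moving on, it's worth confirming the brick filesystem actually mounted where gluster will expect it. This quick check isn't part of the original recipe, just an illustrative sanity test:

# Confirm the brick filesystem is mounted at the expected path
df -h /var/no-direct-write-here/brick1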
Don't provision all your LVM space
Atomic uses LVM to store docker data, and automatically grows Docker's volumes as required. If you commit all your free LVM space to your brick, you'll quickly find (as I did) that docker will start to fail with error messages about insufficient space. If you're going to slice off a portion of your LVM space in /dev/atomicos, make sure you leave enough space for Docker storage, where "enough" depends on how much you plan to pull images, make volumes, etc. I ate through 20GB very quickly doing development, so I ended up provisioning 50GB for atomic alone, with a separate volume for the brick.
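Before carving space out of the atomicos volume group, check how much free LVM space you actually have. This is an illustrative check (not part of the original recipe), using the standard LVM reporting tools:

# Show volume groups and their remaining free space (see the VFree column)
sudo vgs
# Show the individual logical volumes and their sizes
sudo lvs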
Create glusterfs container
Atomic doesn't include the Gluster server components. This means we'll have to run glusterd from within a container, with privileged access to the host. Although convoluted, I've come to prefer this design since it once again makes the OS "disposable", moving all the config into containers and code.
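The exact docker run command isn't reproduced here, but a sketch along these lines illustrates the idea: the glusterd container runs privileged, on the host network, with the brick and gluster state directories bind-mounted in. The gluster/gluster-centos image name and the specific bind-mounts below are assumptions, so adjust to taste:

# Run glusterd in a privileged container on the host network (image and mounts are assumptions)
docker run -d \
  --name glusterfs-server \
  --privileged=true \
  --net=host \
  --restart=always \
  -v /var/no-direct-write-here/brick1:/var/no-direct-write-here/brick1 \
  -v /etc/glusterfs:/etc/glusterfs:z \
  -v /var/lib/glusterd:/var/lib/glusterd:z \
  -v /var/log/glusterfs:/var/log/glusterfs:z \
  gluster/gluster-centos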
The volume is only present on the host you're shelled into, though. To add the other hosts to the volume, run gluster peer probe <servername>. Don't probe a host from itself.
From one other host, run docker exec -it glusterfs-server bash to shell into the gluster-server container, and run gluster peer probe <original server name> to update the name of the host which started the volume.
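As a concrete (hypothetical) example, with two gluster servers named node1 and node2, the probing exchange might look like the following. Running the gluster command via docker exec is equivalent to shelling into the container first:

# On node1: probe node2
docker exec glusterfs-server gluster peer probe node2

# On node2: probe node1 back, so the pool learns node1's hostname rather than its IP
docker exec glusterfs-server gluster peer probe node1

# On either node: confirm the peers are connected
docker exec glusterfs-server gluster peer status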
Mount gluster volume
On the host (i.e., outside of the container - type exit if you're still shelled in), create a mountpoint for the data by running mkdir /var/data, add an entry to fstab to ensure the volume is auto-mounted on boot, and ensure the volume is actually mounted if there's a network / boot delay getting access to the gluster volume:
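A sketch of those steps looks something like the following. The volume name gv0 and the mount options are assumptions here; substitute the name you gave your gluster volume:

# On gluster nodes, mount the volume from the node itself
MYHOST=$(hostname -s)

# Create the mountpoint
mkdir -p /var/data

# Auto-mount the gluster volume (assumed to be named gv0) on boot, then mount it now
echo "${MYHOST}:/gv0 /var/data glusterfs defaults,_netdev 0 0" >> /etc/fstab
mount -a && mount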
For some reason, my nodes won't auto-mount this volume on boot. I even tried the trickery below, but they stubbornly refuse to automount:
echo-e"\n\n# Give GlusterFS 10s to start before \mounting\nsleep 10s && mount -a">>/etc/rc.local
systemctlenablerc-local.service
For non-gluster nodes, you'll need to replace $MYHOST above with the name of one of the gluster hosts (I haven't worked out how to make this fully HA yet).
Serving
After completing the above, you should have the following (a quick verification sketch follows the list):
Persistent storage available to every node
Resiliency in the event of the failure of a single (gluster) node
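To sanity-check both outcomes, you can ask gluster itself about the volume and its peers from within the server container, and confirm the volume is mounted on the host. This is illustrative, not from the original recipe:

# Check that both bricks are present and the peers are connected
docker exec glusterfs-server gluster volume info
docker exec glusterfs-server gluster peer status

# Confirm the volume is mounted on the host
df -h /var/data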
Chef's notes 📓
Future enhancements to this recipe include:
1. Migration of shared storage from GlusterFS to Ceph
2. Correcting the fact that volumes don't automount on boot
Tip your waiter (sponsor) 👏
Did you receive excellent service? Want to compliment the chef? (..and support development of current and future recipes!) Sponsor me on Github / Ko-Fi / Patreon, or see the contribute page for more (free or paid) ways to say thank you! 👏
Employ your chef (engage) 🤝
Is this too much of a geeky PITA? Do you just want results, stat? I do this for a living - I'm a full-time Kubernetes contractor, providing consulting and engineering expertise to businesses needing short-term, short-notice support in the cloud-native space, including AWS/Azure/GKE, Kubernetes, CI/CD and automation.
Want to know now when this recipe gets updated, or when future recipes are added? Subscribe to the RSS feed, or leave your email address below, and we'll keep you updated.