Don't be like Cameron. Back up your stuff.
Restic is a backup program intended to be easy, fast, verifiable, secure, efficient, and free. Restic supports a range of backup targets, including local disk, SFTP, S3 (or compatible APIs like Minio), Backblaze B2, Azure, Google Cloud Storage, and zillions of others via rclone.
- Docker swarm cluster with persistent shared storage
- Traefik configured per design
- DNS entry for the hostname you intend to use (or a wildcard), pointed to your keepalived IP
Set up data locations
We'll need data locations to bind-mount persistent config (an exclusion list) into our container, so create them as follows:
```bash
mkdir -p /var/data/restic/
mkdir -p /var/data/config/restic
echo /var/data/runtime >> /var/data/restic/restic.exclude
```
`/var/data/restic/restic.exclude` details which files/directories to exclude from the backup. Per our data layout, runtime data such as database files are stored in `/var/data/runtime/[recipe]`, and excluded from backups, since we can't safely back up data-in-use. Databases should instead be backed up by taking dumps/snapshots, and backing up those dumps/snapshots.
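For example, you might schedule a nightly dump of a containerized database into a path which *is* included in the backup, shortly before the backup job fires. This is only a sketch - the container name `wekan-db`, the credentials variable, and the dump path are hypothetical, so adapt them to your own stack:

```bash
# Hypothetical example: dump all databases from a MariaDB container into a
# folder under /var/data (which restic backs up), before the backup runs.
mkdir -p /var/data/wekan/database-dump
docker exec wekan-db sh -c 'mysqldump -u root -p"$MYSQL_ROOT_PASSWORD" --all-databases' \
  > /var/data/wekan/database-dump/dump.sql
```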
Prepare Restic environment
Create /var/data/config/restic/restic-backup.env, and populate it with the following variables:
```bash
# run on startup, otherwise just on cron
RUN_ON_STARTUP=true

# when to run (TZ ensures it runs when you expect it!)
BACKUP_CRON=0 0 1 * * *
TZ=Pacific/Auckland

# restic backend/storage credentials
# see https://restic.readthedocs.io/en/stable/040_backup.html#environment-variables
#AWS_ACCESS_KEY_ID=xxxxxxxx
#AWS_SECRET_ACCESS_KEY=yyyyyyyyy
#B2_ACCOUNT_ID=xxxxxxxx
#B2_ACCOUNT_KEY=yyyyyyyyy

# will initialise the repo on startup the first time (if not already initialised)
# don't lose this password otherwise you WON'T be able to decrypt your backups!
RESTIC_REPOSITORY=<repo_name>
RESTIC_PASSWORD=<repo_password>

# what to backup (excluding anything in restic.exclude)
RESTIC_BACKUP_SOURCES=/data

# define any args to pass to the backup operation (e.g. the exclude file)
# see https://restic.readthedocs.io/en/stable/040_backup.html
RESTIC_BACKUP_ARGS=--exclude-file /restic.exclude

# define any args to pass to the forget operation (e.g. what snapshots to keep)
# see https://restic.readthedocs.io/en/stable/060_forget.html
RESTIC_FORGET_ARGS=--keep-daily 7 --keep-monthly 12
```
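Note that the cron expressions here have six fields rather than the usual five - the image's scheduler appears to use a leading *seconds* field - so `0 0 1 * * *` fires at 01:00:00 every day:

```bash
#           ┌ second (0)
#           │ ┌ minute (0)
#           │ │ ┌ hour (1am)
#           │ │ │ ┌ day of month (any)
#           │ │ │ │ ┌ month (any)
#           │ │ │ │ │ ┌ day of week (any)
BACKUP_CRON=0 0 1 * * *
```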
Create /var/data/config/restic/restic-prune.env, and populate it with the following variables:
```bash
# run on startup, otherwise just on cron
RUN_ON_STARTUP=false

# when to run (TZ ensures it runs when you expect it!)
PRUNE_CRON=0 0 4 * * *
TZ=Pacific/Auckland

# restic backend/storage credentials
# see https://restic.readthedocs.io/en/stable/040_backup.html#environment-variables
#AWS_ACCESS_KEY_ID=xxxxxxxx
#AWS_SECRET_ACCESS_KEY=yyyyyyyyy
#B2_ACCOUNT_ID=xxxxxxxx
#B2_ACCOUNT_KEY=yyyyyyyyy

# will initialise the repo on startup the first time (if not already initialised)
# don't lose this password otherwise you WON'T be able to decrypt your backups!
RESTIC_REPOSITORY=<repo_name>
RESTIC_PASSWORD=<repo_password>

# prune will remove any *forgotten* snapshots; if there are some args you want
# to pass to the prune operation, define them here
#RESTIC_PRUNE_ARGS=
```
Why create two separate .env files?
Although there's some duplication involved, maintaining two files for the two services within the stack keeps things clean, and lets you alter the behaviour of one service in future without impacting the other.
Restic Docker Swarm config
Create a docker swarm config file in docker-compose syntax (v3) in /var/data/config/restic/restic.yml, something like this:
Fast-track with premix! 🚀
I automatically and instantly share (with my sponsors) a private "premix" git repository, which includes necessary docker-compose and env files for all published recipes. This means that sponsors can launch any recipe with just a `git pull` and a `docker stack deploy` 👍.
🚀 Update: Premix now includes an ansible playbook, so that sponsors can deploy an entire stack + recipes, with a single ansible command! (more here)
```yaml
version: "3.2"

services:
  backup:
    image: mazzolino/restic
    env_file: /var/data/config/restic/restic-backup.env
    hostname: docker
    volumes:
      - /var/data/restic/restic.exclude:/restic.exclude
      - /var/data:/data:ro
    deploy:
      labels:
        - "traefik.enabled=false"

  prune:
    image: mazzolino/restic
    env_file: /var/data/config/restic/restic-prune.env
    hostname: docker
    deploy:
      labels:
        - "traefik.enabled=false"

networks:
  internal:
    driver: overlay
    ipam:
      config:
        - subnet: 172.16.56.0/24
```
Setup unique static subnets for every stack you deploy. This avoids IP/gateway conflicts which can otherwise occur when you're creating/removing stacks a lot. See my list here.
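If you're not sure which subnets your existing stacks already occupy, you can list them from a swarm manager node - a quick sketch:

```bash
# Print each docker network alongside its configured subnet(s),
# so you can pick an unused range for the new stack.
docker network ls --format '{{.Name}}' | while read -r net; do
  echo "$net: $(docker network inspect -f '{{range .IPAM.Config}}{{.Subnet}} {{end}}' "$net")"
done
```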
Launch Restic stack
Launch the Restic stack by running `docker stack deploy restic -c <path-to-docker-compose.yml>`, and watch the logs by running `docker service logs restic_backup` - you should see something like this:
```
root@raphael:~# docker service logs restic_backup -f
restic_backup.1.9sii77j9jf0x@leonardo    | Checking configured repository '<repo_name>' ...
restic_backup.1.9sii77j9jf0x@leonardo    | Fatal: unable to open config file: Stat: stat <repo_name>/config: no such file or directory
restic_backup.1.9sii77j9jf0x@leonardo    | Is there a repository at the following location?
restic_backup.1.9sii77j9jf0x@leonardo    | <repo_name>
restic_backup.1.9sii77j9jf0x@leonardo    | Could not access the configured repository. Trying to initialize (in case it has not been initialized yet) ...
restic_backup.1.9sii77j9jf0x@leonardo    | created restic repository 66ffec75f9 at <repo_name>
restic_backup.1.9sii77j9jf0x@leonardo    |
restic_backup.1.9sii77j9jf0x@leonardo    | Please note that knowledge of your password is required to access
restic_backup.1.9sii77j9jf0x@leonardo    | the repository. Losing your password means that your data is
restic_backup.1.9sii77j9jf0x@leonardo    | irrecoverably lost.
restic_backup.1.9sii77j9jf0x@leonardo    | Repository successfully initialized.
restic_backup.1.9sii77j9jf0x@leonardo    |
restic_backup.1.9sii77j9jf0x@leonardo    |
restic_backup.1.9sii77j9jf0x@leonardo    | Scheduling backup job according to cron expression.
restic_backup.1.9sii77j9jf0x@leonardo    | new cron: 0 0 1 * * *
restic_backup.1.9sii77j9jf0x@leonardo    | (0x50fac0,0xc0000cc000)
restic_backup.1.9sii77j9jf0x@leonardo    | Stopping
restic_backup.1.9sii77j9jf0x@leonardo    | Waiting
restic_backup.1.9sii77j9jf0x@leonardo    | Exiting
```
Of note above is "Repository successfully initialized" - this indicates that the repository credentials passed to Restic are correct, and Restic has the necessary access to create repositories.
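Once a backup has actually run, you can confirm that snapshots are accumulating by running the image once with a custom command, reusing the env file created earlier. A sketch:

```bash
# One-off run to list the snapshots restic has stored in the repository.
docker run --rm -it --env-file /var/data/config/restic/restic-backup.env \
  mazzolino/restic snapshots
```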
Repeat after me: "It's not a backup unless you've tested a restore"
The simplest way to test your restore is to run the container once, using the variables you've already prepared, with custom arguments, as per the following example:
```bash
docker run --rm -it --name restic-restore \
  --env-file /var/data/config/restic/restic-backup.env \
  -v /tmp/restore:/restore mazzolino/restic \
  restore latest --target /restore
```
In my example:
```
root@raphael:~# docker run --rm -it --name restic-restore --env-file /var/data/config/restic/restic-backup.env \
> -v /tmp/restore:/restore mazzolino/restic restore latest --target /restore
Unable to find image 'mazzolino/restic:latest' locally
latest: Pulling from mazzolino/restic
Digest: sha256:cb827c4c5e63952f8d114c87432ff12d3409a0ba4bcb52f53885dca889b1cb6b
Status: Downloaded newer image for mazzolino/restic:latest
Checking configured repository 's3:s3.amazonaws.com/restic-geek-cookbook-premix.elpenguino.be' ...
Repository found.
repository c50738d1 opened successfully, password is correct
restoring <Snapshot b5c50b19 of [/data] at 2020-06-24 23:54:27.92318041 +0000 UTC by root@docker> to /restore
root@raphael:~#
```
Restoring a subset of data
The example above restores the entire `/var/data` folder (minus any exclusions). To restore just a subset of the data, add the `-i <pattern>` argument (shorthand for `--include`), which limits the restore to paths matching the pattern.
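A sketch of such a partial restore, reusing the restore example above - the `/data/wekan` path is hypothetical, so substitute a path that exists in your own snapshots:

```bash
# Restore only paths matching /data/wekan from the latest snapshot,
# into /tmp/restore on the host.
docker run --rm -it --name restic-restore \
  --env-file /var/data/config/restic/restic-backup.env \
  -v /tmp/restore:/restore mazzolino/restic \
  restore latest --target /restore -i /data/wekan
```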
Chef's notes 📓
Tip your waiter (sponsor) 👏
Did you receive excellent service? Want to compliment the chef? (..and support development of current and future recipes!) Sponsor me on Github / Ko-Fi / Patreon, or see the contribute page for more (free or paid) ways to say thank you! 👏
Employ your chef (engage) 🤝
Is this too much of a geeky PITA? Do you just want results, stat? I do this for a living - I'm a full-time Kubernetes contractor, providing consulting and engineering expertise to businesses needing short-term, short-notice support in the cloud-native space, including AWS/Azure/GKE, Kubernetes, CI/CD and automation.
Learn more about working with me here.
Flirt with waiter (subscribe) 💌
Want to know now when this recipe gets updated, or when future recipes are added? Subscribe to the RSS feed, or leave your email address below, and we'll keep you updated.