Minio is a high performance distributed object storage server, designed for large-scale private cloud infrastructure.
However, at its simplest, Minio allows you to expose a local file structure via the Amazon S3 API. You could, for example, use it to provide access to "buckets" (folders) of data on your filestore, secured by access/secret keys, just like AWS S3. You can then interact with your "buckets" using common S3-compatible tools, just as if they were hosted on S3.
Under a more advanced configuration, Minio runs in distributed mode, with features including high-availability, mirroring, erasure-coding, and "bitrot detection".
Possible use cases include:

- Sharing files (protected by user accounts with secrets) via HTTPS, either read-only or read-write, such that a bucket can be mounted as a remote filesystem using common S3-compatible tools like goofys. Ever wanted to share a folder with friends, but didn't want to open additional firewall ports, etc.?
- Simulating S3 in a dev environment
- Mirroring an S3 bucket locally
Ingredients

- Docker swarm cluster with persistent shared storage
- Traefik configured per design
- DNS entry for the hostname you intend to use, pointed to your keepalived IP
Set up data locations
We'll need a directory to hold our minio file store, as well as our minio client config, so create a structure at /var/data/minio:
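For example (the data and client subdirectories are my assumption: one for the object store, one for the mc client config used later):

```shell
# Create the object store and the (assumed) client-config directory:
mkdir -p /var/data/minio/data
mkdir -p /var/data/minio/client
```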
Create minio.env, and populate it with the following variables:
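A minimal minio.env might look like this (these are the classic Minio credential variables; newer MinIO releases renamed them to MINIO_ROOT_USER/MINIO_ROOT_PASSWORD):

```
MINIO_ACCESS_KEY=<choose an access key>
MINIO_SECRET_KEY=<choose a secret key, at least 8 characters>
```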
Set up Docker Swarm
Create a docker swarm config file in docker-compose syntax (v3), something like this:
I share (with my Patreon patrons) a private "premix" git repository, which includes the necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a git pull and a docker stack deploy 👍
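A minimal sketch of such a compose file, assuming the Traefik setup described in the ingredients (the service name, network name, hostname, and labels are my assumptions; adjust them for your environment):

```yaml
version: "3"

services:
  app:
    image: minio/minio
    env_file: /var/data/minio/minio.env
    volumes:
      # Expose the host directory created earlier as Minio's data store
      - /var/data/minio/data:/data
    networks:
      - traefik_public
    command: server /data
    deploy:
      labels:
        # Route https://minio.example.com to this service via Traefik
        - traefik.frontend.rule=Host:minio.example.com
        - traefik.port=9000
        - traefik.docker.network=traefik_public

networks:
  traefik_public:
    external: true
```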
Launch Minio stack
Launch the Minio stack by running docker stack deploy minio -c <path-to-docker-compose.yml>
Log into your new instance at https://YOUR-FQDN, with the access key and secret key you specified in minio.env.
If you created an empty /var/data/minio, you'll see nothing yet. If you pointed Minio at existing data, you should see each subdirectory of your existing folder represented as a bucket.
If all you need is single-user access to your data, you're done! 🎉
If, however, you want to expose data to multiple users, at different privilege levels, you'll need the minio client to create some users and (potentially) policies...
Set up the Minio client
To administer the Minio server, we need the Minio client (mc). While it's possible to download the client and run it locally, it's just as easy to run it from a small (~5MB) container.
I created an alias on my docker nodes, allowing me to run mc quickly:
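Such an alias might look like this (the minio/mc image is the official client image; the mounted config path is my assumption, matching the directory structure created earlier):

```shell
# Run mc in a throwaway container, persisting its config under /var/data/minio:
alias mc='docker run -it --rm -v /var/data/minio/client:/root/.mc minio/mc'
```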
Now I use the alias to launch the client shell, and connect to my minio instance (I could also use the external, Traefik-provided URL):
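The connection step might look like this (the alias name "minio", the example URL, and the keys are placeholders; use the access/secret key from your minio.env — note that newer mc releases renamed this command to mc alias set):

```shell
# Register the server under the alias "minio", then verify we can list it:
mc config host add minio https://traefik.example.com admin-access-key admin-secret-key
mc ls minio
```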
Add (readonly) user
Use mc to add a (readonly or readwrite) user, by running
mc admin user add minio <access key> <secret key> <access level>
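For example, following the syntax above (the user name, secret, and access level are illustrative):

```shell
# Create a user "alice" who may only read (download) from permitted buckets:
mc admin user add minio alice alice-secret-key readonly
```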
Confirm by listing your users (admin is excluded from the list):
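For example:

```shell
# List users known to the "minio" server (the admin account is not shown):
mc admin user list minio
```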
Make a bucket accessible to users
By default, all buckets have no "policies" attached to them, and so can only be accessed by the administrative user. Having created some readonly/read-write users above, you'll be wanting to grant them access to buckets.
The simplest permission scheme is "on or off": either a bucket has a policy, or it doesn't. (I believe you can apply policies to subdirectories of buckets in a more advanced configuration.)
Beyond "no policy", the most restrictive policy you can attach to a bucket is "download". This policy allows authenticated users to download contents from the bucket. Apply the "download" policy to a bucket by running mc policy download minio/<bucket name>.
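For example, to let authenticated users download from a hypothetical "media" bucket:

```shell
mc policy download minio/media
```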
There are some clever complexities you can achieve with user/bucket policies, including:
- A public bucket, which requires no authentication to read or even write (for a public dropbox, for example)
- A special bucket, hidden from most users, but available to VIP users by application of a custom "canned policy"
Mount a minio share remotely
Having set up your buckets, users, and policies, you can give your remote users your Minio external URL and their access keys, and they can S3-mount your buckets, interacting with them according to their user policy (read-only or read/write).
I tested the S3 mount using goofys, "a high-performance, POSIX-ish Amazon S3 file system written in Go".
First, I created ~/.aws/credentials, as follows:
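The file uses the standard AWS credentials format; the keys are whatever you issued to the user above:

```ini
[default]
aws_access_key_id = <your minio access key>
aws_secret_access_key = <your minio secret key>
```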
And then I ran (in the foreground, for debugging):
goofys -f --debug_s3 --debug_fuse --endpoint=https://traefik.example.com <bucketname> <local mount point>
To permanently mount an S3 bucket using goofys, I'd add something like this to /etc/fstab:
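A sketch of such an fstab entry, based on goofys' documented fstab format (the bucket name and mount point are examples; the endpoint matches the one used above):

```
goofys#mybucket   /mnt/mybucket   fuse   _netdev,allow_other,--endpoint=https://traefik.example.com   0   0
```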
Chef's Notes 📓
- There are many S3-filesystem-mounting tools available; I just picked goofys because it's simple. Google is your friend :)
- Some applications (like NextCloud) can natively mount S3 buckets
- Some backup tools (like Duplicity) can backup directly to S3 buckets
Tip your waiter (support me) 👏
Did you receive excellent service? Want to make your waiter happy? (..and support development of current and future recipes!) See the support page for (free or paid) ways to say thank you! 👏
Flirt with waiter (subscribe) 💌
Want to know now when this recipe gets updated, or when future recipes are added? Subscribe to the RSS feed, or leave your email address below, and we'll keep you updated. (double-opt-in, no monkey business, no spam either - check the archive for proof!)