Sometimes you discover something which doesn't fit neatly into the "recipe" format. That's what this category of blog posts is for: noting information I don't want to lose, but don't (yet) know how to fit into the structure of the cookbook.
authentik does an excellent job as an authentication provider using modern protocols like OIDC. Some applications (like Jellyfin or Harbor) don't support OIDC, but can be configured to use LDAP for authentication.
I recently migrated a Harbor instance from an OpenLDAP authentication backend to authentik's LDAP outpost, and struggled a little with the configuration.
Now that it's working, I thought I'd document it here so that I don't forget!
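As a rough sketch of where the configuration ends up (the hostname, service account, and DNs below are placeholders, and authentik's default base DN of `DC=ldap,DC=goauthentik,DC=io` may differ in your outpost), Harbor's LDAP auth settings pointed at an authentik LDAP outpost look something like:

```yaml
# Hypothetical Harbor LDAP auth settings for an authentik LDAP outpost.
# All values are assumptions - adjust to match your own deployment.
ldap_url: ldaps://ldap.example.com:636
ldap_searchdn: cn=ldapservice,ou=users,dc=ldap,dc=goauthentik,dc=io  # a service account bound to the outpost
ldap_search_password: "<service-account-password>"
ldap_basedn: dc=ldap,dc=goauthentik,dc=io  # authentik's default base DN
ldap_uid: cn
ldap_filter: (objectClass=user)
```

The full post walks through the details.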
While I've been a little distracted in the last few months assembling ElfHosted, the platform is now at a level of maturity which no longer requires huge amounts of my time1. I've started "back-porting" learnings from building an open-source, public, multi-tenanted platform back into the cookbook.
What is ElfHosted?
ElfHosted is "self-hosting as a service" (SHaaS?) - Using our Kubernetes / GitOps designs, we've built infrastructure and automation to run popular self-hosted apps (think "Plex, Radarr, Mattermost..") and attach your own cloud storage ("bring-your-own-storage").
You get $10 free credit when you sign up, so you can play around without commitment!
We're building "in public", so follow the progress in the open-source repos, the blog or in Discord.
The first of our imported improvements covers how to ensure that you have a trusted backup of the config and state in your cluster. Using Velero, rook-ceph, and CSI snapshots, I'm able to snapshot TBs of user data in ElfHosted for the dreaded "in-case-I-screw-it-up" disaster scenario.
Check out the Velero recipe for a detailed guide on applying the same to your cluster!
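As a taste of the approach (the schedule, names, and namespace below are my illustrative assumptions, not ElfHosted's actual config), a Velero `Schedule` that takes nightly CSI-backed snapshots of a namespace might look like:

```yaml
# Hypothetical Velero Schedule taking nightly CSI snapshots of one
# namespace; requires Velero installed with the EnableCSI feature flag.
apiVersion: velero.io/v1
kind: Schedule
metadata:
  name: nightly-my-app       # placeholder name
  namespace: velero
spec:
  schedule: "0 2 * * *"      # every night at 02:00
  template:
    includedNamespaces:
      - my-app-namespace     # placeholder namespace
    snapshotVolumes: true    # snapshot PVCs via the CSI driver
    ttl: 168h0m0s            # keep backups for 7 days
```

The recipe covers the Velero install flags and the CSI snapshot class setup in detail.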
I've been working with a client on upgrading a Cilium v1.13 instance to v1.14, and as usual, chaos ensued. Here's what you need to know before upgrading to Cilium v1.14...
I've just finished putting together a progress report for ElfHosted for July 2023. The report details all the changes we went through during the month (more than I remember!), and summarizes our various metrics (CPU, network, etc.)
I've worked way more than I expected, and the work has been harder than I expected, but I've immensely enjoyed the challenge of building something fast and in public.
What follows here are our recent changes, the current stats - time/money spent, revenue (haha), and lots of data / graphs on the current state of the platform.
I've consulted on the building and operation of an "appbox" platform over the past two years, and my client/partner has made the difficult decision to shut the platform down, partly due to increased datacenter power costs, and capital constraints.
So I've got two years' worth of hard-earned lessons and ideas about how to build a GitOps-powered app hosting platform, plus a generous and loyal userbase - I don't want to lose either. Since I've enjoyed the process of building out the platform, I thought I'd document it by setting up another platform, on a smaller scale (but able to accommodate growth).
My beloved "Penguin Patrol" bot, which I use to give GitHub / Patreon / Ko-Fi supporters access to the premix repo, was deployed on a Kube 1.19 Digital Ocean cluster, 3 years ago. At the time, the Ingress API was at v1beta1.
Fast-forward to today, and several Kubernetes version upgrades later (it's on 1.23 currently, and we're on Ingress v1), and I discovered that I was unable to upgrade the chart, since helm complained that the previous release referred to deprecated APIs.
Worse, helm wouldn't let me delete and re-install the release - because of those damned deprecated APIs!
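The root cause is that helm stores every release in a Secret whose `release` field is base64-encoded twice (once by Kubernetes, once by helm) and gzipped - and the stored manifest still references the dead `v1beta1` API. One escape hatch, which I believe is what the helm-mapkubeapis plugin automates, is to patch that stored release. A sketch of decoding it for inspection (the release and namespace names are placeholders for my bot's actual ones):

```shell
# Decode a helm release Secret's .data.release field:
# base64 (Kubernetes Secret layer) -> base64 (helm layer) -> gzip.
decode_helm_release() {
  base64 -d | base64 -d | gunzip
}

# Example usage (release/namespace are placeholders):
# kubectl get secret sh.helm.release.v1.penguin-patrol.v1 -n penguin-patrol \
#   -o jsonpath='{.data.release}' | decode_helm_release | less
```

Re-encoding in reverse order (gzip, then two rounds of base64) lets you patch the recorded apiVersions back into the Secret, after which helm stops complaining.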
I feed and water several Proxmox clusters, one of which was recently upgraded to PVE 7.3. This cluster runs VMs used to build a CI instance of a bare-metal Kubernetes cluster I support. Every day the CI cluster is automatically destroyed and rebuilt, to give assurance that our recent changes haven't introduced a failure which would prevent a re-install.
Since the PVE 7.3 upgrade, the CI cluster has been failing to build, because the out-of-cluster Vault instance we use to secure etcd secrets failed to sync. After much debugging, I'd like to present a variation of a famous haiku1 to summarize the problem:
It's not MTU!
There's no way it's MTU!
It was MTU.
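If you suspect the same failure mode, a quick check is a ping with the don't-fragment bit set: the ICMP payload must be the candidate MTU minus 28 bytes (20-byte IP header plus 8-byte ICMP header). A sketch, with a placeholder target IP:

```shell
# Probe path MTU with don't-fragment pings.
# Payload size = candidate MTU - 28 (20-byte IP + 8-byte ICMP headers).
MTU=1500
PAYLOAD=$((MTU - 28))   # 1472 for a standard 1500-byte MTU

# ping -M do -s "$PAYLOAD" -c 3 10.4.0.1   # 10.4.0.1 is a placeholder target
# If full-size payloads are dropped but e.g. "-s 1400" succeeds, something
# on the path (a bridge, VXLAN, or VPN hop) has a smaller MTU than you think.
```

Stepping the payload size down until pings succeed tells you the actual path MTU.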
You've changed your CoreDNS deployment, haven't you? You're using a custom image, or an image digest, or an admission webhook that mutates pods upon recreation?
Here's what it means, and how to work around it...