Notes

Sometimes you discover something which doesn't fit neatly into the "recipe" format. That's what this category of blog posts is for: I note information I don't want to lose, but don't (yet) know how to fit into the structure of the cookbook.

Authenticate Harbor with Authentik LDAP outpost

authentik does an excellent job as an authentication provider using modern protocols like OIDC. Some applications (like Jellyfin or Harbor) won't support OIDC, but can be configured to use LDAP for authentication.

I recently migrated a Harbor instance from an OpenLDAP authentication backend to Authentik's LDAP outpost, and struggled a little with the configuration.

Now that it's working, I thought I'd document it here so that I don't forget!
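
As a rough sketch of what's involved (the hostname, bind DN, and credentials below are placeholders, and the base DN shown is just authentik's usual default, not necessarily what the full write-up uses), Harbor's LDAP settings can be pushed via its configurations API rather than clicked through the UI:

# Illustrative only: switch Harbor to LDAP auth, pointed at an authentik LDAP outpost.
# All hostnames, DNs and credentials are placeholders. (ldap_scope 2 = subtree search)
curl -u "admin:${HARBOR_ADMIN_PASSWORD}" \
  -X PUT "https://harbor.example.com/api/v2.0/configurations" \
  -H "Content-Type: application/json" \
  -d '{
    "auth_mode": "ldap_auth",
    "ldap_url": "ldap://ak-outpost-ldap.authentik.svc.cluster.local:389",
    "ldap_base_dn": "dc=ldap,dc=goauthentik,dc=io",
    "ldap_search_dn": "cn=harbor-search,ou=users,dc=ldap,dc=goauthentik,dc=io",
    "ldap_search_password": "REPLACE_ME",
    "ldap_uid": "cn",
    "ldap_scope": 2
  }'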

Cover your bare (metal) ass with Velero Backups

While I've been a little distracted in the last few months assembling ElfHosted, the platform is now at a level of maturity which no longer requires huge amounts of my time. I've started "back-porting" learnings from building an open-source, public, multi-tenanted platform into the cookbook.

What is ElfHosted? 🧝

ElfHosted is "self-hosting as a service" (SHAAS?) - using our Kubernetes / GitOps designs, we've built infrastructure and automation to run popular self-hosted apps (think "Plex, Radarr, Mattermost...") and attach your own cloud storage ("bring-your-own-storage").

You get $10 free credit when you sign up, so you can play around without commitment!

We're building "in public", so follow the progress in the open-source repos, the blog or in Discord.

TL;DR? Here's a guide to getting started, and another to migrating from another provider.

The first of our imported improvements covers how to ensure that you have a trusted backup of the config and state in your cluster. Using Velero, rook-ceph, and CSI snapshots, I'm able to snapshot TBs of user data in ElfHosted for the dreaded "in-case-I-screw-it-up" disaster scenario.

Check out the Velero recipe for a detailed guide re applying the same to your cluster!
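
As a taster (the bucket, plugin version, and namespace names below are placeholders, not lifted verbatim from the recipe), a CSI-capable Velero install plus a nightly schedule looks roughly like this:

# Sketch only: install Velero with the CSI feature enabled (placeholder bucket / credentials).
# Depending on your Velero version, the CSI plugin image may also need adding to --plugins.
velero install \
  --provider aws \
  --plugins velero/velero-plugin-for-aws:v1.7.0 \
  --bucket my-velero-backups \
  --secret-file ./credentials-velero \
  --features=EnableCSI \
  --use-volume-snapshots=true

# Snapshot the important namespaces every night at 03:00, keeping 7 days of backups
velero schedule create nightly \
  --schedule="0 3 * * *" \
  --include-namespaces rook-ceph,my-apps \
  --ttl 168h0m0s

CSI snapshots also expect a VolumeSnapshotClass labelled velero.io/csi-volumesnapshot-class: "true" - the recipe covers the full picture.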

ElfDisclosure for July 2023 : GitOps-based SaaS now Open Source

I've just finished putting together a progress report for ElfHosted, covering July 2023. The report details all the changes we went through during the month (more than I remember!), and summarizes our various metrics (CPU, network, etc.).

Of particular note here is that the GitOps and helm chart repos which power a production, HA SaaS are now fully open-sourced!

(Oh, and we generated actual revenue during July 2023!)

Here's a high-level summary:

"Elf-Disclosure" for June 2023

It's been a month since ElfHosted was born! 👶

I've worked way more than I expected, and the work has been harder than I expected, but I've immensely enjoyed the challenge of building something fast and in public.

What follows here are our recent changes, the current stats - time/money spent, revenue (haha), and lots of data / graphs re the current state of the platform.

Introduction to ElfHosted

I've consulted on the building and operation of an "appbox" platform over the past 2 years, and my client/partner has made the difficult decision to shut the platform down, partly due to increased datacenter power costs and capital constraints.

So I've got two years' worth of hard-earned lessons and ideas re how to build a GitOps-powered app hosting platform, and a generous and loyal userbase - I don't want to lose either, and I've enjoyed the process of building out the platform, so I thought I'd document the process by setting up another platform, on a smaller scale (but able to accommodate growth).

When helm says "no" (failed to delete release)

My beloved "Penguin Patrol" bot, which I use to give GitHub / Patreon / Ko-Fi supporters access to the premix repo, was deployed on a Kube 1.19 DigitalOcean cluster 3 years ago. At the time, the Ingress API was at v1beta1.

Fast-forward to today, several Kubernetes version upgrades later (it's on 1.23 currently, and we're on Ingress v1), and I discovered that I was unable to upgrade the chart, since helm complained that the previous release referred to deprecated APIs.

Worse, helm wouldn't let me delete and re-install the release - because of those damned deprecated APIs!

Here's how I fixed it...
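
For context, a common escape hatch in this situation (the full post has the gory details; the release name, namespace, and chart below are placeholders) is the helm mapkubeapis plugin, which rewrites the deprecated API versions recorded in the stored release metadata:

# Install the (official) mapkubeapis helm plugin
helm plugin install https://github.com/helm/helm-mapkubeapis

# Rewrite deprecated API versions (e.g. networking.k8s.io/v1beta1 Ingress)
# in the stored release metadata - release / namespace names are examples
helm mapkubeapis my-release --namespace my-namespace

# With the metadata cleaned up, upgrades (or uninstalls) should work again
helm upgrade my-release my-repo/my-chart --namespace my-namespace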

That time when a Proxmox upgrade silently capped my MTU

I feed and water several Proxmox clusters, one of which was recently upgraded to PVE 7.3. This cluster runs VMs used to build a CI instance of a bare-metal Kubernetes cluster I support. Every day the CI cluster is automatically destroyed and rebuilt, to give assurance that our recent changes haven't introduced a failure which would prevent a re-install.

Since the PVE 7.3 upgrade, the CI cluster has been failing to build, because the out-of-cluster Vault instance we use to secure etcd secrets failed to sync. After much debugging, I'd like to present a variation of a famous haiku to summarize the problem:

It's not MTU!
There's no way it's MTU!
It was MTU.

Here's how it went down...
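
(If you just want the quick sanity-check rather than the whole story: a capped MTU is easy to confirm with a do-not-fragment ping. The interface name and address below are examples.)

# What MTU is actually set on the bridge / interface? (vmbr0 is a typical Proxmox bridge name)
ip link show vmbr0 | grep mtu

# Send a ping that needs a full 1500-byte frame (1472 bytes payload + 8 ICMP + 20 IP),
# with fragmentation forbidden. If the path MTU has been silently capped, this fails
# while smaller payloads succeed.
ping -M do -s 1472 -c 3 192.0.2.10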

Made changes to your CoreDNS deployment / images? You may find kubeadm uncooperative...

Are you trying to join a new control-plane node to a kubeadm-installed cluster, and seeing an error like this?

start version '8916c89e1538ea3941b58847e448a2c6d940c01b8e716b20423d2d8b189d3972' not supported
unable to get list of changes to the configuration.
k8s.io/kubernetes/cmd/kubeadm/app/phases/addons/dns.isCoreDNSConfigMapMigrationRequired

You've changed your CoreDNS deployment, haven't you? You're using a custom image, or an image digest, or you're using an admission webhook to mutate pods upon recreation?

Here's what it means, and how to work around it...
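
A common workaround (not necessarily the only one - and the image tag below is just an example, so match whatever CoreDNS version your cluster actually runs) is to temporarily point the CoreDNS deployment back at a plain upstream tag that kubeadm recognizes, join the node, then restore your customizations:

# What image will kubeadm see for CoreDNS?
kubectl -n kube-system get deployment coredns \
  -o jsonpath='{.spec.template.spec.containers[0].image}'; echo

# Temporarily revert to a plain, upstream-recognized tag (example version only)
kubectl -n kube-system set image deployment/coredns \
  coredns=registry.k8s.io/coredns/coredns:v1.8.6

# ... join the new control-plane node with kubeadm ...

# ... then re-apply your custom image / digest / webhook mutations afterwards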