Kanboard is one of my sponsored projects - a project I financially support on a regular basis because of its utility to me. I use it both in my DayJob™, and to manage my overflowing, overly-optimistic personal commitments! 😓
- Visualize your work
- Limit your work in progress to be more efficient
- Customize your boards according to your business activities
- Multiple projects with the ability to drag and drop tasks
- Reports and analytics
- Fast and simple to use
- Access from anywhere with a modern browser
- Plugins and integrations with external services
- Free, open source and self-hosted
- Super simple installation
- A Kubernetes Cluster including Traefik Ingress
- A DNS name for your kanboard instance (kanboard.example.com, below) pointing to your load balancer, fronting your Traefik ingress
## Prepare Traefik for the namespace
When you deployed Traefik via the helm chart, you would have customized `values.yml` for your deployment. `values.yml` contains a list of namespaces which Traefik is permitted to access. Update `values.yml` to include the kanboard namespace, as illustrated below:
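As a sketch, the relevant section of the stable/traefik chart's `values.yml` looks something like this (the other namespace entries are placeholders; keep whatever your existing file already lists, and add `kanboard`):

```yaml
# Excerpt of values.yml for the stable/traefik chart.
# Your file will contain other customizations; only the
# namespace list is shown here.
kubernetes:
  namespaces:
    - kube-system
    - default
    - kanboard   # add this line
```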
If you've updated `values.yml`, upgrade your Traefik deployment via helm, by running:

```bash
helm upgrade --values values.yml traefik stable/traefik --recreate-pods
```
## Create data locations
Although we could simply bind-mount local volumes on a local Kubernetes cluster, since we're targeting a cloud-based Kubernetes deployment, we only need a local path to store the YAML files which define the various aspects of our Kubernetes deployment.
We use Kubernetes namespaces for service discovery and isolation between our stacks, so create a namespace for the kanboard stack with the following .yml:
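A minimal namespace definition looks like this (the filename is up to you; apply it with `kubectl create -f <filename>`):

```yaml
# namespace.yml - creates the kanboard namespace,
# isolating this stack from others in the cluster
apiVersion: v1
kind: Namespace
metadata:
  name: kanboard
```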
## Create persistent volume claim
Persistent volume claims are a streamlined way to create a persistent volume and assign it to a container in a pod. Create a claim for the kanboard app and plugin data:
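A sketch of such a claim, assuming a claim name of `kanboard-volumeclaim` and a 1GB request (adjust to taste). The annotation shown assumes the `backup.kubernetes.io/deltas` format used by k8s-snapshots:

```yaml
# persistent-volumeclaim.yml - claims a volume for
# kanboard's app and plugin data
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: kanboard-volumeclaim
  namespace: kanboard
  annotations:
    # Assumed k8s-snapshots schedule: snapshot daily, keep for 7 days
    backup.kubernetes.io/deltas: P1D P7D
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```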
What's that annotation about?
The annotation is used by k8s-snapshots to create daily incremental snapshots of your persistent volumes. In this case, our volume is snapshotted daily, and copies kept for 7 days.
Kanboard's configuration is all contained within `config.php`, which needs to be presented to the container. We could maintain `config.php` in the persistent volume we created above, but this would require manually accessing the pod every time we wanted to make a change.

Instead, we'll create `config.php` as a ConfigMap, meaning it "lives" within the Kubernetes cluster and can be presented to our pod. When we want to make changes, we simply update the ConfigMap (delete and recreate it, to be accurate), and relaunch the pod.
At the very least, I'd suggest making the following changes:
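The specific changes are a matter of preference, but as an illustration, here are a couple of settings you might adjust (constant names per Kanboard's `config.default.php`):

```php
<?php
// Illustrative config.php settings only; see Kanboard's
// config.default.php for the full list of options.

// Enable the plugin installer UI (plugins persist on the volume)
define('PLUGIN_INSTALLER', true);

// Enable clean URLs, since the container's webserver supports rewriting
define('ENABLE_URL_REWRITE', true);
```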
Now create the ConfigMap from `config.php`, by running:

```bash
kubectl create configmap -n kanboard kanboard-config --from-file=config.php
```
Create a deployment to tell Kubernetes about the desired state of the pod (which it will then attempt to maintain). Note below that we mount the persistent volume twice, to both `/var/www/app/data` and `/var/www/app/plugins`, using the subPath value to differentiate them. This trick saves us having to provision two persistent volumes just for data mounted in two separate locations.
I share (with my Patreon patrons) a private "premix" git repository, which includes the necessary .yml files for all published recipes. This means that patrons can launch any recipe with just a `git pull` and a `kubectl create -f *.yml` 👍
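A sketch of such a deployment, assuming the ConfigMap created above (`kanboard-config`) and a PVC named `kanboard-volumeclaim` (both names are this sketch's assumptions; match whatever you actually created):

```yaml
# deployment.yml - runs a single kanboard pod, with the
# persistent volume mounted twice (via subPath) and
# config.php presented from the ConfigMap
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kanboard
  namespace: kanboard
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kanboard
  template:
    metadata:
      labels:
        app: kanboard
    spec:
      containers:
        - name: app
          image: kanboard/kanboard
          ports:
            - containerPort: 80
          volumeMounts:
            # One PVC, mounted twice, differentiated by subPath
            - name: kanboard-data
              mountPath: /var/www/app/data
              subPath: data
            - name: kanboard-data
              mountPath: /var/www/app/plugins
              subPath: plugins
            # Present config.php as a single file from the ConfigMap
            - name: kanboard-config
              mountPath: /var/www/app/config.php
              subPath: config.php
      volumes:
        - name: kanboard-data
          persistentVolumeClaim:
            claimName: kanboard-volumeclaim
        - name: kanboard-config
          configMap:
            name: kanboard-config
```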
Check that your deployment is running, with `kubectl get pods -n kanboard`. After a minute or so, you should see a "Running" pod, as illustrated below:
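Illustrative output (your pod name's hash suffix and timings will differ):

```text
NAME                        READY   STATUS    RESTARTS   AGE
kanboard-7994bd56c8-abcde   1/1     Running   0          2m
```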
The service resource "advertises" the availability of TCP port 80 in your pod to the rest of the cluster (constrained within your namespace). It may seem like overkill coming from Docker Swarm's automated "service discovery" model, but the Kubernetes design allows for load balancing, rolling upgrades, and health checks of individual pods, without impacting the rest of the cluster elements.
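A minimal service definition for this might look like the following (the service name is an assumption; it just needs to match what your ingress points at):

```yaml
# service.yml - advertises TCP port 80 of the kanboard
# pod to the rest of the namespace
apiVersion: v1
kind: Service
metadata:
  name: kanboard
  namespace: kanboard
spec:
  selector:
    app: kanboard
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
```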
Check that your service is deployed, with `kubectl get services -n kanboard`. You should see something like this:
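Illustrative output (the cluster IP is a placeholder; yours will differ):

```text
NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kanboard   ClusterIP   10.3.0.99    <none>        80/TCP    2m
```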
The ingress resource tells Traefik to forward inbound requests for kanboard.example.com to your service (defined above), which in turn passes the request to the "app" pod. Adjust the config below for your domain.
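A sketch of such an ingress (the `apiVersion` you need depends on your cluster version; older clusters use `extensions/v1beta1` with a slightly different backend syntax):

```yaml
# ingress.yml - routes inbound requests for
# kanboard.example.com to the kanboard service
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kanboard
  namespace: kanboard
spec:
  rules:
    - host: kanboard.example.com   # adjust for your domain
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: kanboard
                port:
                  number: 80
```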
Check that your ingress is deployed, with `kubectl get ingress -n kanboard`. You should see something like this:
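Illustrative output (columns vary slightly by cluster version):

```text
NAME       CLASS    HOSTS                  ADDRESS   PORTS   AGE
kanboard   <none>   kanboard.example.com             80      2m
```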
At this point, you should be able to access your instance on your chosen DNS name (i.e. https://kanboard.example.com)
Since `config.php` is now a ConfigMap, to update it, make your local changes, and then delete and recreate the ConfigMap, by running:
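Assuming the ConfigMap name used earlier (`kanboard-config`), that looks like:

```bash
# ConfigMaps can't be updated from a file in place,
# so delete and recreate from the edited config.php
kubectl delete configmap -n kanboard kanboard-config
kubectl create configmap -n kanboard kanboard-config --from-file=config.php
```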
Then, in the absence of any other changes to the deployment definition, force the pod to restart by issuing a "null patch", as follows:
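One way to do this (the label name here is arbitrary and my own choice; changing any pod-template label forces Kubernetes to roll the pod, which picks up the updated ConfigMap):

```bash
# "Null patch": bump a throwaway label on the pod template
# to trigger a rollout without changing anything meaningful
kubectl patch -n kanboard deployment kanboard -p \
  "{\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"redeployed-at\":\"$(date +%s)\"}}}}}"
```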
To look at the Kanboard pod's logs, run `kubectl logs -n kanboard <name of pod per above> -f`. For further troubleshooting hints, see Troubleshooting.
- The simplest deployment of Kanboard uses the default SQLite database backend, stored on the persistent volume. You could convert this to a "real" database by running MySQL or PostgreSQL in an additional database pod and service. Contact me if you'd like further details ;)
## Tip your waiter (support me) 👏
Did you receive excellent service? Want to make your waiter happy? (..and support development of current and future recipes!) See the support page for (free or paid) ways to say thank you! 👏