
Kubernetes Dashboard (with OIDC token auth)

Kubernetes Dashboard is a polished, general-purpose, web-based UI for Kubernetes clusters. It allows users to manage and troubleshoot applications running in the cluster, as well as manage the cluster itself.

[Screenshot: authentik login]

Importantly, the Dashboard interacts with the kube-apiserver using the credentials you give it. While it's possible to just create a cluster-admin service account, and hard-code that service account's token into the Dashboard, this is far less secure, since you're effectively granting anyone with HTTP access to the dashboard full access to your cluster¹.

There are several ways to pass a token to the Kubernetes Dashboard; this recipe focuses on the Authorization header method, under which every request to the dashboard includes an Authorization: Bearer <token> header.

We'll utilize OAuth2 Proxy, integrated with our OIDC-enabled cluster, to achieve this seamlessly and securely.
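
To give a flavour of how this works, here's a minimal sketch of the relevant OAuth2 Proxy settings. The flag names (provider, oidc-issuer-url, pass-authorization-header, upstream) are real oauth2-proxy options, but the issuer URL, the upstream address, and the extraArgs layout of your particular deployment are assumptions to adapt:

extraArgs:
  provider: oidc
  oidc-issuer-url: "https://authentik.example.com/application/o/dashboard/" # hypothetical issuer URL
  # Forward the OIDC ID token to the upstream (the Dashboard) as an
  # "Authorization: Bearer <token>" header:
  pass-authorization-header: true
  upstream: "http://kubernetes-dashboard.kubernetes-dashboard.svc.cluster.local:9090" # hypothetical in-cluster service address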

Dashboard requirements

Ingredients

Already deployed:

  * A Kubernetes cluster, with OIDC authentication enabled
  * Flux deployment process, bootstrapped to the cluster
  * OAuth2 Proxy, deployed and integrated with your OIDC provider (authentik, in my case)
  * An Ingress controller (Traefik, in the examples below)
  * (Optional) ExternalDNS, if you prefer to manage DNS records the "GitOps way"

Dashboard HelmRepository

We're going to install the Dashboard helm chart from the kubernetes-dashboard repository, so I create the following in my flux repo (assuming it doesn't already exist):

/bootstrap/helmrepositories/helmrepository-kubernetes-dashboard.yaml
apiVersion: source.toolkit.fluxcd.io/v1beta1
kind: HelmRepository
metadata:
  name: kubernetes-dashboard
  namespace: flux-system
spec:
  interval: 15m
  url: https://kubernetes.github.io/dashboard/
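
Once committed and reconciled, you can confirm that flux has fetched the repository index, with something like:

~ ❯ flux get sources helm -n flux-system kubernetes-dashboard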

Dashboard Kustomization

Now that the "global" elements of this deployment (just the HelmRepository in this case) have been defined, we do some "flux-ception", and go one layer deeper, adding another Kustomization, telling flux to deploy any YAMLs found in the repo at /kubernetes-dashboard/. I create this example Kustomization in my flux repo:

/bootstrap/kustomizations/kustomization-kubernetes-dashboard.yaml
apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
kind: Kustomization
metadata:
  name: kubernetes-dashboard
  namespace: flux-system
spec:
  interval: 30m
  path: ./kubernetes-dashboard
  prune: true # remove any elements later removed from the above path
  timeout: 10m # if not set, this defaults to the interval duration (30m, above)
  sourceRef:
    kind: GitRepository
    name: flux-system
  healthChecks:
    - apiVersion: helm.toolkit.fluxcd.io/v2beta1
      kind: HelmRelease
      name: kubernetes-dashboard
      namespace: kubernetes-dashboard

Fast-track your fluxing! πŸš€

Is crafting all these YAMLs by hand too much of a PITA?

I automatically and instantly share (with my sponsors) a private "premix" git repository, which includes an ansible playbook to auto-create all the necessary files in your flux repository, for each chosen recipe!

Let the machines do the TOIL! πŸ‹οΈβ€β™‚οΈ

Dashboard DNSEndpoint

If, like me, you prefer to create your DNS records the "GitOps way" using ExternalDNS, create something like the following example, to add a DNS entry for your Kubernetes Dashboard ingress:

/kubernetes-dashboard/dnsendpoint-kubernetes-dashboard.example.com.yaml
apiVersion: externaldns.k8s.io/v1alpha1
kind: DNSEndpoint
metadata:
  name: "kubernetes-dashboard.example.com"
  namespace: kubernetes-dashboard
spec:
  endpoints:
  - dnsName: "kubernetes-dashboard.example.com"
    recordTTL: 180
    recordType: CNAME
    targets:
    - "traefik-ingress.example.com"  

Tip

Rather than creating individual A records for each host, I prefer to create one A record (traefik-ingress.example.com in the example above), and then create individual CNAME records pointing to that A record.
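
For completeness, here's a sketch of what that single A record could look like as a DNSEndpoint of its own. The namespace and target IP are hypothetical placeholders for your ingress controller's details:

apiVersion: externaldns.k8s.io/v1alpha1
kind: DNSEndpoint
metadata:
  name: "traefik-ingress.example.com"
  namespace: traefik # hypothetical: wherever your ingress controller lives
spec:
  endpoints:
  - dnsName: "traefik-ingress.example.com"
    recordTTL: 180
    recordType: A
    targets:
    - "192.0.2.10" # placeholder: your ingress controller's external IP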

Dashboard HelmRelease

Lastly, having set the scene above, we define the HelmRelease which will actually deploy kubernetes-dashboard into the cluster. We start with a basic HelmRelease YAML, like this example:

/kubernetes-dashboard/helmrelease-kubernetes-dashboard.yaml
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  chart:
    spec:
      chart: kubernetes-dashboard
      version: 6.0.x # auto-update to semver bugfixes only (1)
      sourceRef:
        kind: HelmRepository
        name: kubernetes-dashboard
        namespace: flux-system
  interval: 15m
  timeout: 5m
  releaseName: kubernetes-dashboard
  values: # paste contents of upstream values.yaml below, indented 4 spaces (2)

1. I like to set this to the semver minor version of the Dashboard's current helm chart, so that I'll inherit bug fixes but not new features (since I'll need to manually update my values to accommodate new releases anyway).
2. Paste the full contents of the upstream values.yaml here, indented 4 spaces under the values: key.

If we deploy this helmrelease as-is, we'll inherit every default from the upstream Dashboard helm chart. That's rarely what we want, so my preference is to take the entire contents of the Dashboard helm chart's values.yaml, and paste it (indented) under the values key. This way, I can make my own changes in the context of the entire values.yaml, rather than cherry-picking just the items I want to change, which makes future chart upgrades simpler.
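
Incidentally, if you'd rather dump the upstream values locally than copy them from a browser, the helm CLI can do it for you. The version below is a placeholder; pin it to the actual 6.0.x chart release you're deploying:

~ ❯ helm repo add kubernetes-dashboard https://kubernetes.github.io/dashboard/
~ ❯ helm show values kubernetes-dashboard/kubernetes-dashboard --version 6.0.8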

Why not put values in a separate ConfigMap?

Didn't you previously advise putting helm chart values into a separate ConfigMap?

Yes, I did. And in practice, I've since changed my mind.

Why? Because having the helm values directly in the HelmRelease offers the following advantages:

  1. If you use the YAML extension in VSCode, you'll see a full path to the YAML elements, which can make grokking complex charts easier.
  2. When flux detects a change to a value in a HelmRelease, this forces an immediate reconciliation of the HelmRelease, as opposed to the ConfigMap approach, which requires waiting for the next scheduled reconciliation (or forcing one manually, as shown after this list).
  3. Renovate can parse HelmRelease YAMLs and create PRs when they contain docker image references which can be updated.
  4. In practice, adapting a HelmRelease to match upstream chart changes is no different to adapting a ConfigMap, and so there's no real benefit to splitting the chart values into a separate ConfigMap, IMO.
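
Regarding #2 above, the manual equivalent of that forced reconciliation would be something like:

~ ❯ flux reconcile helmrelease kubernetes-dashboard -n kubernetes-dashboard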

Then work your way through the values you pasted, and change any which are specific to your configuration.

Beware v3.0.0-alpha0

The Dashboard repo's master branch has already been updated to a (breaking) new architecture. Since we're not lunatics, we're going to use the latest stable 6.0.x release instead! For this reason, take care to avoid the values.yaml in the repo's master branch, and use the chart's values.yaml linked from artifacthub instead.

The following sections detail suggested changes to the values pasted into /kubernetes-dashboard/helmrelease-kubernetes-dashboard.yaml from the Dashboard helm chart's values.yaml. The values are already indented correctly to be copied, pasted into the HelmRelease, and adjusted as necessary.

Enable insecure mode

Because we're using OAuth2 Proxy in front of the Dashboard, the incoming request will be HTTP from the Dashboard's perspective, rather than HTTPS. We're happy to permit this, so make at least the following change to extraArgs:

extraArgs:
  # - --enable-skip-login
  - --enable-insecure-login

Install Dashboard!

Commit the changes to your flux repository, and either wait for the reconciliation interval, or force a reconciliation using flux reconcile source git flux-system. You should see the kustomization appear...

~ ❯ flux get kustomizations kubernetes-dashboard
NAME        READY   MESSAGE                         REVISION        SUSPENDED
kubernetes-dashboard    True    Applied revision: main/70da637  main/70da637    False
~ ❯

The helmrelease should be reconciled...

~ ❯ flux get helmreleases -n kubernetes-dashboard kubernetes-dashboard
NAME        READY   MESSAGE                             REVISION    SUSPENDED
kubernetes-dashboard    True    Release reconciliation succeeded    v6.0.x      False
~ ❯

And you should have happy pods in the kubernetes-dashboard namespace:

~ ❯ k get pods -n kubernetes-dashboard -l app.kubernetes.io/name=kubernetes-dashboard
NAME                                  READY   STATUS    RESTARTS   AGE
kubernetes-dashboard-7c94b7446d-nwsss   1/1     Running   0          5m14s
~ ❯

Is that all?

Feels too easy, doesn't it?

The reason is that all the hard work (ingress, OIDC authentication, etc.) is handled by OAuth2 Proxy, so provided that's been deployed and tested, you're good-to-go!

Browse to the URL you configured in your OAuth2 Proxy ingress, log into your OIDC provider, and you should be redirected to your Kubernetes Dashboard UI, with all the privileges your authentication token grants you 💪

Summary

What have we achieved? We've got a dashboard for Kubernetes, dammit! That's amaaazing!

And even better, it doesn't rely on some hacky copy/pasting of tokens, or on disabling security; it uses our existing, trusted OIDC cluster auth. This also means that you can grant other users access to the dashboard with more restrictive (e.g., read-only) privileges, as illustrated below.
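
For example, here's a minimal sketch of a read-only grant, binding an OIDC group to Kubernetes' built-in view ClusterRole. The group name (and any --oidc-groups-prefix configured on your kube-apiserver) are assumptions to adapt to your own OIDC setup:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: dashboard-read-only
subjects:
  - kind: Group
    name: "oidc:dashboard-viewers" # hypothetical OIDC group claim
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: view # built-in read-only ClusterRole
  apiGroup: rbac.authorization.k8s.io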

Created:

  * Kubernetes Dashboard, deployed via flux into the kubernetes-dashboard namespace, and protected behind OAuth2 Proxy with our OIDC cluster auth

Chef's notes πŸ““


  1. Plus, you wouldn't be able to do tiered access in this scenario

Tip your waiter (sponsor) πŸ‘

Did you receive excellent service? Want to compliment the chef? (..and support development of current and future recipes!) Sponsor me on Github / Ko-Fi / Patreon, or see the contribute page for more (free or paid) ways to say thank you! πŸ‘

Employ your chef (engage) 🀝

Is this too much of a geeky PITA? Do you just want results, stat? I do this for a living - I'm a full-time Kubernetes contractor, providing consulting and engineering expertise to businesses needing short-term, short-notice support in the cloud-native space, including AWS/Azure/GKE, Kubernetes, CI/CD and automation.

Learn more about working with me here.

Flirt with waiter (subscribe) πŸ’Œ

Want to know now when this recipe gets updated, or when future recipes are added? Subscribe to the RSS feed, or leave your email address below, and we'll keep you updated.

Dashboard resources πŸ“

Your comments? πŸ’¬