# Polaris on Kubernetes
Fairwinds' Polaris is an open-source policy engine which helps you ensure that your cluster aligns with best practices in the areas of security, reliability, networking, and efficiency.

To follow this recipe, you'll need:
- A Kubernetes cluster
- Flux deployment process bootstrapped
- An Ingress controller to route incoming traffic to services
- External DNS, to create a DNS entry the "flux" way
We need a namespace to deploy our HelmRelease and associated YAMLs into. Per the flux design, I create this example YAML in my flux repo:
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: polaris
```
We're going to install the Polaris helm chart from the fairwinds-stable repository, so I create the following in my flux repo (assuming it doesn't already exist):
```yaml
apiVersion: source.toolkit.fluxcd.io/v1beta1
kind: HelmRepository
metadata:
  name: fairwinds-stable
  namespace: flux-system
spec:
  interval: 15m
  url: https://charts.fairwinds.com/stable
```
Now that the "global" elements of this deployment (just the HelmRepository in this case) have been defined, we do some "flux-ception", and go one layer deeper, adding another Kustomization, telling flux to deploy any YAMLs found in the repo at
/polaris/. I create this example Kustomization in my flux repo:
```yaml
apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
kind: Kustomization
metadata:
  name: polaris
  namespace: flux-system
spec:
  interval: 30m
  path: ./polaris
  prune: true # remove any elements later removed from the above path
  timeout: 10m # if not set, this defaults to the interval duration
  sourceRef:
    kind: GitRepository
    name: flux-system
  healthChecks:
    - apiVersion: helm.toolkit.fluxcd.io/v2beta1
      kind: HelmRelease
      name: polaris
      namespace: polaris
```
Fast-track your fluxing! 🚀
Is crafting all these YAMLs by hand too much of a PITA?
I automatically and instantly share (with my sponsors) a private "premix" git repository, which includes an ansible playbook to auto-create all the necessary files in your flux repository, for each chosen recipe!
Let the machines do the TOIL!
If, like me, you prefer to create your DNS records the "GitOps way" using ExternalDNS, create something like the following example to create a DNS entry for your Polaris ingress:
```yaml
apiVersion: externaldns.k8s.io/v1alpha1
kind: DNSEndpoint
metadata:
  name: "polaris.example.com"
  namespace: polaris
spec:
  endpoints:
    - dnsName: "polaris.example.com"
      recordTTL: 180
      recordType: CNAME
      targets:
        - "traefik-ingress.example.com"
```
Rather than creating individual A records for each host, I prefer to create one A record (`traefik-ingress.example.com` in the example above), and then create individual CNAME records pointing to that A record.
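For completeness, here's a minimal sketch of what that single A record could look like as a DNSEndpoint of its own (the hostname matches the example above; the namespace and IP address are purely illustrative):

```yaml
apiVersion: externaldns.k8s.io/v1alpha1
kind: DNSEndpoint
metadata:
  name: "traefik-ingress.example.com"
  namespace: traefik # illustrative - wherever your ingress controller lives
spec:
  endpoints:
    - dnsName: "traefik-ingress.example.com"
      recordTTL: 180
      recordType: A
      targets:
        - "192.0.2.10" # illustrative - your ingress controller's public IP
```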
Lastly, having set the scene above, we define the HelmRelease which will actually deploy Polaris into the cluster. We start with a basic HelmRelease YAML, like this example:
```yaml
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: polaris
  namespace: polaris
spec:
  chart:
    spec:
      chart: polaris
      version: 5.16.x # auto-update to semver bugfixes only (1)
      sourceRef:
        kind: HelmRepository
        name: fairwinds-stable
        namespace: flux-system
  interval: 15m
  timeout: 5m
  releaseName: polaris
  values: # paste contents of upstream values.yaml below, indented 4 spaces (2)
```
1. I like to set this to the semver minor version of the current Polaris helm chart, so that I'll inherit bug fixes but not any new features (since I'll need to manually update my values to accommodate new releases anyway).
2. Paste the full contents of the upstream values.yaml here, indented 4 spaces under the `values:` key.
If we deploy this HelmRelease as-is, we'll inherit every default from the upstream Polaris helm chart. That's hardly ever what we want, so my preference is to take the entire contents of the Polaris helm chart's values.yaml, and to paste these (indented) under the `values:` key. This means that I can then make my own changes in the context of the entire values.yaml, rather than cherry-picking just the items I want to change, which makes future chart upgrades simpler.
Why not put values in a separate ConfigMap?
Didn't you previously advise putting helm chart values into a separate ConfigMap?
Yes, I did. And in practice, I've changed my mind.
Why? Because having the helm values directly in the HelmRelease offers the following advantages:
- If you use the YAML extension in VSCode, you'll see a full path to the YAML elements, which can make grokking complex charts easier.
- When flux detects a change to a value in a HelmRelease, this forces an immediate reconciliation of the HelmRelease, as opposed to the ConfigMap solution, which requires waiting on the next scheduled reconciliation.
- Renovate can parse HelmRelease YAMLs and create PRs when they contain docker image references which can be updated.
- In practice, adapting a HelmRelease to match upstream chart changes is no different from adapting a ConfigMap, so there's no real benefit to splitting the chart values into a separate ConfigMap, IMO (for contrast, the ConfigMap approach is sketched below).
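For reference, here's a minimal sketch of the ConfigMap approach I moved away from, using the HelmRelease's `valuesFrom` mechanism (the ConfigMap name is illustrative):

```yaml
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: polaris
  namespace: polaris
spec:
  # chart spec as per the HelmRelease above...
  valuesFrom:
    - kind: ConfigMap
      name: polaris-helm-chart-value-overrides # illustrative name
      valuesKey: values.yaml # the key within the ConfigMap which holds the chart values
```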
Then work your way through the values you pasted, and change any which are specific to your configuration.
Initially you'll probably want to use the dashboard without the webhook, so you'll want to at least enable the ingress for the dashboard (the dashboard itself is enabled by default):
```yaml
values:
  dashboard:
    ingress:
      # dashboard.ingress.enabled -- Whether to enable ingress to the dashboard
      enabled: true
```
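As for the webhook: in the chart versions I've used, it's disabled by default, so there's nothing to change yet. Once you trust your checks and exemptions, and want Polaris to actively reject non-compliant workloads, you'd enable it under the `webhook` key, along these lines:

```yaml
values:
  webhook:
    # webhook.enable -- whether to run the admission webhook, which blocks
    # non-compliant workloads from being deployed at all
    enable: false # flip to true when you're ready to enforce
```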
Commit the changes to your flux repository, and either wait for the reconciliation interval, or force a reconciliation using `flux reconcile source git flux-system`. You should see the kustomization appear...
```
~ ❯ flux get kustomizations polaris
NAME     READY  MESSAGE                         REVISION      SUSPENDED
polaris  True   Applied revision: main/70da637  main/70da637  False
~ ❯
```
The HelmRelease should be reconciled...
```
~ ❯ flux get helmreleases -n polaris polaris
NAME     READY  MESSAGE                           REVISION  SUSPENDED
polaris  True   Release reconciliation succeeded  v5.16.x   False
~ ❯
```
And you should have happy pods in the polaris namespace:
```
~ ❯ k get pods -n polaris -l app.kubernetes.io/name=polaris
NAME                       READY   STATUS    RESTARTS   AGE
polaris-7c94b7446d-nwsss   1/1     Running   0          5m14s
~ ❯
```
## Check your score
Browse to the URL you configured for your ingress above (it may take a while for the report to run), and confirm that Polaris is displaying your cluster overview / score.
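If your ingress isn't working yet, you can sanity-check the dashboard with a port-forward instead. A quick sketch, assuming the chart's default dashboard service name (confirm yours with `kubectl get svc -n polaris`):

```bash
# forward local port 8080 to the dashboard service, then browse to http://localhost:8080
kubectl port-forward --namespace polaris svc/polaris-dashboard 8080:80
```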
Now pick yourself up off the floor... some of the issues are false positives![^1]
## Improve your score
You may not care about some of the checks being applied. For example, you may be using a single-node cluster, in which case the HA / resilience checks may be immaterial to you.
### Selectively disable checks
The Polaris Documentation explains how to change the `config` section of values.yaml. You can change the priority (ignore, warning, etc.) of various checks. Here's how it looks on a cluster I manage:
```yaml
config:
  checks:
    # reliability
    deploymentMissingReplicas: warning
    priorityClassNotSet: ignore
    tagNotSpecified: danger
    pullPolicyNotAlways: ignore
    readinessProbeMissing: warning
    livenessProbeMissing: ignore # Per https://github.com/zegl/kube-score/blob/master/README_PROBES.md, we don't _need_ liveness probes if we have readiness probes
    pdbDisruptionsIsZero: warning
    missingPodDisruptionBudget: ignore
    topologySpreadConstraint: ignore # we don't have complex topology
    # <truncated>
```
You may want a check running, but want to ignore results from a particular namespace. This can be done using annotations on individual workloads, but I prefer to codify these in Polaris' config, so that all my exemptions are stored (and versioned) in one place.
Exemptions are also configured under `config`, as illustrated below:
```yaml
config:
  exemptions:
    - namespace: kube-system
      controllerNames:
        - kube-apiserver
        - kube-proxy
        - kube-scheduler
        - etcd-manager-events
        - kube-controller-manager
        - kube-dns
        - etcd-manager-main
      rules:
        - hostPortSet
        - hostNetworkSet
        - readinessProbeMissing
        - cpuRequestsMissing
        - cpuLimitsMissing
        - memoryRequestsMissing
        - memoryLimitsMissing
        - runAsRootAllowed
        - runAsPrivileged
        - notReadOnlyRootFilesystem
        - hostPIDSet
    # Special case for Cilium
    - namespace: kube-system
      controllerNames:
        - cilium
      rules:
        - runAsRootAllowed
```
To exempt a controller from all checks via annotations, apply an annotation like this:

```bash
kubectl annotate deployment my-deployment polaris.fairwinds.com/exempt=true
```
To exempt a controller from a particular check via annotations, use an annotation in the form of `polaris.fairwinds.com/<checkName>-exempt=true`:

```bash
kubectl annotate deployment my-deployment polaris.fairwinds.com/cpuRequestsMissing-exempt=true
```
Here's an example from the ClusterRoleBinding used for authentik:
```yaml
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: oidc-group-admin-kube-apiserver
  annotations:
    polaris.fairwinds.com/clusterrolebindingPodExecAttach-exempt: "true"
    polaris.fairwinds.com/clusterrolebindingClusterAdmin-exempt: "true"
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: Group
    name: oidc:admin-kube-apiserver # for authentik
  - kind: Group
    name: admin-kube-apiserver # for weave-gitops
```
What have we achieved? We've deployed Polaris, which we can use for a point-in-time audit of our cluster's best-practice configuration, or even as a webhook to prevent non-compliant configuration from being applied in the first place!
- Polaris deployed and ready to improve our cluster security, resiliency, and efficiency!
- Work your way through the legitimate issues highlighted by Polaris, make improvements, and refresh the report page, gradually working your way up to an A+ "smooth sailing" cluster!
## Chef's notes 📓
[^1]: You'd be surprised how many are not false positives though - especially workloads deployed from 3rd-party helm charts, which are usually defaulted to the most generally compatible configurations.
### Tip your waiter (sponsor) 👏
Did you receive excellent service? Want to compliment the chef? (..and support development of current and future recipes!) Sponsor me on Github / Ko-Fi / Patreon, or see the contribute page for more (free or paid) ways to say thank you! 👏
### Employ your chef (engage) 🤝
Is this too much of a geeky PITA? Do you just want results, stat? I do this for a living - I'm a full-time Kubernetes contractor, providing consulting and engineering expertise to businesses needing short-term, short-notice support in the cloud-native space, including AWS/Azure/GKE, Kubernetes, CI/CD and automation.
Learn more about working with me here.
### Flirt with waiter (subscribe) 💌
Want to know now when this recipe gets updated, or when future recipes are added? Subscribe to the RSS feed, or leave your email address below, and we'll keep you updated.