Kubernetes' internal DNS-based service discovery means that every service is resolvable within the cluster. You can create a WordPress pod with a database URL pointing to "mysql", and trust that it'll find the service named "mysql" in the same namespace (or "mysql.weirdothernamespace", if you prefer).
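As a quick sketch of how that looks in practice (all names here are illustrative), a WordPress container can point at the in-cluster "mysql" service purely by its DNS name:

```yaml
# Illustrative snippet only: a WordPress deployment resolving the in-cluster
# "mysql" service by name, via Kubernetes' internal DNS
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress
  namespace: blog
spec:
  replicas: 1
  selector:
    matchLabels:
      app: wordpress
  template:
    metadata:
      labels:
        app: wordpress
    spec:
      containers:
        - name: wordpress
          image: wordpress:6
          env:
            - name: WORDPRESS_DB_HOST
              # Resolves via cluster DNS to the "mysql" service in this
              # namespace; "mysql.weirdothernamespace" would cross namespaces
              value: mysql
```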
This super-handy DNS magic only works within the cluster, though. If you want to connect to the hypothetical WordPress service from outside the cluster, you need to manually create a DNS entry pointing to the LoadBalancer IP of that service. While wildcard DNS might make this a little easier, it's still too manual, and not at all "gitopsy" enough!
ExternalDNS is a controller for Kubernetes which watches the objects you create (Services, Ingresses, etc.), and configures external DNS providers (like Cloudflare, Route53, etc.) accordingly. With ExternalDNS, you can just deploy an Ingress referencing "mywordywordpressblog.batman.com", and have that DNS entry auto-created at your provider within minutes 💪
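For instance (a hypothetical sketch, reusing the hostname above), an Ingress like this is all ExternalDNS needs to see in order to create the matching record, assuming ingresses are enabled as a source:

```yaml
# Hypothetical Ingress: with ExternalDNS watching ingresses, the host rule
# below is enough to get a matching DNS record created at your provider
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: mywordywordpressblog
  namespace: default
spec:
  rules:
    - host: mywordywordpressblog.batman.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: wordpress
                port:
                  number: 80
```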
External DNS Namespace
We need a namespace to deploy our HelmRelease and associated YAMLs into. Per the flux design, I create this example YAML in my flux repo:
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: external-dns
```
External DNS HelmRepository
We're going to install the External DNS helm chart from the bitnami repository, so I create the following in my flux repo (assuming it doesn't already exist):
```yaml
apiVersion: source.toolkit.fluxcd.io/v1beta1
kind: HelmRepository
metadata:
  name: bitnami
  namespace: flux-system
spec:
  interval: 15m
  url: https://charts.bitnami.com/bitnami
```
External DNS Kustomization
Now that the "global" elements of this deployment (just the HelmRepository, in this case) have been defined, we do some "flux-ception", and go one layer deeper, adding another Kustomization which tells flux to deploy any YAMLs found in the repo at `/external-dns/`. I create this example Kustomization in my flux repo:
```yaml
apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
kind: Kustomization
metadata:
  name: external-dns
  namespace: flux-system
spec:
  interval: 30m
  path: ./external-dns
  prune: true # remove any elements later removed from the above path
  timeout: 10m # if not set, this defaults to the interval duration (30m, above)
  sourceRef:
    kind: GitRepository
    name: flux-system
  healthChecks:
    - apiVersion: helm.toolkit.fluxcd.io/v2beta1
      kind: HelmRelease
      name: external-dns
      namespace: external-dns
```
Fast-track your fluxing! 🚀
Is crafting all these YAMLs by hand too much of a PITA?
I automatically and instantly share (with my sponsors) a private "premix" git repository, which includes an ansible playbook to auto-create all the necessary files in your flux repository, for each chosen recipe!
Let the machines do the TOIL!
External DNS HelmRelease
Lastly, having set the scene above, we define the HelmRelease which will actually deploy external-dns into the cluster. We start with a basic HelmRelease YAML, like this example:
```yaml
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: external-dns
  namespace: external-dns
spec:
  chart:
    spec:
      chart: external-dns
      version: 5.1.x # auto-update to semver bugfixes only (1)
      sourceRef:
        kind: HelmRepository
        name: bitnami
        namespace: flux-system
  interval: 15m
  timeout: 5m
  releaseName: external-dns
  values: # paste contents of upstream values.yaml below, indented 4 spaces (2)
```
1. I like to set this to the semver minor version of the current External DNS helm chart, so that I'll inherit bug fixes but not any new features (since I'll need to manually update my values to accommodate new releases anyway).
2. Paste the full contents of the upstream values.yaml here, indented 4 spaces under the `values:` key.
If we deploy this HelmRelease as-is, we'll inherit every default from the upstream External DNS helm chart. That's hardly ever what we want, so my preference is to take the entire contents of the External DNS helm chart's values.yaml, and paste these (indented) under the `values:` key. This means I can then make my own changes in the context of the entire values.yaml, rather than cherry-picking just the items I want to change, which makes future chart upgrades simpler.
Why not put values in a separate ConfigMap?
Didn't you previously advise to put helm chart values into a separate ConfigMap?
Yes, I did. And in practice, I've changed my mind.
Why? Because having the helm values directly in the HelmRelease offers the following advantages:
- If you use the YAML extension in VSCode, you'll see a full path to the YAML elements, which can make grokking complex charts easier.
- When flux detects a change to a value in a HelmRelease, this forces an immediate reconciliation of the HelmRelease, as opposed to the ConfigMap solution, which requires waiting on the next scheduled reconciliation.
- Renovate can parse HelmRelease YAMLs and create PRs when they contain docker image references which can be updated.
- In practice, adapting a HelmRelease to match upstream chart changes is no different to adapting a ConfigMap, and so there's no real benefit to splitting the chart values into a separate ConfigMap, IMO.
Then work your way through the values you pasted, changing any which are specific to your configuration.
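As a hedged example of the sort of values you'd typically change (your provider, domains, and cluster name will differ), a Cloudflare-based setup might tweak these:

```yaml
# Illustrative values.yaml tweaks only; adjust for your own provider/domains
provider: cloudflare
domainFilters: # restrict External DNS to domains you actually control
  - batman.com
txtOwnerId: my-cluster # identifies this cluster's records in a shared zone
policy: upsert-only # don't delete records External DNS didn't create... safer!
```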
By default, the helm chart doesn't install the DNSEndpoint CRD.
If you intend to use CRDs, enable it in the HelmRelease like the example below:
```yaml
crd:
  ## @param crd.create Install and use the integrated DNSEndpoint CRD
  ##
  create: true
```
I recommend changing the default sources from:

```yaml
sources:
  # - crd
  - service
  - ingress
  # - contour-httpproxy
```

to:

```yaml
sources:
  - crd
  # - service
  # - ingress
  # - contour-httpproxy
```
Why only use CRDs as a source?
I thought the whole point of this magic was to create DNS entries from services or ingresses!
You can do that, yes. However, I prefer to be prescriptive, and explicitly decide when a DNS entry will be created. By using CRDs (External DNS creates a new type of resource called a "DNSEndpoint"), I add my DNS entries as YAML files into each kustomization, and I can still employ wildcard DNS where appropriate.
As you work your way through values.yaml, you'll notice that it contains specific placeholders for credentials for various DNS providers.
Take, for example, this config for Cloudflare:
```yaml
cloudflare:
  ## @param cloudflare.apiToken When using the Cloudflare provider, `CF_API_TOKEN` to set (optional)
  ##
  apiToken: ""
  ## @param cloudflare.apiKey When using the Cloudflare provider, `CF_API_KEY` to set (optional)
  ##
  apiKey: ""
  ## @param cloudflare.secretName When using the Cloudflare provider, it's the name of the secret containing cloudflare_api_token or cloudflare_api_key.
  ## This ignores cloudflare.apiToken, and cloudflare.apiKey
  ##
  secretName: ""
  ## @param cloudflare.email When using the Cloudflare provider, `CF_API_EMAIL` to set (optional). Needed when using CF_API_KEY
  ##
  email: ""
  ## @param cloudflare.proxied When using the Cloudflare provider, enable the proxy feature (DDOS protection, CDN...) (optional)
  ##
  proxied: true
```
In the case of Cloudflare (and this may differ per-provider), you can either enter your credentials in cleartext (a baaad idea, since we intend to commit these files into a repo), or you can reference a secret, which External DNS will expect to find in its namespace.
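For example, assuming the secret is named `cloudflare-api-token` (as created in the next step), referencing it would look something like this:

```yaml
# Reference a pre-existing secret rather than cleartext credentials;
# secretName takes precedence over apiToken / apiKey
cloudflare:
  apiToken: ""
  apiKey: ""
  secretName: "cloudflare-api-token" # expected in the external-dns namespace
  email: ""
  proxied: true
```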
Thanks to Sealed Secrets, we have a safe way of committing secrets into our repository, so to create this cloudflare secret, you'd run something like this:
```bash
kubectl create secret generic cloudflare-api-token \
  --namespace external-dns \
  --dry-run=client \
  --from-literal=cloudflare_api_token=gobbledegook \
  -o json \
  | kubeseal --cert <path to public cert> \
  > <path to repo>/external-dns/sealedsecret-cloudflare-api-token.yaml
```
And your sealed secret would end up at `<path to repo>/external-dns/sealedsecret-cloudflare-api-token.yaml`.
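For reference, the generated file looks roughly like the following (the ciphertext here is obviously fake and truncated):

```yaml
# Rough shape of a kubeseal-generated SealedSecret; only the sealed-secrets
# controller in your cluster can decrypt spec.encryptedData
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  name: cloudflare-api-token
  namespace: external-dns
spec:
  encryptedData:
    cloudflare_api_token: AgBh8f... # encrypted, so safe to commit
  template:
    metadata:
      name: cloudflare-api-token
      namespace: external-dns
```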
Install External DNS!
Commit the changes to your flux repository, and either wait for the reconciliation interval, or force a reconciliation using `flux reconcile source git flux-system`. You should see the kustomization appear...
```
~ ❯ flux get kustomizations external-dns
NAME          READY   MESSAGE                          REVISION       SUSPENDED
external-dns  True    Applied revision: main/70da637   main/70da637   False
~ ❯
```
The helmrelease should be reconciled...
```
~ ❯ flux get helmreleases -n external-dns external-dns
NAME          READY   MESSAGE                            REVISION   SUSPENDED
external-dns  True    Release reconciliation succeeded   v5.1.x     False
~ ❯
```
And you should have happy pods in the external-dns namespace:
```
~ ❯ k get pods -n external-dns -l app.kubernetes.io/name=external-dns
NAME                            READY   STATUS    RESTARTS   AGE
external-dns-7c94b7446d-nwsss   1/1     Running   0          5m14s
~ ❯
```
If you're the sort of person who doesn't like to just leak[^1] every service/ingress name into public DNS, you may prefer to manage your DNS entries using CRDs.
You can instruct ExternalDNS to create any DNS entry you please, using a DNSEndpoint resource, and place these in the appropriate folder in your flux repo to be deployed with your HelmRelease:
```yaml
apiVersion: externaldns.k8s.io/v1alpha1
kind: DNSEndpoint
metadata:
  name: batcave.example.com
  namespace: batcave
spec:
  endpoints:
    - dnsName: batcave.example.com
      recordTTL: 180
      recordType: A
      targets:
        - 192.168.99.216
```
You can even create wildcard DNS entries, for example by setting the `dnsName` to a wildcard value.
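As an illustrative sketch (adjust names and IPs to suit), a wildcard version of the DNSEndpoint above might look like:

```yaml
# Illustrative wildcard DNSEndpoint: any name under batcave.example.com
# resolves to the same target (check that your provider supports wildcards)
apiVersion: externaldns.k8s.io/v1alpha1
kind: DNSEndpoint
metadata:
  name: wildcard-batcave
  namespace: batcave
spec:
  endpoints:
    - dnsName: "*.batcave.example.com"
      recordTTL: 180
      recordType: A
      targets:
        - 192.168.99.216
```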
Finally (and this is how I prefer to manage mine), you can create a few A records for "permanent" endpoints (stuff like Ingresses), and then point arbitrary DNS names at these records, like this:
```yaml
apiVersion: externaldns.k8s.io/v1alpha1
kind: DNSEndpoint
metadata:
  name: "robinsroost.example.com"
  namespace: batcave
spec:
  endpoints:
    - dnsName: "robinsroost.example.com"
      recordTTL: 180
      recordType: CNAME
      targets:
        - "batcave.example.com"
```
If DNS entries aren't created as you expect, then the best approach is to check the external-dns logs, by running `kubectl logs -n external-dns -l app.kubernetes.io/name=external-dns`.
What have we achieved? By simply creating another YAML in our flux repo alongside our app HelmReleases, we can record and create the necessary DNS entries, without fiddly manual intervention!
- DNS records are created automatically based on YAMLs (or even just on services and ingresses!)
Chef's notes 📓
[^1]: Why yes, I have accidentally caused outages / conflicts by "leaking" DNS entries automatically!
Tip your waiter (sponsor) 👏
Did you receive excellent service? Want to compliment the chef? (..and support development of current and future recipes!) Sponsor me on Github / Ko-Fi / Patreon, or see the contribute page for more (free or paid) ways to say thank you! 👏
Employ your chef (engage) 🤝
Is this too much of a geeky PITA? Do you just want results, stat? I do this for a living - I'm a full-time Kubernetes contractor, providing consulting and engineering expertise to businesses needing short-term, short-notice support in the cloud-native space, including AWS/Azure/GKE, Kubernetes, CI/CD and automation.
Learn more about working with me here.
Flirt with waiter (subscribe) 💌
Want to know now when this recipe gets updated, or when future recipes are added? Subscribe to the RSS feed, or leave your email address below, and we'll keep you updated.