Automatic DNS for Kubernetes Ingresses with ExternalDNS

ExternalDNS is a relatively new Kubernetes Incubator project that makes Ingresses and Services available via DNS. It currently supports AWS Route 53 and Google Cloud DNS. There are several similar tools available with varying features and capabilities like route53-kubernetes, Mate, and the DNS controller from Kops. While it is not there yet, the goal is for ExternalDNS to include all of the functionality of the other options by 1.0.

In this post, we will use ExternalDNS to automatically create DNS records for Ingress resources on AWS.

Deploying the Ingress Controller

An Ingress provides inbound internet access to Kubernetes Services running in your cluster. The Ingress consists of a set of rules, based on host names and paths, that define how requests are routed to a backend Service. In addition to an Ingress resource, there needs to be an Ingress controller running to actually handle the requests. There are several Ingress controller implementations available: GCE, Traefik, HAProxy, Rancher, and even a shiny, brand new AWS ALB-based controller. In this example, we are going to use the Nginx Ingress controller on AWS.
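To make the rules concrete, here is a sketch of an Ingress that routes by host name and path. The names and host are hypothetical, and the apiVersion reflects the extensions/v1beta1 API current at the time of writing:

```yaml
# Hypothetical Ingress: requests for app.example.com are routed by path.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example
spec:
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /api
        backend:
          serviceName: api-svc    # assumed Service names
          servicePort: 80
      - path: /
        backend:
          serviceName: web-svc
          servicePort: 80
```

Requests to app.example.com/api go to the api-svc Service; everything else under that host goes to web-svc.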

Deploying the nginx-ingress controller requires creating several Kubernetes resources. First, we need to deploy a default backend server. If a request arrives that does not match any of the Ingress rules, it will be routed to the default backend, which will return a 404 response. The defaultbackend Deployment will be backed by a ClusterIP Service that listens on port 80.
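As a sketch, the default backend pair looks roughly like the following. The image tag and names are assumptions based on the upstream example manifests:

```yaml
# Default backend: answers 404 for requests that match no Ingress rule.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: default-http-backend
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: default-http-backend
    spec:
      containers:
      - name: default-http-backend
        image: gcr.io/google_containers/defaultbackend:1.3  # assumed tag
        ports:
        - containerPort: 8080
---
# ClusterIP Service on port 80 in front of the default backend.
apiVersion: v1
kind: Service
metadata:
  name: default-http-backend
spec:
  type: ClusterIP
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: default-http-backend
```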

The nginx-ingress controller itself requires three Kubernetes resources: a Deployment to run the controller, a ConfigMap to hold the controller’s configuration, and a backing Service. Since we are working with AWS, we will deploy a LoadBalancer Service. This will create an Elastic Load Balancer in front of the nginx-ingress controller. The architecture looks something like this:

     [ ELB ]
 [ nginx-ingress ]
   [ Services ]
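The LoadBalancer Service is the piece that triggers ELB creation. A minimal sketch, with the name and selector labels assumed:

```yaml
# LoadBalancer Service: AWS provisions an ELB pointing at the controller pods.
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
spec:
  type: LoadBalancer
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443
  selector:
    app: ingress-nginx   # assumed label on the controller Deployment
```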

We will deploy the nginx-ingress controller using the example manifests in the kubernetes/ingress repository.

kubectl apply -f

At the time of this writing, this deploys a beta version (0.9.0-beta.5) of the nginx-ingress controller. The 0.9.x release of the ingress controller is necessary in order to work with ExternalDNS.

Now that we’ve deployed our Ingress controller, we can move on to our DNS configuration.

ExternalDNS currently requires full access to a single managed zone in Route 53 — it will delete any records that are not managed by ExternalDNS.

Warning: do not use an existing zone containing important DNS records with ExternalDNS. You will lose records.

If you already have a domain registered in Route 53 that you can dedicate to ExternalDNS, feel free to use that. In this post, I will instead show how you can create a subdomain in its own isolated Route 53 hosted zone. I am assuming for the purposes of this post that the parent domain is also hosted in Route 53. However, it is possible to use a subdomain even if the parent domain is not hosted in Route 53. In the following examples, I have a parent domain registered in Route 53 and I will be creating a new hosted zone for a subdomain dedicated to ExternalDNS.

Here is a small script we can use to configure the zone for our subdomain. Note that it depends on the indispensable jq utility.


# ZONE is the subdomain to delegate and PARENT_ZONE its parent domain;
# export these first with your own values, e.g.:
#   export ZONE=k8s.example.com
#   export PARENT_ZONE=example.com

# create the hosted zone for the subdomain
aws route53 create-hosted-zone --name ${ZONE} --caller-reference "$ZONE-$(uuidgen)"

# capture the zone ID
export ZONE_ID=$(aws route53 list-hosted-zones | jq -r ".HostedZones[]|select(.Name == \"${ZONE}.\")|.Id")

# create a changeset template
cat >update-zone.template.json <<EOL
{
  "Comment": "Create a subdomain NS record in the parent domain",
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "",
      "Type": "NS",
      "TTL": 300,
      "ResourceRecords": []
    }
  }]
}
EOL

# generate the changeset for the parent zone
cat update-zone.template.json \
 | jq ".Changes[].ResourceRecordSet.Name=\"${ZONE}.\"" \
 | jq ".Changes[].ResourceRecordSet.ResourceRecords=$(aws route53 get-hosted-zone --id ${ZONE_ID} | jq ".DelegationSet.NameServers|[{\"Value\": .[]}]")" > update-zone.json

# create a NS record for the subdomain in the parent zone
aws route53 change-resource-record-sets \
  --hosted-zone-id $(aws route53 list-hosted-zones | jq -r ".HostedZones[] | select(.Name==\"$PARENT_ZONE.\") | .Id" | sed 's/\/hostedzone\///') \
  --change-batch file://update-zone.json

We are using the AWS CLI to manage our zones in this post but you are probably better off using tools like Terraform or CloudFormation to manage your zones. You can also use the AWS management console if you must.

IAM Permissions

ExternalDNS requires IAM permissions to view and manage your hosted zone. There are a few ways you can grant these permissions depending on how you build and manage your Kubernetes installation on AWS. If you are using Kops, you can add additional IAM policies to your nodes. If you require finer-grained control, take a look at kube2iam. This is the policy I am using for ExternalDNS on my cluster:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "route53:ChangeResourceRecordSets"
      ],
      "Resource": [
        "arn:aws:route53:::hostedzone/<hosted-zone-id>"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "route53:ListHostedZones"
      ],
      "Resource": [
        "*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "route53:ListResourceRecordSets"
      ],
      "Resource": [
        "*"
      ]
    }
  ]
}

If you are following along, you will need to replace the <hosted-zone-id> in the first statement with the correct ID for your zone.

Deploy ExternalDNS

Here is an example Deployment manifest we can use to deploy ExternalDNS:

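The manifest below is a sketch reconstructed from the notes that follow; the image registry path, ConfigMap name, and key are assumptions:

```yaml
# Sketch of an ExternalDNS Deployment (names and ConfigMap key are assumed).
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: external-dns
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: external-dns
    spec:
      containers:
      - name: external-dns
        image: registry.opensource.zalan.do/teapot/external-dns:v0.3.0-beta.0
        args:
        - --source=service        # watch Services
        - --source=ingress        # and Ingresses
        - --provider=aws          # manage records in Route 53
        - --domain-filter=$(DOMAIN_FILTER)
        env:
        - name: DOMAIN_FILTER
          valueFrom:
            configMapKeyRef:
              name: external-dns   # assumed ConfigMap name
              key: domain-filter   # assumed key
```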
A few things to note:

  • ExternalDNS is still in beta. We are using v0.3.0-beta.0 in this example.
  • We are running it with both the service and ingress sources turned on. ExternalDNS can create DNS records for both Services and Ingresses. In this post, we are just working with Ingress resources but ExternalDNS should work with Services as well with this configuration.
  • You must tell ExternalDNS which domain to use. This is done with the --domain-filter argument. The Deployment is configured to read this domain from a ConfigMap that we will create in the next step.
  • We tell ExternalDNS that we are using Route 53 with the --provider=aws argument.

Now we can deploy ExternalDNS. Make sure you change the value of domain-filter in the create configmap command, and note that the domain must end with a trailing “.”.

# create the configmap containing your domain
# (the key name and example domain below are illustrative; use your own
# domain and keep the trailing dot)
kubectl create configmap external-dns --from-literal=domain-filter=example.org.

# deploy ExternalDNS
kubectl apply -f

At this point, ExternalDNS should be up, running, and ready to create DNS records from Ingress resources. Let’s see this work with the same example used in the ExternalDNS documentation for GKE.

You can use this manifest almost as-is but you do need to change the host rule in the Ingress resources to use your domain. This is what ExternalDNS will use to create the necessary DNS records. Download the file and update it with your domain name:

curl -SLO
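The piece to edit in that manifest is the Ingress host rule; it looks something like this, with the host and service name being illustrative:

```yaml
# Demo Ingress: the host value is what ExternalDNS turns into a DNS record.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx
spec:
  rules:
  - host: nginx.external-dns-test.example.org  # change to a name in your zone
    http:
      paths:
      - backend:
          serviceName: nginx   # assumed backing Service
          servicePort: 80
```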

After updating the host rule, we can deploy the demo application:

kubectl apply -f demo.yml

After a minute or two, you should see that ExternalDNS populates your zone with an ALIAS record that points to the ELB for the nginx-ingress controller you deployed earlier. You can check the logs to verify that things are working correctly or to troubleshoot if things are not:

$ kubectl logs -f $(kubectl get po -l app=external-dns -o name)
time="2017-05-04T11:20:39Z" level=info msg="config: &{Master: KubeConfig: Sources:[service ingress] Namespace: FqdnTemplate: Compatibility: Provider:aws GoogleProject: Policy:sync Registry:txt TXTOwnerID:default TXTPrefix: Interval:1m0s Once:false DryRun:false LogFormat:text MetricsAddress::7979 Debug:false}"
time="2017-05-04T11:20:39Z" level=info msg="Connected to cluster at"
time="2017-05-04T11:20:39Z" level=info msg="All records are already up to date"
time="2017-05-04T11:21:40Z" level=info msg="Changing records: CREATE {
  Action: "CREATE",
  ResourceRecordSet: {
    AliasTarget: {
      DNSName: "",
      EvaluateTargetHealth: true,
      HostedZoneId: "Z35SXDOTRQ7X7K"
    },
    Name: "",
    Type: "A"
  }
} ..."
time="2017-05-04T11:21:40Z" level=info msg="Changing records: CREATE {
  Action: "CREATE",
  ResourceRecordSet: {
    Name: "",
    ResourceRecords: [{
      Value: "\"heritage=external-dns,external-dns/owner=default\""
    }],
    TTL: 300,
    Type: "TXT"
  }
} ..."
time="2017-05-04T11:21:40Z" level=info msg="Record in zone were successfully updated"
time="2017-05-04T11:22:40Z" level=info msg="All records are already up to date"
time="2017-05-04T11:23:40Z" level=info msg="All records are already up to date"
time="2017-05-04T11:24:40Z" level=info msg="All records are already up to date"

Assuming everything worked correctly, and allowing for propagation time, you should now be able to access the demo application through its dynamically created domain name:

$ curl
<!DOCTYPE html>
<title>Welcome to nginx!</title>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href=""></a>.<br/>
Commercial support is available at
<a href=""></a>.</p>

<p><em>Thank you for using nginx.</em></p>

Very nice. In the next post, we will build upon this and generate TLS certificates for our Ingress resources with Let’s Encrypt.
