External-DNS EKS AWS

7/15/2019

[AWS EKS 1.13]

I am trying to set up external-dns as described here:

https://github.com/kubernetes-incubator/external-dns/blob/master/docs/tutorials/aws.md

I want to set it up in a dedicated namespace; here is the code:

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: external-dns
  namespace: qa
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: external-dns
rules:
- apiGroups: [""]
  resources: ["services"]
  verbs: ["get","watch","list"]
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get","watch","list"]
- apiGroups: ["extensions"]
  resources: ["ingresses"]
  verbs: ["get","watch","list"]
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["list"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: external-dns-viewer
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: external-dns
subjects:
- kind: ServiceAccount
  name: external-dns
  namespace: qa
---

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: external-dns
  namespace: qa
spec:
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: external-dns
    spec:
      serviceAccountName: external-dns
      containers:
      - name: external-dns
        image: registry.opensource.zalan.do/teapot/external-dns:latest
        args:
        - --source=service
        - --source=ingress
        - --domain-filter=xxxxxx.domain.com
        - --provider=aws
        - --policy=sync
        - --aws-zone-type=public
        - --registry=txt
        - --txt-owner-id=xxxxxxx

Unfortunately this doesn't work; the pod's status is "CrashLoopBackOff".

Here are the logs of the pod:

time="2019-07-15T21:07:22Z" level=info msg="config: {Master: KubeConfig: RequestTimeout:30s IstioIngressGatewayServices:[istio-system/istio-ingressgateway] Sources:[service ingress] Namespace: AnnotationFilter: FQDNTemplate: CombineFQDNAndAnnotation:false IgnoreHostnameAnnotation:false Compatibility: PublishInternal:false PublishHostIP:false ConnectorSourceServer:localhost:8080 Provider:aws GoogleProject: DomainFilter:[xxxx] ExcludeDomains:[] ZoneIDFilter:[] AlibabaCloudConfigFile:/etc/kubernetes/alibaba-cloud.json AlibabaCloudZoneType: AWSZoneType:public AWSZoneTagFilter:[] AWSAssumeRole: AWSBatchChangeSize:1000 AWSBatchChangeInterval:1s AWSEvaluateTargetHealth:true AWSAPIRetries:3 AzureConfigFile:/etc/kubernetes/azure.json AzureResourceGroup: CloudflareProxied:false CloudflareZonesPerPage:50 RcodezeroTXTEncrypt:false InfobloxGridHost: InfobloxWapiPort:443 InfobloxWapiUsername:admin InfobloxWapiPassword: InfobloxWapiVersion:2.3.1 InfobloxSSLVerify:true InfobloxView: InfobloxMaxResults:0 DynCustomerName: DynUsername: DynPassword: DynMinTTLSeconds:0 OCIConfigFile:/etc/kubernetes/oci.yaml InMemoryZones:[] PDNSServer:http://localhost:8081 PDNSAPIKey: PDNSTLSEnabled:false TLSCA: TLSClientCert: TLSClientCertKey: Policy:sync Registry:txt TXTOwnerID:ZTZ2FLS733BGN TXTPrefix: Interval:1m0s Once:false DryRun:false LogFormat:text MetricsAddress::7979 LogLevel:info TXTCacheInterval:0s ExoscaleEndpoint:https://api.exoscale.ch/dns ExoscaleAPIKey: ExoscaleAPISecret: CRDSourceAPIVersion:externaldns.k8s.io/v1alpha1 CRDSourceKind:DNSEndpoint ServiceTypeFilter:[] CFAPIEndpoint: CFUsername: CFPassword: RFC2136Host: RFC2136Port:0 RFC2136Zone: RFC2136Insecure:false RFC2136TSIGKeyName: RFC2136TSIGSecret: RFC2136TSIGSecretAlg: RFC2136TAXFR:false NS1Endpoint: NS1IgnoreSSL:false TransIPAccountName: TransIPPrivateKeyFile:}"
time="2019-07-15T21:07:22Z" level=fatal msg="invalid configuration: no configuration has been provided"

However, if I deploy the exact same manifests in the default namespace, everything works without any issue.

Any help, please?

Thanks

-- Ahmed-F
amazon-eks
external-dns
kubernetes

1 Answer

7/29/2019

The "invalid configuration: no configuration has been provided" error comes from trying to construct the Kubernetes client config without explicit configuration. If no explicit kubeconfig or API server address is provided, external-dns falls back to guessing the in-cluster configuration, i.e. the default API server location and the ServiceAccount credentials mounted into the Pod. If that guess fails, this error message is displayed (a sketch of what the in-cluster fallback expects is shown after the list below).

This default configuration can fail if:

  1. You're using a non-standard configuration (a different API server URL?)
  2. There's a network issue between the Pod and the API Server
  3. RBAC is improperly configured
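
Concretely, the in-cluster fallback expects the Pod to carry the API server address in the KUBERNETES_SERVICE_HOST/KUBERNETES_SERVICE_PORT environment variables and the ServiceAccount token and CA certificate mounted under /var/run/secrets/kubernetes.io/serviceaccount. As a rough sketch of what the ServiceAccount admission controller normally injects (the volume name and token Secret name below are placeholders; the real Secret name is generated), the effective Pod spec should contain something like:

spec:
  serviceAccountName: external-dns
  automountServiceAccountToken: true        # defaults to true unless explicitly disabled
  containers:
  - name: external-dns
    volumeMounts:
    - name: external-dns-token              # placeholder name
      mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      readOnly: true
  volumes:
  - name: external-dns-token
    secret:
      secretName: external-dns-token-xxxxx  # placeholder; the token Secret generated for qa/external-dns

If you describe the running Pod in the qa namespace and that mount is missing, the in-cluster config cannot be built, which matches the fatal error above.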

Assuming you've created the ServiceAccount, ClusterRole, ClusterRoleBinding, etc., this looks like the Terraform Kubernetes provider failing to mount the ServiceAccount token secret into the Pod.

For now, it looks like you'll have to mount the secret manually (see link for more info):

resource "kubernetes_service_account" "foo" {
    name = "foo"
}
resource "kubernetes_deployment" "foo" {
    ...
    spec {
        ...
        template {
            ...
            spec {
                # Normally, this is what you should do:
                #service_account_name = "${kubernetes_service_account.foo.name}"

                volume {
                    name = "${kubernetes_service_account.foo.default_secret_name}"
                    secret {
                        secret_name = "${kubernetes_service_account.foo.default_secret_name}"
                    }
                }
                ...
                container {
                    ...
                    volume_mount {
                        name       = "${kubernetes_service_account.foo.default_secret_name}"
                        mount_path = "/var/run/secrets/kubernetes.io/serviceaccount"
                        read_only  = true
                    }
                }
            }
        }
    }
}
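
For comparison, if you are applying the plain YAML from the question with kubectl rather than Terraform, the analogous knob is automountServiceAccountToken on the Pod template (it defaults to true, but a false value on the ServiceAccount or the Pod disables the mount). A minimal sketch against the Deployment from the question:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: external-dns
  namespace: qa
spec:
  template:
    spec:
      serviceAccountName: external-dns
      # Explicitly request the ServiceAccount token mount so the in-cluster
      # client config can find the token and CA certificate:
      automountServiceAccountToken: true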
-- Curtis Mattoon
Source: StackOverflow