Couldn't find destination zone: No zone found for MYWEBSITEURLHERE.com: AccessDenied:

5/16/2017

I'm trying to deploy Logstash on my Kubernetes cluster. I'm using k8s v1.6.1 with Calico as the network plugin.

The issue I'm having is that the pod spins up but can't seem to register the DNS record. I've stripped my domain name for security purposes:

route53-kubernetes-551223410-wf89w route53-kubernetes W0516 19:47:32.715753       1 service_listener.go:151] Couldn't find destination zone: No zone found for MYWEBSITEURLHERE.com: AccessDenied: User: arn:aws:sts::056146032236:assumed-role/nodes.k8s-uw2.MYWEBSITEURLHERE.com/i-01cac4656e7ee0c4e is not authorized to perform: route53:ListHostedZonesByName
route53-kubernetes-551223410-wf89w route53-kubernetes   status code: 403, request id: 809c62fa-3a70-11e7-bccf-9daca39d7850
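
The denied call can also be reproduced directly from one of the worker nodes with the AWS CLI (assuming the CLI is installed there); since route53-kubernetes uses the node's instance profile, this is a quick way to confirm which role is actually in play and whether the same 403 comes back outside the pod:

aws sts get-caller-identity
aws route53 list-hosted-zones-by-name --dns-name MYWEBSITEURLHERE.com --max-items 1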

Now the weird thing is, I've confirmed that the IAM credentials have been set up correctly for that role:

{
    "RoleName": "nodes.k8s-uw2.MYWEBSITEURLHERE.com",
    "PolicyDocument": {
        "Statement": [
            {
                "Effect": "Allow",
                "Resource": [
                    "*"
                ],
                "Action": [
                    "ec2:Describe*"
                ]
            },
            {
                "Effect": "Allow",
                "Resource": [
                    "*"
                ],
                "Action": [
                    "elasticloadbalancing:DescribeLoadBalancers"
                ]
            },
            {
                "Effect": "Allow",
                "Resource": [
                    "*"
                ],
                "Action": [
                    "ecr:GetAuthorizationToken",
                    "ecr:BatchCheckLayerAvailability",
                    "ecr:GetDownloadUrlForLayer",
                    "ecr:GetRepositoryPolicy",
                    "ecr:DescribeRepositories",
                    "ecr:ListImages",
                    "ecr:BatchGetImage"
                ]
            },
            {
                "Effect": "Allow",
                "Resource": [
                    "arn:aws:route53:::hostedzone/Z1ILWH3JAW6GTW"
                ],
                "Action": [
                    "route53:ChangeResourceRecordSets",
                    "route53:ListResourceRecordSets",
                    "route53:GetHostedZone",
                    "route53:ListHostedZonesByName"
                ]
            },
            {
                "Effect": "Allow",
                "Resource": [
                    "arn:aws:route53:::change/*"
                ],
                "Action": [
                    "route53:GetChange"
                ]
            },
            {
                "Effect": "Allow",
                "Resource": [
                    "*"
                ],
                "Action": [
                    "route53:ListHostedZones"
                ]
            },
            {
                "Effect": "Allow",
                "Resource": [
                    "arn:aws:s3:::k8s-uw2-sightmachine-com-state-store/k8s-uw2.MYWEBSITEURLHERE.com",
                    "arn:aws:s3:::k8s-uw2-sightmachine-com-state-store/k8s-uw2.MYWEBSITEURLHERE.com/*"
                ],
                "Action": [
                    "s3:*"
                ]
            },
            {
                "Effect": "Allow",
                "Resource": [
                    "arn:aws:s3:::k8s-uw2-sightmachine-com-state-store"
                ],
                "Action": [
                    "s3:GetBucketLocation",
                    "s3:ListBucket"
                ]
            }
        ],
        "Version": "2012-10-17"
    },
    "PolicyName": "nodes.k8s-uw2.MYWEBSITEURLHERE.com"
}
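
That inline policy can be inspected with something like the following (assuming the policy is attached inline to the role and that the policy name matches the role name, as the RoleName/PolicyName fields above suggest):

aws iam get-role-policy \
    --role-name nodes.k8s-uw2.MYWEBSITEURLHERE.com \
    --policy-name nodes.k8s-uw2.MYWEBSITEURLHERE.com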

What's also odd is that I'm able to create my Elasticsearch and Kibana services, and both of those work fine. It's only my Logstash service that isn't playing nice.

Here is my logstash service definition:

apiVersion: v1
kind: Service
metadata:
  name: logstash
  namespace: inf
  labels:
    app: logstash
    component: server
    role: monitoring
    dns: route53
  annotations:
    domainName: logstash.k8s-uw2.MYWEBSITEURLHERE.com
    service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
spec:
  type: LoadBalancer
  selector:
    app: logstash
    component: server
    role: monitoring
  ports:
  - name: lumberjack
    port: 5043
    protocol: TCP
  - name: beats
    port: 5044
    protocol: TCP
  - name: http
    port: 31311
    protocol: TCP

And here is my elasticsearch service definition:

apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
  namespace: inf
  labels:
    app: elasticsearch
    component: client
    role: monitoring
    dns: route53
  annotations:
      domainName: elasticsearch.k8s-uw2.sightmachine.com
      service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
spec:
  type: LoadBalancer
  selector:
    app: elasticsearch
    component: client
    role: monitoring
  ports:
  - name: http
    port: 9200
    protocol: TCP

I've also confirmed that the ZONE ID is indeed correct.
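
(For reference, the zone ID can be double-checked against Route 53 with something like the following, run with credentials that are allowed to read the zone:)

aws route53 get-hosted-zone --id Z1ILWH3JAW6GTW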

Any help would be greatly appreciated, as much of this is abstracted away from traditional setups and is harder for me to debug.

-- xamox
kubectl
kubernetes

1 Answer

5/19/2017

The solution was simply to broaden the resource scope of the Route 53 permissions. Instead of restricting them to granular resources like:

"arn:aws:route53:::hostedzone/Z1ILWH3JAW6GTW"
"arn:aws:route53:::change/*"

the resource became:

*
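
For reference, the working statement ends up looking roughly like this (a sketch of the broadened policy; as far as I can tell, route53:ListHostedZonesByName is a list call that is not evaluated against a hostedzone ARN, which is why the zone-scoped version above was rejected with AccessDenied):

{
    "Effect": "Allow",
    "Resource": [
        "*"
    ],
    "Action": [
        "route53:ChangeResourceRecordSets",
        "route53:ListResourceRecordSets",
        "route53:GetHostedZone",
        "route53:ListHostedZonesByName"
    ]
}

A slightly tighter alternative would be to keep the record-change actions scoped to the hosted zone ARN and move only route53:ListHostedZonesByName to a wildcard resource.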
-- xamox
Source: StackOverflow