EKS: Unable to pull logs from pods

10/28/2018

The kubectl logs command intermittently fails with a "getsockopt: no route to host" error.

# kubectl logs -f mypod-5c46d5c75d-2Cbtj

Error from server: Get https://X.X.X.X:10250/containerLogs/default/mypod-5c46d5c75d-2Cbtj/metaservichart?follow=true: dial tcp X.X.X.X:10250: getsockopt: no route to host

If I run the same command 5-6 times, it eventually works. I am not sure why this is happening. Any help would be really appreciated.

-- manish
amazon-eks
kubectl
kubernetes

4 Answers

12/6/2018

I had a chance to talk with an AWS EKS engineer in person. The official answer is that EKS currently doesn't support the 172.17.0.0/16 range, because it overlaps with Docker's default bridge CIDR. It seems they have an internal ticket to fix the issue, but no ETA.
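To illustrate the collision (my own sketch, not from the EKS engineer; the IPs are made up): Docker's default bridge network docker0 claims 172.17.0.0/16 on each node, so any node or pod IP in that range gets routed to the local bridge instead of the VPC, and the kubelet on port 10250 becomes unreachable, hence "no route to host".

```shell
# Any address starting with 172.17. falls inside docker0's default
# 172.17.0.0/16 network and is shadowed by the local bridge route.
check_overlap() {
  case "$1" in
    172.17.*) echo "collides with docker0 (172.17.0.0/16)" ;;
    *)        echo "no collision" ;;
  esac
}

check_overlap 172.17.3.25    # prints: collides with docker0 (172.17.0.0/16)
check_overlap 172.18.3.25    # prints: no collision
```

This also explains the intermittent behavior: only worker nodes whose VPC address happens to land in 172.17.0.0/16 are affected, so kubectl logs fails or succeeds depending on which node hosts the pod.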

-- SnoU
Source: StackOverflow

11/6/2018

I have exactly the same issue with a private IP in 172.17.X.X:

Error from server: Get https://172.17.X.X:10250/containerLogs/******: dial tcp 172.17.X.X:10250: getsockopt: no route to host

I am using EKS-Optimized AMI v24.

A similar issue is discussed here: https://github.com/aws/amazon-vpc-cni-k8s/issues/137. I suspect the problem is that private IPs starting with 172.17.X.X collide with Docker's default internal CIDR, but I didn't have this issue when I was using kops.

-- SnoU
Source: StackOverflow

2/26/2019

Depending on the AMI, I get the error "getsockopt: no route to host".

I use "kubectl logs my-pod-id" to access the pod's logs.

  • I am running EKS v1.10 in AWS (yes, I need to upgrade to v1.11 soon).
  • I am using the 10.0.0.0 IP range for my VPC and subnets, with 2 public and 2 private subnets.

It works (and also does not work) with the EXACT same routing, security groups, VPC, etc. Only the AMI changes.

Works: ami-73a6e20b (Used when I first setup my cluster back in Oct 2018)

Does not work: ami-0e7ee8863c8536cce (and is the recommended Amazon EKS-optimized AMI as of today for us-west-2 Oregon - https://docs.aws.amazon.com/eks/latest/userguide/eks-optimized-ami.html)

My point is, it may not be your routing/security-group setup.

-- Sagan
Source: StackOverflow

11/12/2018

Just FYI, I tried using another VPC range, 172.18.X.X, for EKS, and all kubectl commands work fine.

I also noticed that kops used 172.18.X.X for Docker's internal CIDR when I was using a 172.17.X.X VPC, so I speculate that kops changes Docker's default CIDR so it doesn't collide with the cluster IP range. I hope we could configure Docker's CIDR when EKS worker nodes are created, maybe via the CloudFormation template or something.
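In the meantime, one possible workaround (my own assumption, not an official EKS option) would be to do on EKS workers what kops apparently does: move Docker's bridge off 172.17.0.0/16 by setting the standard dockerd `bip` option in /etc/docker/daemon.json before the node joins the cluster, for example:

```json
{
  "bip": "172.18.0.1/16"
}
```

Docker would need to be restarted after writing the file, and the chosen range must not itself overlap the VPC or cluster CIDRs.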

-- SnoU
Source: StackOverflow