`kubectl logs counter` not showing any output following official Kubernetes example

11/20/2018

I am not able to see any log output when deploying a very simple Pod:

myconfig.yaml:

apiVersion: v1
kind: Pod
metadata:
  name: counter
spec:
  containers:
  - name: count
    image: busybox
    args: [/bin/sh, -c,
            'i=0; while true; do echo "$i: $(date)"; i=$((i+1)); sleep 1; done']

then

kubectl apply -f myconfig.yaml

This was taken from this official tutorial: https://kubernetes.io/docs/concepts/cluster-administration/logging/#basic-logging-in-kubernetes

The pod appears to be running fine:

kubectl describe pod counter
Name:         counter
Namespace:    default
Node:         ip-10-0-0-43.ec2.internal/10.0.0.43
Start Time:   Tue, 20 Nov 2018 12:05:07 -0500
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"name":"counter","namespace":"default"},"spec":{"containers":[{"args":["/bin/sh","-c","i=0...
Status:       Running
IP:           10.0.0.81
Containers:
  count:
    Container ID:  docker://d2dfdb8644b5a6488d9d324c8c8c2d4637a460693012f35a14cfa135ab628303
    Image:         busybox
    Image ID:      docker-pullable://busybox@sha256:2a03a6059f21e150ae84b0973863609494aad70f0a80eaeb64bddd8d92465812
    Port:          <none>
    Host Port:     <none>
    Args:
      /bin/sh
      -c
      i=0; while true; do echo "$i: $(date)"; i=$((i+1)); sleep 1; done
    State:          Running
      Started:      Tue, 20 Nov 2018 12:05:08 -0500
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-r6tr6 (ro)
Conditions:
  Type           Status
  Initialized    True 
  Ready          True 
  PodScheduled   True 
Volumes:
  default-token-r6tr6:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-r6tr6
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason                 Age   From                                Message
  ----    ------                 ----  ----                                -------
  Normal  Scheduled              16m   default-scheduler                   Successfully assigned counter to ip-10-0-0-43.ec2.internal
  Normal  SuccessfulMountVolume  16m   kubelet, ip-10-0-0-43.ec2.internal  MountVolume.SetUp succeeded for volume "default-token-r6tr6"
  Normal  Pulling                16m   kubelet, ip-10-0-0-43.ec2.internal  pulling image "busybox"
  Normal  Pulled                 16m   kubelet, ip-10-0-0-43.ec2.internal  Successfully pulled image "busybox"
  Normal  Created                16m   kubelet, ip-10-0-0-43.ec2.internal  Created container
  Normal  Started                16m   kubelet, ip-10-0-0-43.ec2.internal  Started container

Nothing appears when running:

kubectl logs counter --follow=true
-- seenickcode
amazon-eks
kubernetes

5 Answers

11/21/2018

The error you mentioned in the comment is an indication that your kubelet process is either not running or keeps restarting.

ss -tnpl |grep 10250
LISTEN     0      128         :::10250                   :::*                   users:(("kubelet",pid=1102,fd=21))

Run the above command a few times and see if the PID changes between runs; if it does, kubelet is restarting.

Also, check /var/log/messages for any node-related issues. Hope this helps.
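
If kubelet is managed by systemd on your node (as it is on the EKS-optimized AMIs), a quick sketch of the same check is:

systemctl status kubelet    # is the service active, and what is its current PID?
journalctl -u kubelet -f    # follow the kubelet journal and watch for crash/restart messages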

-- Prafull Ladha
Source: StackOverflow

11/20/2018

Use this:

$ kubectl logs -f counter --namespace default
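
The pod spec above names its container count, so if you ever need to target the container explicitly, the same command would be:

$ kubectl logs -f counter -c count --namespace default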
-- Shudipta Sharma
Source: StackOverflow

1/26/2019

I followed seenickcode's comment and got it working.

I found the new CloudFormation template for 1.10.11 or 1.11.5 (the current versions in AWS) useful to compare with my stack.

Here is what I learned:

  1. Allowed ports 1025-65535 ingress from the cluster security group to the worker nodes.
  2. Allowed port 443 egress from the control plane to the worker nodes.

Then kubectl logs started to work.

Sample CloudFormation template updates here:

  NodeSecurityGroupFromControlPlaneIngress:
    Type: AWS::EC2::SecurityGroupIngress
    DependsOn: NodeSecurityGroup
    Properties:
      Description: Allow worker Kubelets and pods to receive communication from the cluster control plane
      GroupId: !Ref NodeSecurityGroup
      SourceSecurityGroupId: !Ref ControlPlaneSecurityGroup
      IpProtocol: tcp
      FromPort: 1025
      ToPort: 65535

Also

  ControlPlaneEgressToNodeSecurityGroupOn443:
    Type: AWS::EC2::SecurityGroupEgress
    DependsOn: NodeSecurityGroup
    Properties:
      Description: Allow the cluster control plane to communicate with pods running extension API servers on port 443
      GroupId:
        Ref: ControlPlaneSecurityGroup
      DestinationSecurityGroupId:
        Ref: NodeSecurityGroup
      IpProtocol: tcp
      FromPort: 443
      ToPort: 443
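
To double-check that the ingress rule actually ended up on the node security group, a quick AWS CLI sketch (the group ID is a placeholder for your NodeSecurityGroup):

aws ec2 describe-security-groups \
    --group-ids <node-security-group-id> \
    --query 'SecurityGroups[0].IpPermissions'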
-- vpack
Source: StackOverflow

11/20/2018

The only thing I can think of that would stop the log output is if the default logging driver for Docker has been changed in the /etc/docker/daemon.json config file on the node where your pod is running:

{
  "log-driver": "anything-but-json-file"
}

That would essentially make Docker stop writing the stdout/stderr logs that kubectl logs <pod> -c <container> reads. You can check which log driver is configured for the container of your pod on your node (10.0.0.43):

$ docker inspect -f '{{.HostConfig.LogConfig.Type}}' <container-id>
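
For contrast, a daemon.json that keeps kubectl logs working sticks with the json-file driver; a minimal sketch (the rotation options are only illustrative):

{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}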
-- Rico
Source: StackOverflow

11/21/2018

I found the issue. The AWS tutorial at docs.aws.amazon.com/eks/latest/userguide/getting-started.html cites CloudFormation templates that fail to set the security groups required for kubectl logs to work. I basically opened up all traffic and ports for my Kubernetes worker nodes (EC2 instances) and things work now.
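
If you would rather not open everything, a narrower rule that covers kubectl logs is to allow the cluster control plane security group to reach the kubelet port (10250) on the worker nodes. A sketch with the AWS CLI, where both group IDs are placeholders:

aws ec2 authorize-security-group-ingress \
    --group-id <worker-node-security-group-id> \
    --protocol tcp \
    --port 10250 \
    --source-group <control-plane-security-group-id>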

-- seenickcode
Source: StackOverflow