Is it possible to get the IP address a pod had prior to crashing?

5/6/2021

After a pod crashes and restarts, is it possible to retrieve the IP address of the pod prior to the crash?

-- Chris Gonzalez
kubernetes

1 Answer

5/7/2021

This is a broad question, since it's not possible to tell where exactly and why the pod crashed. However, I'll show what's possible in different scenarios.

  • Pod's container crashes and then restarts itself:

In this case the pod keeps its IP address. The easiest way to see it is to run kubectl get pods -o wide

The output will look like this:

NAME                               READY   STATUS             RESTARTS   AGE   IP            NODE      NOMINATED NODE   READINESS GATES
nginx-deployment-699f9b54d-g2w97   0/1     CrashLoopBackOff   2          55s   10.244.1.53   worker1   <none>           <none>

As you can see, even though the container crashes, the pod still has an IP address assigned.
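If you only need the IP itself rather than the whole table, you can query it directly with jsonpath (the pod name here is the one from the example output above; substitute your own):

```shell
# Print just the pod's IP address
kubectl get pod nginx-deployment-699f9b54d-g2w97 -o jsonpath='{.status.podIP}'

# Or list names and IPs for all pods in the current namespace
kubectl get pods -o custom-columns='NAME:.metadata.name,IP:.status.podIP'
```

This is handy in scripts, where parsing the wide table output would be fragile.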

It's also possible to add an initContainer with a command that logs the pod's IP address (depending on the image you use, there are different options, such as ip a or ifconfig -a).

Here's a simple example of how it can be added:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      initContainers: # here is where to add initContainer section
      - name: init-container
        image: busybox
        args: [/bin/sh, -c, "echo IP ADDRESS ; ifconfig -a"]
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
        command: ["/bin/sh", "-c", "nginx --version"] # this makes the nginx container exit immediately, so the pod keeps restarting

Before the main container starts, this init container runs ifconfig -a and writes the result to its logs.

You can check it with:

kubectl logs %pod_name% -c init-container

The output will be:

IP ADDRESS
eth0      Link encap:Ethernet  HWaddr F6:CB:AD:D0:7E:7E
          inet addr:10.244.1.52  Bcast:10.244.1.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1410  Metric:1
          RX packets:5 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:398 (398.0 B)  TX bytes:42 (42.0 B)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

You can also check the logs of the previous container instance (i.e. from before the last restart) by adding --previous to the command above.
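For example, using the pod name from the earlier output:

```shell
# Logs of the init container from the instance before the last restart
kubectl logs nginx-deployment-699f9b54d-g2w97 -c init-container --previous
```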

  • Pod crashes and then is recreated:

In this case a new pod is created, which means the local logs are gone. You will need to save them separately from the pods. For this you can use volumes: e.g. a hostPath volume stores logs on the node where the pod runs, while an nfs volume can be attached to different pods and accessed from elsewhere.

  • Control plane crashes while pods are still running:

You can't access logs through the control plane and kubectl, but your containers will still be running on the nodes. To get logs directly from the nodes where your containers run, use docker or crictl, depending on your container runtime.
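As a sketch of the hostPath approach (the pod name, mount path, and node directory here are made up for illustration), a pod can write its IP and logs to a directory on the node so they survive pod recreation:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: logged-pod            # hypothetical name
spec:
  containers:
  - name: app
    image: busybox
    # record the pod's interfaces once at startup, then keep the container alive
    args: [/bin/sh, -c, "ifconfig -a > /var/log/app/ip.log; sleep 3600"]
    volumeMounts:
    - name: node-logs
      mountPath: /var/log/app
  volumes:
  - name: node-logs
    hostPath:
      path: /var/log/my-app   # directory on the node; hypothetical path
      type: DirectoryOrCreate
```

Note that hostPath ties the data to one node: if the replacement pod is scheduled elsewhere, it won't see the old files, which is why nfs (or a proper logging stack) is preferable for multi-node clusters.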

The ideal solution for such cases is to use a monitoring/logging stack such as Prometheus or Elasticsearch. This requires additional setup of components like fluentd or kube-state-metrics.

-- moonkotte
Source: StackOverflow