Istio-init container has gone to CrashLoopBackOff

10/5/2020

I am using istio-init (version 1.6.5) as a sidecar container in my k8s cluster, and it has been working fine for a while. Today, out of the blue, my pod went into Init:CrashLoopBackOff:

NAME                                 READY       STATUS                     RESTARTS       AGE 
healthscore-green-79c9c5c764-cndm6    0/2       Init:CrashLoopBackOff          388        2d17h

on kubectl describe it shows:

istio-init: 
  Container ID: docker://657be7ddd9058406da7768596c81490b426a376816b8b4f20fbb63c0c44b5a13 
  Image: docker.io/istio/proxyv2:1.6.5 
  Image ID: docker-pullable://istio/proxyv2@sha256:ec2df06d76e8845fbce0ac1b4b85ab06a7beabab8a69fcc3bb2b573378b71c47
  Port:
  Host Port:
  Args: istio-iptables -p 15001 -z 15006 -u 1337 -m REDIRECT -i * -x -b * -d 15090,15021,15020
  State: Waiting
    Reason: CrashLoopBackOff
  Last State: Terminated
    Reason: Error
    Exit Code: 2
    Started: Mon, 28 Sep 2020 14:21:23 +0530
    Finished: Mon, 28 Sep 2020 14:21:23 +0530
  Ready: False
  Restart Count: 392
  Limits:
    cpu: 100m
    memory: 50Mi
  Requests:
    cpu: 10m
    memory: 10Mi
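
For reference, the output above comes from describing the failing pod (pod name taken from the listing above):

kubectl describe pod healthscore-green-79c9c5c764-cndm6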

and kubectl logs of the istio-init container prints the following stack trace: https://pastebin.com/GAuNndd5
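
For reference, I fetched the init container's logs with the -c flag:

kubectl logs healthscore-green-79c9c5c764-cndm6 -c istio-init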

One other thing I noticed: even though Kubernetes reports the istio-init sidecar as failing, my application is still able to serve HTTP requests. The issue fixes itself automatically after some time, but reoccurs once in a while.

-- Ajay Pratap
amazon-eks
istio
kubernetes

1 Answer

10/5/2020

From the logs, the container is failing inside executeIptablesRestoreCommand. The Kubernetes nodes might have gone through a reboot or upgrade that re-enabled SELinux enforcement, which can block the init container's iptables-restore call. Temporarily disable SELinux by running sudo setenforce 0 on the Kubernetes node, and then re-run your istioctl kube-inject.
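
A minimal sketch of those steps, assuming shell access to the affected node and that the workload's manifest lives at deployment.yaml (a hypothetical path):

# On the affected Kubernetes node: check whether SELinux is enforcing,
# then switch it to permissive mode until the next reboot
getenforce
sudo setenforce 0

# From your workstation: re-inject the sidecar and reapply the workload
istioctl kube-inject -f deployment.yaml | kubectl apply -f -

# Alternatively, delete the pod so its ReplicaSet recreates it and
# istio-init re-runs its iptables setup
kubectl delete pod healthscore-green-79c9c5c764-cndm6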

-- Arghya Sadhu
Source: StackOverflow