Istio Bookinfo k8s deployment

4/21/2021

I have one master and two worker nodes (worker-1 and worker-2). All the nodes are up and running without any issue. When I planned to install the Istio service mesh, I tried to deploy the sample Bookinfo application.
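For context, I followed the standard Bookinfo steps from the Istio 1.9 docs, roughly like this (run from the istio-1.9.1 directory, using the default namespace with automatic sidecar injection):

kubectl label namespace default istio-injection=enabled
kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml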

After deploying Bookinfo, I verified the pod status by running the command below:

root@master:~# kubectl get pod -o wide
NAME                              READY   STATUS    RESTARTS   AGE   IP               NODE       NOMINATED NODE   READINESS GATES
details-v1-79c697d759-9k98l       2/2     Running   0          11h   10.200.226.104   worker-1   <none>           <none>
productpage-v1-65576bb7bf-zsf6f   2/2     Running   0          11h   10.200.226.107   worker-1   <none>           <none>
ratings-v1-7d99676f7f-zxrtq       2/2     Running   0          11h   10.200.226.105   worker-1   <none>           <none>
reviews-v1-987d495c-hsnmc         1/2     Running   0          21m   10.200.133.194   worker-2   <none>           <none>
reviews-v2-6c5bf657cf-jmbkr       1/2     Running   0          11h   10.200.133.252   worker-2   <none>           <none>
reviews-v3-5f7b9f4f77-g2s6p       2/2     Running   0          11h   10.200.226.106   worker-1   <none>           <none>

I have noticed that two pods are not running; their status shows 1/2, and both of them are on the worker-2 node. I have spent almost two days on this but have not been able to find a fix. Here is the relevant event from kubectl describe pod:

Warning  Unhealthy  63s (x14 over 89s)  kubelet            Readiness probe failed: Get "http://10.200.133.194:15021/healthz/ready": dial tcp 10.200.133.194:15021: connect: connection refused
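Port 15021 is the readiness port of the istio-proxy sidecar, so it is the sidecar container (not the application container) that is failing. The checks I have been running look roughly like this (pod names are from the output above; istioctl is the 1.9.1 binary):

# Is the control plane itself healthy?
kubectl get pods -n istio-system -o wide

# Which sidecars are synced with istiod?
istioctl proxy-status

# Sidecar logs from one of the failing pods on worker-2
kubectl logs reviews-v1-987d495c-hsnmc -c istio-proxy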

Then this morning I realized something was wrong with the worker-2 node, since the pods stuck at 1/2 were only running there, so I decided to cordon the node as below:

kubectl cordon worker-2
kubectl delete pod <worker-2 pod>
kubectl get pod -o wide

After cordoning the worker-2 node, I could see all the pods come up with status 2/2 on the worker-1 node without any issue.

root@master:~# kubectl get pod -o wide
NAME                              READY   STATUS    RESTARTS   AGE   IP               NODE       NOMINATED NODE   READINESS GATES
details-v1-79c697d759-9k98l       2/2     Running   0          11h   10.200.226.104   worker-1   <none>           <none>
productpage-v1-65576bb7bf-zsf6f   2/2     Running   0          11h   10.200.226.107   worker-1   <none>           <none>
ratings-v1-7d99676f7f-zxrtq       2/2     Running   0          11h   10.200.226.105   worker-1   <none>           <none>
reviews-v1-987d495c-2n4d9         2/2     Running   0          17s   10.200.226.113   worker-1   <none>           <none>
reviews-v2-6c5bf657cf-wzqpt       2/2     Running   0          17s   10.200.226.112   worker-1   <none>           <none>
reviews-v3-5f7b9f4f77-g2s6p       2/2     Running   0          11h   10.200.226.106   worker-1   <none>           <none>

Could someone please help me figure out how to fix this issue, so that pods can be scheduled on the worker-2 node as well?

Note: when I redeploy so that pods are scheduled on both nodes (worker-1 and worker-2) again, the pod status goes back to 1/2.
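To reproduce it, I uncordon worker-2 and restart the deployments so the scheduler places pods on both nodes again (a sketch; deployment names as above):

kubectl uncordon worker-2
kubectl rollout restart deployment reviews-v1 reviews-v2 ratings-v1

Any pod that lands on worker-2 comes back as 1/2. The istio-proxy logs from one of those pods show: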

root@master:~/istio-1.9.1/samples# kubectl logs -f ratings-v1-b6994bb9-wfckn -c istio-proxy
2021-04-21T07:12:19.941679Z     warn    Envoy proxy is NOT ready: config not received from Pilot (is Pilot running?): cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected
2021-04-21T07:12:21.942096Z     warn    Envoy proxy is NOT ready: config not received from Pilot (is Pilot running?): cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected
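The "config not received from Pilot" warning means the sidecar never reached istiod (xDS is served on port 15012), which points at pod-to-pod connectivity from worker-2 to the istio-system namespace rather than at Bookinfo itself. One check I would run from both nodes (the /version endpoint on istiod's monitoring port 15014 is documented for debugging; I am assuming curl is available in the proxy image, and the pod names are for illustration):

# From a failing pod on worker-2:
kubectl exec reviews-v1-987d495c-hsnmc -c istio-proxy -- curl -sS http://istiod.istio-system.svc:15014/version

# From a healthy pod on worker-1, for comparison:
kubectl exec details-v1-79c697d759-9k98l -c istio-proxy -- curl -sS http://istiod.istio-system.svc:15014/version

If the call fails only from worker-2, that would narrow it down to CNI/overlay or firewall connectivity between the nodes rather than Istio configuration.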
-- Gowmi
kubernetes
