I have an app deployed on GKE along with Istio.

My pod has the following issue (from kubectl describe pod):
Warning Unhealthy 3m30s (x750 over 28m) kubelet, gke-nodepool Readiness probe failed: HTTP probe failed with statuscode: 503
Here are the relevant sections of the describe output:
Containers:
  faros:
    Port:           20000/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Mon, 02 Sep 2019 19:18:10 +0300
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Mon, 02 Sep 2019 19:08:03 +0300
      Finished:     Mon, 02 Sep 2019 19:17:39 +0300
    Ready:          True
    Restart Count:  3
  istio-proxy:
    Container ID:   docker://f6701350fb6f1cd3f6823d6c8cfa4ada57139bacadef7cff2e6103f4b36b2b4b
    State:          Running
      Started:      Mon, 02 Sep 2019 18:54:15 +0300
    Ready:          False
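Side note: faros shows Restart Count: 3 with its last state Terminated (exit code 1), so the earlier crash can be inspected separately with the --previous flag (standard kubectl, nothing Istio-specific):

➢ k logs faros-657c68f74-lxslt -c faros --previous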
The logs of faros (my app):
➢ k logs faros-657c68f74-lxslt -c faros
[ENTRYPOINT] Initializing entrypoint.
[ENTRYPOINT] Checking for non-privileged user.
[ENTRYPOINT] Running command.
* Running on http://0.0.0.0:20000/ (Press CTRL+C to quit)
127.0.0.1 - - [02/Sep/2019 16:18:15] "GET /health HTTP/1.1" 200 -
127.0.0.1 - - [02/Sep/2019 16:18:16] "GET /ready HTTP/1.1" 200 -
The logs of istio-proxy:
* failed checking application ports. listeners="0.0.0.0:15090","10.8.63.194:31400","10.8.63.194:15443","10.8.49.0:53","10.8.49.250:8080","10.8.63.194:443","10.8.54.34:443","10.8.48.10:53","10.8.63.194:15032","10.8.63.194:15029","10.8.63.194:15030","10.8.63.194:15031","10.8.48.1:443","10.8.48.180:80","10.8.63.194:15020","10.8.58.47:15011","10.8.54.249:42422","10.8.60.185:11211","10.8.51.133:443","10.8.61.194:443","10.8.58.10:44134","10.8.48.44:443","0.0.0.0:9901","0.0.0.0:15004","0.0.0.0:20001","0.0.0.0:15010","0.0.0.0:9411","0.0.0.0:9091","0.0.0.0:15014","0.0.0.0:8080","0.0.0.0:9090","0.0.0.0:7979","0.0.0.0:3000","0.0.0.0:8060","0.0.0.0:3030","0.0.0.0:80","10.8.33.250:15020","0.0.0.0:15001"
* envoy missing listener for inbound application port: 20000
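If I read this right, the failing readiness probe is the sidecar's own check: pilot-agent (serving on port 15020, which also appears in the listener list above) reports unready because Envoy has no inbound listener for port 20000. The same check can be run by hand (a sketch; it assumes curl is available in the istio-proxy image):

➢ kubectl exec faros-657c68f74-lxslt -c istio-proxy -- \
    curl -s -o /dev/null -w '%{http_code}\n' http://localhost:15020/healthz/ready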
And kubectl get pods shows the pod stuck at 1/2 ready:

NAME                    READY   STATUS    RESTARTS   AGE   IP            NODE             NOMINATED NODE   READINESS GATES
faros-657c68f74-lxslt   1/2     Running   4          37m   10.8.33.250   gke-nodepool1-   <none>           <none>
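To print each container's ready flag directly instead of eyeballing describe, a jsonpath query like this works; in this pod it is istio-proxy that reports false:

➢ kubectl get pod faros-657c68f74-lxslt -o jsonpath='{range .status.containerStatuses[*]}{.name}{"\t"}{.ready}{"\n"}{end}'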
Which one of the containers is actually having the issue?
Edit: here is the output of kubectl get deploy faros -o yaml:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
    flux.weave.works/antecedent: mynamespace:helmrelease/faros
  creationTimestamp: "2019-09-02T16:51:45Z"
  generation: 2
  labels:
    app: faros
    chart: faros-0.0.3
    heritage: Tiller
    release: faros
    version: blue
  name: faros
  namespace: mynamespace
  resourceVersion: "14221480"
  selfLink: /apis/extensions/v1beta1/namespaces/mynamespace/deployments/faros
  uid: f28f4688-cda1-11e9-a87a-42010a790808
spec:
  minReadySeconds: 5
  progressDeadlineSeconds: 2147483647
  replicas: 1
  revisionHistoryLimit: 2147483647
  selector:
    matchLabels:
      app: faros
      process: web
      release: faros
  strategy:
    rollingUpdate:
      maxSurge: 100%
      maxUnavailable: 0%
    type: RollingUpdate
  template:
    metadata:
      annotations:
        checksum/configmap: b6311a3639d77e8eb746f0c92fe71c43
        checksum/sealedsecrets: e3b0c44298fc1c149afbf4c8996fb92427a
        checksum/secrets: e3b0c44298fc1c149afbf4c8996fb92
        cluster-autoscaler.kubernetes.io/safe-to-evict: "true"
        readiness.status.sidecar.istio.io/applicationPorts: "5000"
        sidecar.istio.io/inject: "true"
      creationTimestamp: null
      labels:
        app: faros
        process: web
        release: faros
        version: blue
    spec:
      containers:
      - env:
        - name: PROCESS_TYPE
          value: web
        envFrom:
        - configMapRef:
            name: faros
        - secretRef:
            name: faros
        image: gcr.io/myregistry/myimage:tag
        imagePullPolicy: Always
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /health
            port: http
            scheme: HTTP
          initialDelaySeconds: 5
          periodSeconds: 3
          successThreshold: 1
          timeoutSeconds: 1
        name: faros
        ports:
        - containerPort: 5000
          name: http
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /ready
            port: http
            scheme: HTTP
          initialDelaySeconds: 5
          periodSeconds: 3
          successThreshold: 1
          timeoutSeconds: 1
        resources:
          limits:
            cpu: 150m
            memory: 128Mi
          requests:
            cpu: 150m
            memory: 128Mi
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  conditions:
  - lastTransitionTime: "2019-09-02T16:51:45Z"
    lastUpdateTime: "2019-09-02T16:51:45Z"
    message: Deployment does not have minimum availability.
    reason: MinimumReplicasUnavailable
    status: "False"
    type: Available
  observedGeneration: 2
  replicas: 1
  unavailableReplicas: 1
  updatedReplicas: 1
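One thing I notice while pasting this: the app logs and kubectl describe both say the app serves on port 20000, yet the pod template sets both the readiness.status.sidecar.istio.io/applicationPorts annotation and the containerPort to 5000. Could that mismatch be what keeps the sidecar unready? If so, I assume the template would have to look roughly like this (a sketch; that 20000 is the real port is my inference from the logs):

  template:
    metadata:
      annotations:
        # match the port the app actually binds to (assumed: 20000, per the logs)
        readiness.status.sidecar.istio.io/applicationPorts: "20000"
    spec:
      containers:
      - name: faros
        ports:
        - containerPort: 20000
          name: http
          protocol: TCP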