I am using this tutorial ingress on GCE. The tutorial works fine with the Docker image it uses, but with my own Docker image the backend service always ends up in an UNHEALTHY state. I added TCP liveness and readiness probes because my application does not respond to '/' with a 200. The Deployment YAML looks like below:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    run: neg-demo-app # Label for the Deployment
  name: neg-demo-app # Name of Deployment
spec: # Deployment's specification
  selector:
    matchLabels:
      run: neg-demo-app
  template: # Pod template
    metadata:
      labels:
        run: neg-demo-app # Labels Pods from this Deployment
    spec: # Pod specification; each Pod created by this Deployment has this specification
      containers:
      - image: mohib13/graphene_with_coref:first # Application to run in Deployment's Pods
        name: hostname # Container name
        readinessProbe:
          tcpSocket:
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 10
        livenessProbe:
          tcpSocket:
            port: 8080
          initialDelaySeconds: 15
          periodSeconds: 20
        ports:
        - containerPort: 8080
      terminationGracePeriodSeconds: 60 # Number of seconds to wait for connections to terminate before shutting down Pods
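For reference, a TCP probe only verifies that the port accepts connections; it says nothing about HTTP status codes. One way to see what the container actually serves on port 8080 (a sketch, assuming the Deployment above is running in the default namespace):

# Forward local port 8080 to one of the Deployment's Pods and inspect the response.
kubectl port-forward deployment/neg-demo-app 8080:8080 &
# Unlike the TCP probe, the GCE load balancer health check also expects an HTTP 200.
curl -v http://localhost:8080/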
The Service YAML is below:
apiVersion: v1
kind: Service
metadata:
  name: neg-demo-svc # Name of Service
  annotations:
    cloud.google.com/neg: '{"ingress": true}' # Creates an NEG after an Ingress is created
spec: # Service's specification
  type: NodePort
  selector:
    run: neg-demo-app # Selects Pods labelled run: neg-demo-app
  ports:
  - port: 8080 # Service's port
    targetPort: 8080
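As far as I understand, the load balancer created by the Ingress runs its own health check against the serving port, independent of the kubelet probes, and by default it requests '/' and expects a 200. On newer GKE versions that health check can be customised through a BackendConfig attached to the Service. A minimal sketch, assuming the app answers 200 on a hypothetical /healthz path (older clusters use apiVersion cloud.google.com/v1beta1 and may not support this field):

apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: neg-demo-backendconfig # hypothetical name
spec:
  healthCheck:
    type: HTTP
    port: 8080
    requestPath: /healthz # assumption: a path the app actually answers with 200

The Service would then reference it with the annotation cloud.google.com/backend-config: '{"default": "neg-demo-backendconfig"}'.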
while the Ingress is below:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: neg-demo-ing
spec:
  backend:
    serviceName: neg-demo-svc # Name of the Service targeted by the Ingress
    servicePort: 8080 # Should match the port used by the Service
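To see which health check the Ingress actually created and what the load balancer reports for the backends, something like the following can be run (the upper-case names are placeholders; the real names are generated by the ingress controller, so list first and substitute):

# List the backend services and health checks created for the Ingress.
gcloud compute backend-services list
gcloud compute health-checks list
# Inspect the health check's path/port and the reported backend health.
gcloud compute health-checks describe HEALTH_CHECK_NAME
gcloud compute backend-services get-health BACKEND_SERVICE_NAME --global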
Update: the output of kubectl describe pod is below:
Name:               neg-demo-app-564654b4d4-gcmz2
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               gke-neg-demo-cluster-default-pool-a0128d7b-z63j/10.0.0.4
Start Time:         Wed, 07 Aug 2019 16:21:20 +0100
Labels:             pod-template-hash=564654b4d4
                    run=neg-demo-app
Annotations:        kubernetes.io/limit-ranger: LimitRanger plugin set: cpu request for container hostname
Status:             Running
IP:                 10.52.2.5
Controlled By:      ReplicaSet/neg-demo-app-564654b4d4
Containers:
  hostname:
    Container ID:   docker://71b1fcada1775844710dca0aa4252a3232e3595f3605de1ae54ab70ec40d823b
    Image:          mohib13/graphene_with_coref:first
    Image ID:       docker-pullable://mohib13/graphene_with_coref@sha256:39ff7b18a88020c4e090e37080560df58f382d8bf320f9051559824a7d0e80ad
    Port:           8080/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Wed, 07 Aug 2019 16:21:21 +0100
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:        100m
    Liveness:     tcp-socket :8080 delay=15s timeout=1s period=20s #success=1 #failure=3
    Readiness:    tcp-socket :8080 delay=5s timeout=1s period=10s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-kghdh (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  default-token-kghdh:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-kghdh
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age    From                                                      Message
  ----    ------     ----   ----                                                      -------
  Normal  Scheduled  7m28s  default-scheduler                                         Successfully assigned default/neg-demo-app-564654b4d4-gcmz2 to gke-neg-demo-cluster-default-pool-a0128d7b-z63j
  Normal  Pulled     7m27s  kubelet, gke-neg-demo-cluster-default-pool-a0128d7b-z63j  Container image "mohib13/graphene_with_coref:first" already present on machine
  Normal  Created    7m27s  kubelet, gke-neg-demo-cluster-default-pool-a0128d7b-z63j  Created container
  Normal  Started    7m27s  kubelet, gke-neg-demo-cluster-default-pool-a0128d7b-z63j  Started container
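So the Pod itself is Ready and the kubelet probes pass; the UNHEALTHY state is reported by the load balancer's health check against the NEG. To confirm the NEG at least contains the Pod's endpoint, something like the following can be run (NEG_NAME and ZONE are placeholders taken from the Service's neg-status annotation):

# The generated NEG name is recorded on the Service.
kubectl get service neg-demo-svc -o yaml | grep neg-status
# List NEGs and check that the Pod endpoint (10.52.2.5:8080) is attached.
gcloud compute network-endpoint-groups list
gcloud compute network-endpoint-groups list-network-endpoints NEG_NAME --zone ZONE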