Kubernetes pod ClusterIP not responding?

12/2/2019

I have two backend services deployed on Google Kubernetes Engine (GKE):

a) Backend Service

b) Admin portal, which needs to connect to the Backend Service

Everything runs in a single cluster.

In Workloads / Pods, I have three deployments running, where fitme:9000 is the backend and nginx-1:9000 is the admin portal service. The corresponding Services are described below.

Visualization

(Diagram omitted; see the explanation below.)

Explanation

1. D1 (fitme), D2 (mongo-mongodb), D3 (nginx-1) are three deployments

2. E1D1 (fitme-service), E2D1 (fitme-jr29g), E1D2 (mongo-mongodb), E2D2 (mongo-mongodb-rcwwc) and E1D3 (nginx-1-service) are Services

3. `E1D1`, `E1D2` and `E1D3` are exposed via `LoadBalancer`, whereas `E2D1` and `E2D2` are exposed via `ClusterIP`.

The reasoning behind this setup:

D1 needs to access D2 (internally) -> this works perfectly fine. I am using the E2D2 service (ClusterIP) to access the D2 deployment from inside D1.

Now, D3 needs to access the D1 deployment. So I exposed D1 as the E2D1 service and am trying to access it internally via the generated ClusterIP of E2D1, but the request times out.
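For reference, this is roughly how the failing call looks when reproduced from inside the admin pod (the pod name is a placeholder, and this assumes a shell and curl are available in the image):

# open a shell inside the admin (nginx-1) pod -- replace the pod name with the real one
kubectl exec -it <nginx-1-pod-name> -- sh

# from inside the pod, call the ClusterIP of the fitme-jr29g service;
# this is the request that currently times out
curl -v http://10.35.240.95:9000/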

YAML for fitme-jr29g service

apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2019-12-02T11:18:55Z"
  generateName: fitme-
  labels:
    app: fitme
  name: fitme-jr29g
  namespace: default
  resourceVersion: "486673"
  selfLink: /api/v1/namespaces/default/services/fitme-8t7rl
  uid: 875045eb-14f5-11ea-823c-42010a8e0047
spec:
  clusterIP: 10.35.240.95
  ports:
  - port: 9000
    protocol: TCP
    targetPort: 9000
  selector:
    app: fitme
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}

YAML for nginx-1-service service

apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2019-12-02T11:30:10Z"
  labels:
    app: admin
  name: nginx-1-service
  namespace: default
  resourceVersion: "489972"
  selfLink: /api/v1/namespaces/default/services/admin-service
  uid: 195b462e-14f7-11ea-823c-42010a8e0047
spec:
  clusterIP: 10.35.250.90
  externalTrafficPolicy: Cluster
  ports:
  - nodePort: 30628
    port: 8080
    protocol: TCP
    targetPort: 9000
  selector:
    app: admin
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - ip: 35.227.26.101

YAML for nginx-1 deployment

apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: "2019-12-02T11:24:09Z"
  generation: 2
  labels:
    app: admin
  name: admin
  namespace: default
  resourceVersion: "489624"
  selfLink: /apis/apps/v1/namespaces/default/deployments/admin
  uid: 426792e6-14f6-11ea-823c-42010a8e0047
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: admin
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: admin
    spec:
      containers:
      - image: gcr.io/docker-226818/admin@sha256:602fe6b7e43d53251eebe2f29968bebbd756336c809cb1cd43787027537a5c8b
        imagePullPolicy: IfNotPresent
        name: admin-sha256
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime: "2019-12-02T11:24:18Z"
    lastUpdateTime: "2019-12-02T11:24:18Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  - lastTransitionTime: "2019-12-02T11:24:09Z"
    lastUpdateTime: "2019-12-02T11:24:18Z"
    message: ReplicaSet "admin-8d55dfbb6" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  observedGeneration: 2
  readyReplicas: 1
  replicas: 1
  updatedReplicas: 1

YAML for fitme-service

apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2019-12-02T13:38:21Z"
  generateName: fitme-
  labels:
    app: fitme
  name: fitme-service
  namespace: default
  resourceVersion: "525173"
  selfLink: /api/v1/namespaces/default/services/drogo-mzcgr
  uid: 01e8fc39-1509-11ea-823c-42010a8e0047
spec:
  clusterIP: 10.35.240.74
  externalTrafficPolicy: Cluster
  ports:
  - nodePort: 31016
    port: 80
    protocol: TCP
    targetPort: 9000
  selector:
    app: fitme
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - ip: 35.236.110.230

YAML for fitme deployment

apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: "2019-12-02T13:34:54Z"
  generation: 2
  labels:
    app: fitme
  name: fitme
  namespace: default
  resourceVersion: "525571"
  selfLink: /apis/apps/v1/namespaces/default/deployments/drogo
  uid: 865a5a8a-1508-11ea-823c-42010a8e0047
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: drogo
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: fitme
    spec:
      containers:
      - image: gcr.io/fitme-226818/drogo@sha256:ab49a4b12e7a14f9428a5720bbfd1808eb9667855cb874e973c386a4e9b59d40
        imagePullPolicy: IfNotPresent
        name: fitme-sha256
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime: "2019-12-02T13:34:57Z"
    lastUpdateTime: "2019-12-02T13:34:57Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  - lastTransitionTime: "2019-12-02T13:34:54Z"
    lastUpdateTime: "2019-12-02T13:34:57Z"
    message: ReplicaSet "drogo-5c7f449668" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  observedGeneration: 2
  readyReplicas: 1
  replicas: 1
  updatedReplicas: 1

I am accessing fitme-jr29g from the nginx-1 deployment's container by using the IP address 10.35.240.95:9000.
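As a side note, within the default namespace the same service should also be reachable by its DNS name instead of the raw ClusterIP, and kubectl can show whether the service actually has pod endpoints behind it. A minimal check, using the service name from the YAML above:

# list the pod IP:port pairs the service currently resolves to
kubectl get endpoints fitme-jr29g -n default

# from inside a pod in the same namespace, the service is also addressable by name
curl -v http://fitme-jr29g.default.svc.cluster.local:9000/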

-- Amit Pal
google-cloud-platform
google-kubernetes-engine
kubernetes
load-balancing

1 Answer

12/2/2019

The Deployment object can, and often should, have network properties to expose the applications within its pods.

Pods are network-capable objects, with virtual Ethernet interfaces, needed to receive incoming traffic.

Services, on the other hand, are purely network-oriented objects, meant mostly to relay network traffic into the pods.

You can think of pods (grouped in Deployments) as the backend and Services as load balancers. In the end, both need network capabilities.
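A quick way to see both layers side by side in your cluster is to list the pod IPs and the service ClusterIPs (the exact output columns depend on the kubectl version):

# pod IPs (the "backend" side)
kubectl get pods -o wide

# service ClusterIPs (the "load balancer" side)
kubectl get services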

In your scenario, I'm not sure how you are exposing your deployment via a load balancer, since its pods don't seem to have any open ports declared.

Since the services exposing your pods are targeting port 9000, you can add it to the pod template in your deployment:

spec:
  containers:
  - image: gcr.io/fitme-xxxxxxx
    name: fitme-sha256
    ports:
    - containerPort: 9000

Be sure that it matches the port where your container is actually receiving the incoming requests.
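After redeploying, one way to double-check the wiring is to compare the service's targetPort with the endpoints it resolves to, for example (service name taken from the question):

# shows the port/targetPort mapping and the endpoints behind the service
kubectl describe service fitme-jr29g

# lists the pod IP:port pairs currently backing the service
kubectl get endpoints fitme-jr29g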

-- yyyyahir
Source: StackOverflow