In Kubernetes, do pods whose readiness checks report "Unhealthy" fail to resolve from other pods until they become ready?

4/8/2019

I've defined a dummy (headless) service as a means of registering my pods in DNS, since a cluster IP will not work for my application right now.

apiVersion: v1
kind: Service
metadata:
  name: company
spec:
  selector:
    app: company_application
  clusterIP: None

apiVersion: apps/v1
kind: Deployment
metadata:
  name: company-master-deployment
  labels:
    app: company_application
    role: master
spec:
  selector:
    matchLabels:
      app: company_application
      role: master
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: company_application
        role: master
    spec:
      hostname: master
      subdomain: company
      containers:
        - name: company-master        # placeholder; the container spec was not shown in the original post
          image: company/app:latest   # hypothetical image

I'm using the DNS entry master.company.default.svc.cluster.local (the hostname.subdomain.namespace.svc.cluster.local pattern for headless services) to connect to that pod from another pod.
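
To verify that such a record resolves, a quick one-off pod can run the lookup (a sketch; the busybox image and the pod name dns-test are arbitrary choices, not from the original post):

kubectl run dns-test --rm -it --image=busybox --restart=Never -- \
  nslookup master.company.default.svc.cluster.local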

I've noticed a really annoying behavior in Kubernetes under these conditions:

  • I have a pod that is "Unhealthy" because its readiness probe is failing (a sketch of such a probe follows this list)
  • I have another pod whose application wants to do a DNS lookup on the first pod
  • The DNS lookup fails until the "unhealthy" pod becomes ready.
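
For reference, the kind of readiness probe involved might look like the following (a sketch; the HTTP path and port are assumptions, since the original manifests don't show the probe):

      containers:
        - name: company-master
          image: company/app:latest   # hypothetical image
          readinessProbe:
            httpGet:
              path: /healthz          # assumed health endpoint
              port: 8080              # assumed application port
            initialDelaySeconds: 5
            periodSeconds: 10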

Is this the way Kubernetes is supposed to work? Is there any way, other than removing the readiness check, to make sure that the DNS continues to resolve?

-- tacos_tacos_tacos
kubernetes

1 Answer

4/8/2019

Yes. Pods are not added to a Service's endpoints until they pass their readiness checks. You can confirm this by running the following command:

kubectl get endpoints company -n <your_namespace>

You won't see any endpoints while the readinessProbe is failing.
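
As a side note that goes beyond the original answer: if you really do need DNS records published for pods that are not yet ready, the Service API has a spec.publishNotReadyAddresses field that tells DNS to publish addresses regardless of readiness. A sketch, applied to the service from the question:

apiVersion: v1
kind: Service
metadata:
  name: company
spec:
  selector:
    app: company_application
  clusterIP: None
  publishNotReadyAddresses: true   # publish DNS records even for not-ready pods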

-- Prateek Jain
Source: StackOverflow