I've defined a dummy (headless) Service as a way of registering my pods in DNS, since a cluster IP won't work for my application right now.
```yaml
apiVersion: v1
kind: Service
metadata:
  name: company
spec:
  selector:
    app: company_application
  clusterIP: None
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: company-master-deployment
  labels:
    app: company_application
    role: master
spec:
  selector:
    matchLabels:
      app: company_application
      role: master
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: company_application
        role: master
    spec:
      hostname: master
      subdomain: company
```
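The pod template also defines the containers and the readiness check this question is about; a minimal sketch of that part, with a placeholder image, port, and probe path (not the original values), would look like:

```yaml
    # Continues template.spec of the Deployment above; all values are placeholders.
    spec:
      hostname: master
      subdomain: company
      containers:
        - name: master
          image: company/master:latest   # placeholder image
          readinessProbe:
            httpGet:
              path: /healthz             # placeholder health endpoint
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 10
```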
I'm using the DNS entry `master.company.default.svc.cluster.local` to connect to that pod from another pod.
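As a quick sanity check, you can resolve that name from inside another pod; `other-pod` here is a placeholder for any pod in the same namespace:

```sh
# Requires nslookup (or dig) to be present in the target pod's image.
kubectl exec -it other-pod -- nslookup master.company.default.svc.cluster.local
```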
I've noticed a really annoying behavior in Kubernetes under these conditions: while a pod is failing its readiness check, its DNS entry stops resolving.

Is this the way Kubernetes is supposed to work? Is there any way, other than removing the readiness check, to make sure that the DNS continues to resolve?
Yes, this is how Kubernetes is supposed to work: pods are not added to a Service's endpoints until they pass their readiness checks. You can confirm this by running the following command:

```sh
kubectl get endpoints company -n <your_namespace>
```

You won't see the pod among the endpoints while its `readinessProbe` is failing.
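If you need the DNS record to keep resolving even while the pod is unready, one option besides removing the probe is to set `publishNotReadyAddresses` on the headless Service; a sketch applied to the Service above:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: company
spec:
  selector:
    app: company_application
  clusterIP: None
  # Publish DNS records for pods even before they pass readiness checks.
  publishNotReadyAddresses: true
```

The trade-off is that the name can now resolve to a pod that isn't ready to serve traffic yet.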