I have StatefulSet pods. When I go inside one of the pods and ping its own hostname, it works. But if I try to ping the hostnames of the other pods from the current container, those hostnames do not resolve. I also have a headless service in place. Could someone please tell me the bare minimum that needs to be done, at the cluster level or in the YAML of either the service or the StatefulSet, to make this communication happen? A working example, or a link to working charts I could try out, would be helpful; I can go through that.
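For reference, with a headless service each StatefulSet pod gets a DNS record of the form <pod-name>.<service-name>.<namespace>.svc.cluster.local. A minimal check from inside one of the pods, assuming the chart renders the service name as myapp-10 and the StatefulSet as myapp-10-myapp1 in the default namespace (all names here are hypothetical):

# hypothetical rendered names: service "myapp-10", pods "myapp-10-myapp1-0" / "-1"
ping myapp-10-myapp1-0.myapp-10
nslookup myapp-10-myapp1-1.myapp-10.default.svc.cluster.local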
Service:
apiVersion: v1
kind: Service
metadata:
  name: "{{ .Chart.Name }}-{{ .Chart.AppVersion | replace "." "" }}"
  labels:
    app: "{{ .Chart.Name }}-{{ .Chart.AppVersion | replace "." "" }}"
{{ include "metadata.labels.standard" . | indent 4 }}
spec:
  clusterIP: None
  selector:
    tier: backend
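Since this service selects pods only on tier: backend, it is worth confirming that the headless service has actually picked up the pod IPs as endpoints. A quick check, again using the hypothetical rendered service name myapp-10:

# endpoints should list one IP per matching pod
kubectl get endpoints myapp-10
kubectl get pods -l tier=backend -o wide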
StatefulSet:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: "{{ .Chart.Name }}-{{ .Chart.AppVersion | replace "." "" }}-myapp1"
  labels:
    app: "{{ .Chart.Name }}-{{ .Chart.AppVersion | replace "." "" }}-myapp1"
    tier: backend
spec:
  replicas: 2
  selector:
    matchLabels:
      app: "{{ .Chart.Name }}-{{ .Chart.AppVersion | replace "." "" }}-myapp1"
  serviceName: "{{ .Chart.Name }}-{{ .Chart.AppVersion | replace "." "" }}"
  template:
    metadata:
      labels:
        app: "{{ .Chart.Name }}-{{ .Chart.AppVersion | replace "." "" }}-myapp1"
        tier: backend
    spec:
      volumes:
        - name: configmap-r
          configMap:
            name: "{{ .Chart.Name }}-{{ .Chart.AppVersion | replace "." "" }}-configmap"
      imagePullSecrets:
        - name: {{ .Values.image.pullSecret }}
      containers:
        - name: "{{ .Chart.Name }}-{{ .Chart.AppVersion | replace "." "" }}-myapp1"
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: Always
          ports:
            - containerPort: 8080
          volumeMounts:
            - name: configmap-r
              mountPath: /home/xyz/
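Note that serviceName here must match the headless service's metadata.name exactly; only then does each replica get its own per-pod DNS record. A quick way to see what resolves from inside a replica, assuming the hypothetical rendered pod names from above and an image that ships nslookup:

# pod and service names are hypothetical
kubectl exec myapp-10-myapp1-0 -- nslookup myapp-10-myapp1-1.myapp-10
kubectl exec myapp-10-myapp1-0 -- cat /etc/resolv.conf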
Pods:
NAME READY STATUS RESTARTS AGE
pod/calico-kube-controllers-59fc8847c-vv9bt 1/1 Running 0 3h27m
pod/calico-node-4gktj 1/1 Running 0 3h27m
pod/coredns-5c98db65d4-tctgk 1/1 Running 13 63d
pod/coredns-5c98db65d4-v8gtv 1/1 Running 13 63d
pod/etcd-minikube 1/1 Running 2 63d
pod/kube-addon-manager-minikube 1/1 Running 2 63d
pod/kube-apiserver-minikube 1/1 Running 0 15d
pod/kube-controller-manager-minikube 1/1 Running 6 63d
pod/kube-proxy-qc9nx 1/1 Running 1 63d
pod/kube-scheduler-minikube 1/1 Running 6 63d
pod/storage-provisioner 1/1 Running 3 63d
pod/tiller-deploy-6b9c575bfc-z7dgs 1/1 Running 1 62d
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kube-dns ClusterIP xx.xx.xx.xx <none> 53/UDP,53/TCP,9153/TCP 63d
service/tiller-deploy ClusterIP xx.xx.xx.xx <none> 44134/TCP 62d
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
daemonset.apps/calico-node 1 1 1 1 1 beta.kubernetes.io/os=linux 3h27m
daemonset.apps/kube-proxy 1 1 1 1 1 beta.kubernetes.io/os=linux 63d
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/calico-kube-controllers 1/1 1 1 3h27m
deployment.apps/coredns 2/2 2 2 63d
deployment.apps/tiller-deploy 1/1 1 1 62d
NAME DESIRED CURRENT READY AGE
replicaset.apps/calico-kube-controllers-59fc8847c 1 1 1 3h27m
replicaset.apps/coredns-5c98db65d4 2 2 2 63d
replicaset.apps/tiller-deploy-6b9c575bfc 1 1 1 62d
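CoreDNS is running, so cluster DNS itself looks healthy. To rule out something specific to the application image, a throwaway pod can test resolution directly (the busybox tag and the record being looked up are assumptions):

# one-off DNS test pod; the name being resolved is hypothetical
kubectl run dns-test --rm -it --image=busybox:1.28 --restart=Never -- nslookup myapp-10-myapp1-0.myapp-10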
To expose a service within the cluster, use ClusterIP as the service type: instead of setting spec.clusterIP: None, use type: ClusterIP.
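A minimal sketch of the spec change this answer suggests; note that a non-headless Service needs at least one port, so the port mapping below is an assumption based on the containerPort 8080 in the StatefulSet above:

spec:
  type: ClusterIP
  selector:
    tier: backend
  ports:
    - port: 8080        # assumption: matches containerPort 8080 above
      targetPort: 8080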