I am testing Kubernetes redundancy features with a testbed made of one master and three minions.
Case: I am running a service with 3 replicas, scheduled on minions 1 and 2, with minion 3 stopped.
[root@centos-master ajn]# kubectl get nodes
NAME             STATUS     AGE
centos-minion3   NotReady   14d
centos-minion1   Ready      14d
centos-minion2   Ready      14d
[root@centos-master ajn]# kubectl describe pods $MYPODS | grep Node:
Node: centos-minion2/192.168.0.107
Node: centos-minion1/192.168.0.155
Node: centos-minion2/192.168.0.107
Test: after starting minion 3 and stopping minion 2 (on which 2 of the pods were running):
[root@centos-master ajn]# kubectl get nodes
NAME             STATUS     AGE
centos-minion3   Ready      15d
centos-minion1   Ready      14d
centos-minion2   NotReady   14d
Result: The Service kind doesn't recover from the minion failure, and Kubernetes continues showing the pods on the failed minion.
[root@centos-master ajn]# kubectl describe pods $MYPODS | grep Node:
Node: centos-minion2/192.168.0.107
Node: centos-minion1/192.168.0.155
Node: centos-minion2/192.168.0.107
Expected result (at least in my understanding): the pods should have been rebuilt on the currently available minions 1 and 3.
As far as I understand, the role of the Service kind is to make the deployment "globally" available, so we can refer to it independently of where the pods are running in the cluster.
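For context, this is roughly the kind of Service I mean (a minimal sketch; the name nginx-svc is hypothetical, and the selector matches the app: nginx label from the spec below):

apiVersion: v1
kind: Service
metadata:
  name: nginx-svc        # hypothetical name, not from my actual setup
spec:
  selector:
    app: nginx           # the Service routes to whichever pods currently carry this label
  ports:
  - port: 80
    targetPort: 80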
Am I doing something wrong?
I'm using the following yaml spec:
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-www
spec:
  replicas: 3
  selector:
    app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
It looks like you're always querying the same pods referenced in $MYPODS. Pod names are created dynamically by the ReplicationController, so instead of kubectl describe pods $MYPODS, try this:
kubectl get pods -l app=nginx -o wide
This will always give you the currently scheduled pods for your app.
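If you want to watch the rescheduling happen while you stop a minion, the same label selector should work with the watch flag (a sketch using the app=nginx label from your spec):

kubectl get pods -l app=nginx -o wide --watch

Keep in mind that the node controller waits a while before evicting pods from a NotReady node (the controller manager's --pod-eviction-timeout, 5 minutes by default), so the replacement pods won't appear immediately.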