Deployment scales a different deployment

5/28/2018

We are running Kubernetes version 1.10.2 on GKE (Google Kubernetes Engine). We currently have two deployments whose pods carry the same labels, which are used as the selector for a single service. When we run kubectl get deploy, we get the following:

NAME           DESIRED   CURRENT   UP-TO-DATE   AVAILABLE
DEPLOYMENT-A   3         3         3            3
DEPLOYMENT-B   5         5         5            5

However, if I look at the pods that are actually running, there are only 5, and all of them belong to DEPLOYMENT-B. If I run kubectl scale deploy DEPLOYMENT-A --replicas=10, it scales DEPLOYMENT-B to 10 instead, and there are still 0 pods from DEPLOYMENT-A, even though kubectl get deploy still reports 3 available for DEPLOYMENT-A.
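For reference, here is a rough sketch of how the selector and label state can be inspected (the deployment names are the placeholders from the table above):

kubectl describe deploy DEPLOYMENT-A | grep -i selector
kubectl describe deploy DEPLOYMENT-B | grep -i selector
kubectl get pods --show-labels

If both deployments report the same selector, every pod matches both of them, which would explain the counts above.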

Looking to understand next steps for troubleshooting, or whether anyone has experienced anything like this before. I've been searching around and have had no luck finding anything (could just be me being terrible at phrasing the issue). I have a theory that if I scale DEPLOYMENT-B to 0, then DEPLOYMENT-A should start scheduling 3 pods, but I'm not sure enough to try it and risk an outage on a guess.

Thanks!

-- Murcurio
google-kubernetes-engine
kubernetes

1 Answer

6/25/2018

It looks like your spec.selector has been misconfigured, so DEPLOYMENT-B is trying to control all of DEPLOYMENT-A's resources. Recheck both deployments' selectors and pod template labels and make sure no labels are duplicated between them. After that, redeploy both deployments: increasing the replica count still uses the existing configuration, so scaling alone will not fix the issue.
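As a minimal sketch of what disambiguated manifests could look like (the label keys, label values, and images below are placeholders, not the asker's actual configuration): each Deployment's spec.selector matches only its own pod template labels, while a separate shared label is what the single Service selects on.

# Sketch only: "app", "tier", and the images are placeholder values.
# The key point is that each Deployment's spec.selector matches ONLY
# its own pods, while the shared "tier: web" label is reserved for
# the Service.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-a
spec:
  replicas: 3
  selector:
    matchLabels:
      app: deployment-a          # unique to A
  template:
    metadata:
      labels:
        app: deployment-a        # must match the selector above
        tier: web                # shared label for the Service
    spec:
      containers:
      - name: app
        image: gcr.io/example/app-a:1.0   # placeholder
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-b
spec:
  replicas: 5
  selector:
    matchLabels:
      app: deployment-b          # unique to B
  template:
    metadata:
      labels:
        app: deployment-b        # must match the selector above
        tier: web                # shared label for the Service
    spec:
      containers:
      - name: app
        image: gcr.io/example/app-b:1.0   # placeholder
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    tier: web                    # matches pods from both Deployments
  ports:
  - port: 80
    targetPort: 8080

Note that in apps/v1 the spec.selector field is immutable, so changing a selector means deleting and recreating the Deployment rather than patching it in place, which is consistent with the advice above to redeploy both deployments.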

-- Patrick W
Source: StackOverflow