I cannot understand the behavior of a Deployment: in my case it always uses the wrong ReplicaSet. First I ran
kubectl create -f [filename]
with this manifest:
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  labels:
    app: kube-state-metrics
  name: kube-state-metrics
  namespace: monitoring
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kube-state-metrics
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: kube-state-metrics
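    # (pod template spec with the container definition omitted here)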
However, the pod could not start because the master node has a taint, so I changed the deployment file and added a toleration:
spec:
  tolerations:
  - key: node-role.kubernetes.io/master
    operator: Equal
    effect: NoSchedule
kubectl replace -f [filename]
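To make the placement explicit: the toleration sits in the pod template's spec, so that part of the manifest now looks roughly like this (containers still omitted):

  template:
    metadata:
      labels:
        app: kube-state-metrics
    spec:
      tolerations:
      - key: node-role.kubernetes.io/master
        operator: Equal
        effect: NoSchedule
      # (containers omitted as above)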
This produced two revisions, but in the output of kubectl describe deployment, NewReplicaSet pointed to the old revision and OldReplicaSets to the modified version. Hmmm...
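For reference, the pod template that each ReplicaSet actually carries can be checked with something like:

kubectl get replicasets -n monitoring
kubectl get replicaset <replicaset-name> -n monitoring -o yaml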
I deleted the deployment and called "create" again. The situation did not improve.
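The relevant part of kubectl describe deployment kube-state-metrics -n monitoring now shows: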
OldReplicaSets:  <none>
NewReplicaSet:   kube-state-metrics-59b7dccd55 (1/1 replicas created)
Events:
  Type    Reason             Age  From                   Message
  ----    ------             ---- ----                   -------
  Normal  ScalingReplicaSet  14m  deployment-controller  Scaled up replica set kube-state-metrics-69c88bb67b to 1
  Normal  ScalingReplicaSet  14m  deployment-controller  Scaled down replica set kube-state-metrics-69c88bb67b to 0
  Normal  ScalingReplicaSet  13m  deployment-controller  Scaled up replica set kube-state-metrics-59b7dccd55 to 1
OldReplicaSets is <none>, yet the old ReplicaSet is the one actually in use, and again NewReplicaSet points to the wrong one. In addition, kubectl rollout history shows two revisions for the deployment:
REVISION  CHANGE-CAUSE
1         kubectl create --filename=manifests
2         kubectl create --filename=manifests
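The pod template recorded for each revision can be compared with, for example:

kubectl rollout history deployment kube-state-metrics -n monitoring --revision=1
kubectl rollout history deployment kube-state-metrics -n monitoring --revision=2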
How can I fix this?