Here is my deployment template:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    name: XXX
    version: {{ xxx-version }}
    deploy_time: "{{ xxx-time }}"
  name: XXX
spec:
  replicas: 1
  revisionHistoryLimit: 0
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0%
      maxSurge: 100%
  selector:
    matchLabels:
      name: XXX
      version: {{ xxx-version }}
      deploy_time: "{{ xxx-time }}"
  template:
    metadata:
      labels:
        name: XXX
        version: {{ xxx-version }}
        deploy_time: "{{ xxx-time }}"
    spec:
      containers:
      - image: docker-registry:{{ xxx-version }}
        name: XXX
        ports:
        - name: XXX
          containerPort: 9000
The key section in the documentation that's relevant to this issue is:

Existing Replica Sets controlling Pods whose labels match .spec.selector but whose template does not match .spec.template are scaled down. Eventually, the new Replica Set will be scaled to .spec.replicas and all old Replica Sets will be scaled to 0.

http://kubernetes.io/docs/user-guide/deployments/
So the spec.selector should not vary across multiple deployments:

selector:
  matchLabels:
    name: XXX
    version: {{ xxx-version }}
    deploy_time: "{{ xxx-time }}"
should become:

selector:
  matchLabels:
    name: XXX

The rest of the labels can remain the same.
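Putting the fix into the original template, a sketch of the corrected manifest might look like the following. Only the selector changes: it keeps just the stable name label, while the pod template's labels still carry version and deploy_time so they update with each rollout (the XXX placeholders and {{ xxx-* }} substitution variables are as in the original template):

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    name: XXX
    version: {{ xxx-version }}
    deploy_time: "{{ xxx-time }}"
  name: XXX
spec:
  replicas: 1
  revisionHistoryLimit: 0
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0%
      maxSurge: 100%
  selector:
    matchLabels:
      name: XXX                          # stable across deployments
  template:
    metadata:
      labels:
        name: XXX
        version: {{ xxx-version }}       # these may change per rollout
        deploy_time: "{{ xxx-time }}"
    spec:
      containers:
      - image: docker-registry:{{ xxx-version }}
        name: XXX
        ports:
        - name: XXX
          containerPort: 9000

With this shape, a new rollout changes .spec.template (the labels and image) but not .spec.selector, so the Deployment can match and scale down the old Replica Set as the docs describe.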