Kubernetes unusual phenomenon: old ReplicaSet and old pods are still alive alongside the new ReplicaSet and new pods after a rolling upgrade

5/31/2017

When I use the Kubernetes REST API to do a rolling upgrade, a new ReplicaSet is created after the upgrade and the new pods are running with the desired number of replicas. But at the same time, the old ReplicaSet and the old pods are still running. The Deployment's replicas is 3; when I check the data in etcd, the new ReplicaSet's replicas is 3 and the old ReplicaSet's replicas is 2. I can't find any error information in the controller manager log.
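For reference, the same replica counts can also be read through the API instead of going to etcd directly. This is only a minimal sketch, assuming the same clientset (cli) and namespace as in the code below; the exact ListOptions type depends on the client library version:

// List every ReplicaSet in the namespace and print its desired and
// observed replica counts, which should show both the old and the new
// ReplicaSet created by the Deployment.
rsList, err := cli.Extensions().ReplicaSets(namespace).List(api.ListOptions{})
if err != nil {
    return
}
for _, rs := range rsList.Items {
    fmt.Printf("%s: spec.replicas=%d status.replicas=%d\n",
        rs.Name, rs.Spec.Replicas, rs.Status.Replicas)
}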

Another question: why doesn't kubectl use the REST API to do a rolling update? kubectl does the rolling upgrade in this way.

My code:

cli := rdc.k8sClients

// Fetch the current Deployment object.
dp, err := cli.Extensions().Deployments(namespace).Get(deployment)
if err != nil {
    return
}

// Image of the first container in the pod template before the upgrade.
curImage := dp.Spec.Template.Spec.Containers[0].Image

// Build a RollingUpdate strategy with the configured maxUnavailable/maxSurge.
ds := new(extensions.DeploymentStrategy)
ds.Type = extensions.RollingUpdateDeploymentStrategyType
ds.RollingUpdate = new(extensions.RollingUpdateDeployment)
ds.RollingUpdate.MaxUnavailable = intstr.FromInt(int(ROLLING_MAXUNAVAILABLE))
ds.RollingUpdate.MaxSurge = intstr.FromInt(int(ROLLING_MAXSURGE))

// Write the Deployment back to the API server.
_, err = cli.Extensions().Deployments(namespace).Update(dp)
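
In the excerpt above the strategy built in ds is never assigned to dp.Spec.Strategy, and the pod template is not changed before Update is called, so presumably that happens elsewhere in the code. For reference, a minimal sketch that applies both in one place might look like the following; it assumes the same clientset (cli), namespace, and constants as above, and newImage is a placeholder for the image being upgraded to:

dp, err := cli.Extensions().Deployments(namespace).Get(deployment)
if err != nil {
    return
}

// A rollout is only triggered by a change to the pod template,
// for example a new container image.
dp.Spec.Template.Spec.Containers[0].Image = newImage

// Attach the rolling-update strategy to the Deployment spec before updating.
dp.Spec.Strategy = extensions.DeploymentStrategy{
    Type: extensions.RollingUpdateDeploymentStrategyType,
    RollingUpdate: &extensions.RollingUpdateDeployment{
        MaxUnavailable: intstr.FromInt(int(ROLLING_MAXUNAVAILABLE)),
        MaxSurge:       intstr.FromInt(int(ROLLING_MAXSURGE)),
    },
}

_, err = cli.Extensions().Deployments(namespace).Update(dp)
if err != nil {
    return
}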
-- litanhua
kubernetes

0 Answers