Kubernetes pod exits automatically when scaling a ReplicaSet

5/15/2020

When I scale the ReplicaSet in the Kubernetes (v1.15.2) dashboard, the app behaves like this:

  1. the pod starts
  2. the pod exits automatically and the ReplicaSet's replica count drops to 0

Now I cannot get the logs to find what causes this problem. What should I do to see the detailed error message, and how can I fix this? In the end the dashboard shows no pods.


Here is my ReplicaSet config:

{
  "kind": "ReplicaSet",
  "apiVersion": "extensions/v1beta1",
  "metadata": {
    "name": "apollo-mysql-565ccb75bc",
    "namespace": "dabai-fat",
    "selfLink": "/apis/extensions/v1beta1/namespaces/dabai-fat/replicasets/apollo-mysql-565ccb75bc",
    "uid": "d883f690-4d8d-45d4-a556-965d38204900",
    "resourceVersion": "28630666",
    "generation": 32,
    "creationTimestamp": "2020-05-15T12:40:26Z",
    "labels": {
      "app": "apollo-mysql",
      "pod-template-hash": "565ccb75bc"
    },
    "annotations": {
      "deployment.kubernetes.io/desired-replicas": "2",
      "deployment.kubernetes.io/max-replicas": "2",
      "deployment.kubernetes.io/revision": "1"
    },
    "ownerReferences": [
      {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "name": "apollo-mysql",
        "uid": "703141cd-d8b9-4554-ac4d-d6fdabb7d0e9",
        "controller": true,
        "blockOwnerDeletion": true
      }
    ]
  },
  "spec": {
    "replicas": 0,
    "selector": {
      "matchLabels": {
        "app": "apollo-mysql",
        "pod-template-hash": "565ccb75bc"
      }
    },
    "template": {
      "metadata": {
        "creationTimestamp": null,
        "labels": {
          "app": "apollo-mysql",
          "pod-template-hash": "565ccb75bc"
        },
        "annotations": {
          "kubectl.kubernetes.io/restartedAt": "2020-04-18T18:30:58+08:00"
        }
      },
      "spec": {
        "volumes": [
          {
            "name": "mysql-persistent-storage",
            "persistentVolumeClaim": {
              "claimName": "apollo-mysql-pv-claim"
            }
          }
        ],
        "containers": [
          {
            "name": "mysql",
            "image": "mysql:5.7",
            "ports": [
              {
                "name": "mysql",
                "containerPort": 3306,
                "protocol": "TCP"
              }
            ],
            "env": [
              {
                "name": "MYSQL_ROOT_PASSWORD",
                "value": "gl4LucnXwLeLwAd29QqJn4"
              }
            ],
            "resources": {},
            "volumeMounts": [
              {
                "name": "mysql-persistent-storage",
                "mountPath": "/var/lib/mysql"
              }
            ],
            "terminationMessagePath": "/dev/termination-log",
            "terminationMessagePolicy": "File",
            "imagePullPolicy": "IfNotPresent"
          }
        ],
        "restartPolicy": "Always",
        "terminationGracePeriodSeconds": 30,
        "dnsPolicy": "ClusterFirst",
        "nodeSelector": {
          "app-type": "assistant-app"
        },
        "securityContext": {},
        "schedulerName": "default-scheduler"
      }
    }
  },
  "status": {
    "replicas": 0,
    "observedGeneration": 32
  }
}
-- Dolphin
kubernetes

1 Answer

5/15/2020

You probably don't want to interact with ReplicaSets directly. This ReplicaSet is managed by a Deployment (see the ownerReferences in your output), so you should work with that Deployment instead. From the CLI, for example, you could run:

kubectl scale deployment apollo-mysql -n dabai-fat --replicas=2
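
One way to confirm the change took effect, assuming the namespace and labels shown in your ReplicaSet metadata above:

kubectl get deployment apollo-mysql -n dabai-fat
kubectl get pods -n dabai-fat -l app=apollo-mysql

If the pods still exit immediately, kubectl describe pod and kubectl logs --previous on one of those pods should show the termination reason.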

In particular, if you update properties in a Deployment, the way a rolling update works is by creating a new ReplicaSet with the new configuration, and changing the replica counts of the old and new ReplicaSets. If you look at your ReplicaSets you will generally see several of them with similar names, attached to the same Deployment, but in the steady state only one of them will have a non-zero replica count.
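For example, you can list every ReplicaSet the Deployment has created (using the app label from your config); the pod-template-hash suffix in each name distinguishes the revisions:

kubectl get replicasets -n dabai-fat -l app=apollo-mysql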

Commands like kubectl rollout undo also depend on the history of (zero-replica) ReplicaSets existing. This is discussed further with some examples in the Deployments section of the Kubernetes documentation.
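
As a sketch, rolling back to the previous revision looks like this; the history command shows which revisions those retained ReplicaSets still make available:

kubectl rollout history deployment apollo-mysql -n dabai-fat
kubectl rollout undo deployment apollo-mysql -n dabai-fat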

-- David Maze
Source: StackOverflow