Kubernetes postStart hook seems to break everything in the deployment

6/19/2019

We have the following deployment yaml:

---
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: {{DEP_ENVIRONMENT}}-{{SERVICE_NAME}}
  namespace: {{DEP_ENVIRONMENT}}
  labels:
    app: {{DEP_ENVIRONMENT}}-{{SERVICE_NAME}}
spec:
  replicas: {{NUM_REPLICAS}}
  selector:
    matchLabels:
      app: {{DEP_ENVIRONMENT}}-{{SERVICE_NAME}}
  template:
    metadata:
      labels:
        app: {{DEP_ENVIRONMENT}}-{{SERVICE_NAME}}
    spec:
      # [START volumes]
      volumes:
        - name: {{CLOUD_DB_INSTANCE_CREDENTIALS}}
          secret:
            secretName: {{CLOUD_DB_INSTANCE_CREDENTIALS}}
      # [END volumes]
      containers:
      # [START proxy_container]
      - name: cloudsql-proxy
        image: gcr.io/cloudsql-docker/gce-proxy:1.11
        command: ["/cloud_sql_proxy",
                  "-instances=<PROJECT_ID>:{{CLOUD_DB_CONN_INSTANCE}}=tcp:3306",
                  "-credential_file=/secrets/cloudsql/credentials.json"]
        # [START cloudsql_security_context]
        securityContext:
          runAsUser: 2  # non-root user
          allowPrivilegeEscalation: false
        # [END cloudsql_security_context]
        volumeMounts:
          - name: {{CLOUD_DB_INSTANCE_CREDENTIALS}}
            mountPath: /secrets/cloudsql
            readOnly: true
      # [END proxy_container]
      - name: {{DEP_ENVIRONMENT}}-{{SERVICE_NAME}}
        image: {{IMAGE_NAME}}
        ports:
        - containerPort: 80
        env:
        - name: CLOUD_DB_HOST
          value: 127.0.0.1
        - name: DEV_CLOUD_DB_USER
          valueFrom:
            secretKeyRef:
              name: {{CLOUD_DB_DB_CREDENTIALS}}
              key: username
        - name: DEV_CLOUD_DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: {{CLOUD_DB_DB_CREDENTIALS}}
              key: password
      # [END cloudsql_secrets]
        lifecycle:
          postStart:
            exec:
              command: ["/bin/sh", "-c", "supervisord"]

The last lifecycle block is new and is causing the database connection to be refused. This config works fine without the lifecycle block. I'm sure there is something stupid here that I am missing, but for the life of me I cannot figure out what it is.

Note: we are only trying to start Supervisor like this as a workaround for huge issues when attempting to start it normally.

-- lola_the_coding_girl
dockerfile
google-kubernetes-engine
kubernetes
supervisord

1 Answer

6/20/2019

Lifecycle hooks are intended to be short foreground commands. You cannot start a background daemon from them; that has to be the main command for the container.
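
For example, instead of the postStart hook, you would make supervisord the container's main process. A minimal sketch, assuming supervisord is installed in {{IMAGE_NAME}} at /usr/bin/supervisord with a config at /etc/supervisor/supervisord.conf (adjust both paths to your image), and run with -n so it stays in the foreground:

      - name: {{DEP_ENVIRONMENT}}-{{SERVICE_NAME}}
        image: {{IMAGE_NAME}}
        # supervisord runs in the foreground (-n) as the container's main process;
        # no lifecycle.postStart block is needed
        command: ["/usr/bin/supervisord", "-n", "-c", "/etc/supervisor/supervisord.conf"]
        ports:
        - containerPort: 80

Alternatively, set nodaemon=true in supervisord.conf and make supervisord the CMD/ENTRYPOINT in the image's Dockerfile, so the deployment yaml needs no command override at all.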

-- coderanger
Source: StackOverflow