Cloud Build - "rollout restart" not recognized (unknown command)

11/7/2019

I have a small cloudbuild.yaml file where I build a Docker image, push it to Google Container Registry (GCR), and then apply the changes to my Kubernetes cluster. It looks like this:

steps:

  - name: 'gcr.io/cloud-builders/docker'
    entrypoint: 'bash'
    args: [
    '-c',
    'docker pull gcr.io/$PROJECT_ID/frontend:latest || exit 0'
    ]

  - name: "gcr.io/cloud-builders/docker"
    args:
      [
        "build",
        "-f",
        "./services/frontend/prod.Dockerfile",
        "-t",
        "gcr.io/$PROJECT_ID/frontend:$REVISION_ID",
        "-t",
        "gcr.io/$PROJECT_ID/frontend:latest",
        ".",
      ]

  - name: "gcr.io/cloud-builders/docker"
    args: ["push", "gcr.io/$PROJECT_ID/frontend"]

  - name: "gcr.io/cloud-builders/kubectl"
    args: ["apply", "-f", "kubernetes/gcp/frontend.yaml"]
    env:
      - "CLOUDSDK_COMPUTE_ZONE=europe-west3-a"
      - "CLOUDSDK_CONTAINER_CLUSTER=cents-ideas"

  - name: "gcr.io/cloud-builders/kubectl"
    args: ["rollout", "restart", "deployment/frontend-deployment"]
    env:
      - "CLOUDSDK_COMPUTE_ZONE=europe-west3-a"
      - "CLOUDSDK_CONTAINER_CLUSTER=cents-ideas"

The build runs smoothly until the last step, args: ["rollout", "restart", "deployment/frontend-deployment"], which fails with the following log output:

Already have image (with digest): gcr.io/cloud-builders/kubectl
Running: gcloud container clusters get-credentials --project="cents-ideas" --zone="europe-west3-a" "cents-ideas"
Fetching cluster endpoint and auth data.
kubeconfig entry generated for cents-ideas.
Running: kubectl rollout restart deployment/frontend-deployment
error: unknown command "restart deployment/frontend-deployment"
See 'kubectl rollout -h' for help and examples.

Apparently, restart is an unknown command here. But the same command works when I run kubectl rollout restart deployment/frontend-deployment manually.

How can I fix this problem?

-- Florian Ludewig
google-cloud-build
google-kubernetes-engine
kubernetes

1 Answer

11/10/2019

Looking at the Kubernetes release notes, the kubectl rollout restart command was introduced in v1.15. In your case, it seems Cloud Build is using an older version where this command wasn't implemented yet.

After doing some tests, it appears Cloud Build selects a kubectl client version that matches the cluster's server version. For example, when running the following build:

steps:
  - name: "gcr.io/cloud-builders/kubectl"
    args: ["version"]
    env:
      - "CLOUDSDK_COMPUTE_ZONE=<cluster_zone>"
      - "CLOUDSDK_CONTAINER_CLUSTER=<cluster_name>"

if the cluster's master version is v1.14, Cloud Build uses a v1.14 kubectl client and the build fails with the same unknown command "restart" error. When the master version is v1.15, Cloud Build uses a v1.15 client and the command runs successfully.
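As a quick way to check this yourself, here is a small shell sketch that decides whether a given GKE master version string supports rollout restart (the version-fetching command is commented out since it assumes gcloud is installed and authenticated; the cluster and zone names are the ones from the question):

```shell
#!/usr/bin/env bash
# Hypothetical helper: returns success if a GKE master version string
# (e.g. "1.14.8-gke.12") is >= 1.15, the version that introduced
# `kubectl rollout restart`.
supports_rollout_restart() {
  local major rest minor
  major="${1%%.*}"          # text before the first dot, e.g. "1"
  rest="${1#*.}"            # text after the first dot, e.g. "14.8-gke.12"
  minor="${rest%%.*}"       # text before the next dot, e.g. "14"
  [ "$major" -gt 1 ] || { [ "$major" -eq 1 ] && [ "$minor" -ge 15 ]; }
}

# Against a real cluster you could fetch the master version first:
#   master=$(gcloud container clusters describe cents-ideas \
#     --zone europe-west3-a --format='value(currentMasterVersion)')
#   supports_rollout_restart "$master" && echo "rollout restart available"

supports_rollout_restart "1.15.4-gke.22" && echo yes
supports_rollout_restart "1.14.8-gke.12" || echo no
```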

Back to your case: I suspect your "cents-ideas" cluster's master version is below 1.15, which would explain the error you're getting. As for why it works when you run the command manually (locally, I understand), I suspect your local kubectl may be authenticated against another cluster whose master version is >= 1.15.
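If upgrading the master isn't an option, one workaround that works with older kubectl clients is to patch the deployment's pod template with a changing annotation, which triggers a rolling update (this is essentially what rollout restart does). A sketch of such a step, using Cloud Build's built-in $BUILD_ID substitution as the changing value and a hypothetical "build-id" annotation key:

```yaml
  - name: "gcr.io/cloud-builders/kubectl"
    args:
      - "patch"
      - "deployment"
      - "frontend-deployment"
      - "-p"
      - '{"spec":{"template":{"metadata":{"annotations":{"build-id":"$BUILD_ID"}}}}}'
    env:
      - "CLOUDSDK_COMPUTE_ZONE=europe-west3-a"
      - "CLOUDSDK_CONTAINER_CLUSTER=cents-ideas"
```

Since $BUILD_ID differs on every build, the pod template changes each time and the deployment restarts. Alternatively, upgrading the cluster master (and node pools) to 1.15 or later makes rollout restart itself available in Cloud Build.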

-- LundinCast
Source: StackOverflow