Kubernetes: Failed to get GCE GCECloudProvider with error <nil>

4/30/2018

I have set up a custom Kubernetes cluster on GCE using kubeadm. I am trying to use StatefulSets with persistent storage.

I have the following configuration:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gce-slow
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
  zones: europe-west3-b
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: myname
  labels:
    app: myapp
spec:
  serviceName: myservice
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: mycontainer
          image: ubuntu:16.04
          volumeMounts:
          - name: myapp-data
            mountPath: /srv/data
      imagePullSecrets:
      - name: sitesearch-secret
  volumeClaimTemplates:
  - metadata:
      name: myapp-data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: gce-slow
      resources:
        requests:
          storage: 1Gi

And I get the following error:

Nopx@vm0:~$ kubectl describe pvc
 Name:          myapp-data-myname-0
 Namespace:     default
 StorageClass:  gce-slow
 Status:        Pending
 Volume:
 Labels:        app=myapp
 Annotations:   volume.beta.kubernetes.io/storage-provisioner=kubernetes.io/gce-pd
 Finalizers:    [kubernetes.io/pvc-protection]
 Capacity:
 Access Modes:
 Events:
   Type     Reason              Age   From                         Message
   ----     ------              ----  ----                         -------
   Warning  ProvisioningFailed  5s    persistentvolume-controller  Failed to provision volume 
 with StorageClass "gce-slow": Failed to get GCE GCECloudProvider with error <nil>

I am groping in the dark and do not know what is missing. It seems logical that provisioning fails, since the provisioner never authenticates to GCE. Any hints and pointers are very much appreciated.

EDIT

I tried the solution here, editing the kubeadm config with kubeadm config upload from-file (the upload command itself is sketched after the config dump below); however, the error persists. The kubeadm config currently looks like this:

api:
  advertiseAddress: 10.156.0.2
  bindPort: 6443
  controlPlaneEndpoint: ""
auditPolicy:
  logDir: /var/log/kubernetes/audit
  logMaxAge: 2
  path: ""
authorizationModes:
- Node
- RBAC
certificatesDir: /etc/kubernetes/pki
cloudProvider: gce
criSocket: /var/run/dockershim.sock
etcd:
  caFile: ""
  certFile: ""
  dataDir: /var/lib/etcd
  endpoints: null
  image: ""
  keyFile: ""
imageRepository: k8s.gcr.io
kubeProxy:
  config:
    bindAddress: 0.0.0.0
    clientConnection:
      acceptContentTypes: ""
      burst: 10
      contentType: application/vnd.kubernetes.protobuf
      kubeconfig: /var/lib/kube-proxy/kubeconfig.conf
      qps: 5
    clusterCIDR: 192.168.0.0/16
    configSyncPeriod: 15m0s
    conntrack:
      max: null
      maxPerCore: 32768
      min: 131072
      tcpCloseWaitTimeout: 1h0m0s
      tcpEstablishedTimeout: 24h0m0s
    enableProfiling: false
    healthzBindAddress: 0.0.0.0:10256
    hostnameOverride: ""
    iptables:
      masqueradeAll: false
      masqueradeBit: 14
      minSyncPeriod: 0s
      syncPeriod: 30s
    ipvs:
      minSyncPeriod: 0s
      scheduler: ""
      syncPeriod: 30s
    metricsBindAddress: 127.0.0.1:10249
    mode: ""
    nodePortAddresses: null
    oomScoreAdj: -999
    portRange: ""
    resourceContainer: /kube-proxy
    udpIdleTimeout: 250ms
kubeletConfiguration: {}
kubernetesVersion: v1.10.2
networking:
  dnsDomain: cluster.local
  podSubnet: 192.168.0.0/16
  serviceSubnet: 10.96.0.0/12
nodeName: mynode
privilegedPods: false
token: ""
tokenGroups:
- system:bootstrappers:kubeadm:default-node-token
tokenTTL: 24h0m0s
tokenUsages:
- signing
- authentication
unifiedControlPlaneImage: ""
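
For reference, the upload step mentioned above might look roughly like this; the file path is an assumption, use wherever the edited config is saved:

# Upload the edited kubeadm config into the cluster (path is illustrative).
sudo kubeadm config upload from-file --config /etc/kubernetes/kubeadm.yaml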

Edit

The issue was resolved in the comments thanks to Anton Kostenko. The last edit coupled with kubeadm upgrade solves the problem.
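
For completeness, a rough sketch of that upgrade step, assuming the config path from the sketch above and the cluster's existing version:

# Re-apply the control-plane configuration; pinning the target version to
# the version already running means only the config changes take effect.
sudo kubeadm upgrade apply v1.10.2 --config /etc/kubernetes/kubeadm.yaml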

-- Nopx
google-compute-engine
kubeadm
kubernetes

2 Answers

1/13/2020

How to create dynamically provisioned persistent volumes for Kubernetes nodes running on Google Cloud virtual machines:

GCP role:

  1. In the Google Cloud console, go to IAM & Admin.
  2. Add a new service account, e.g. gce-user.
  3. Add the role "Compute Instance Admin" (a gcloud sketch of these steps follows this list).
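
The same steps with the gcloud CLI might look roughly like this; the project ID and account name are placeholders:

# Create the service account (name is a placeholder).
gcloud iam service-accounts create gce-user --display-name "gce-user"

# Grant it the Compute Instance Admin role on the project.
gcloud projects add-iam-policy-binding my-project \
  --member "serviceAccount:gce-user@my-project.iam.gserviceaccount.com" \
  --role "roles/compute.instanceAdmin.v1"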

Attach the service account to the GCP VM (a gcloud equivalent follows this list):

  1. Stop the instance and click Edit.
  2. Under Service account, select the new account, e.g. gce-user.
  3. Start the virtual machine.
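
A sketch of the same change with gcloud; instance name, zone, and service account are placeholders:

# Stop the VM, swap its service account, and start it again.
gcloud compute instances stop my-node --zone europe-west3-b
gcloud compute instances set-service-account my-node --zone europe-west3-b \
  --service-account gce-user@my-project.iam.gserviceaccount.com \
  --scopes cloud-platform
gcloud compute instances start my-node --zone europe-west3-b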

Add the GCE parameter to the kubelet on all nodes:

  • add "--cloud-provider=gce" to the kubelet flags
  • sudo vi /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

and set the line to:

Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --cloud-provider=gce"

  • create a new file /etc/kubernetes/cloud-config on all nodes, with this content:

    [Global]
    project-id = "xxxxxxxxxxxx"

  • restart the kubelet: sudo systemctl daemon-reload && sudo systemctl restart kubelet
  • add gce to the controller-manager: in /etc/kubernetes/manifests/kube-controller-manager.yaml, add --cloud-provider=gce under the command: list (an excerpt follows below)

then restart the control plane (the kubelet restarts the static pod automatically when its manifest changes).
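
A minimal sketch of the relevant part of the manifest after the change; everything else stays as kubeadm generated it:

# /etc/kubernetes/manifests/kube-controller-manager.yaml (excerpt)
spec:
  containers:
  - name: kube-controller-manager
    command:
    - kube-controller-manager
    - --cloud-provider=gce
    # ...the existing flags remain unchanged...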

Run ps -ef | grep controller; you should then see "gce" in the controller-manager output.

Note: the above method is not recommended for production systems; use kubeadm config to update the controller-manager settings instead.
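
For the kubeadm route, a minimal sketch of the equivalent setting, assuming the v1alpha1 config API that kubeadm 1.10 uses:

# kubeadm MasterConfiguration excerpt (kubeadm 1.10 / v1alpha1).
# Setting cloudProvider: gce makes kubeadm add --cloud-provider=gce
# to the control-plane manifests, so none of them need hand-editing.
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
cloudProvider: gce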

-- devops_coder
Source: StackOverflow

5/16/2018

The answer took me a while but here it is:

Using the GCECloudProvider in Kubernetes outside of the Google Kubernetes Engine has the following prerequisites (the last point is kubeadm-specific):

  1. The VM needs to run with a service account that has the right to provision disks. Info on how to run a VM with a service account can be found here.

  2. The kubelet needs to run with the argument --cloud-provider=gce. For this, the KUBELET_KUBECONFIG_ARGS in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf has to be edited. The kubelet can then be restarted with sudo systemctl restart kubelet.

  3. The Kubernetes cloud-config file needs to be configured. The file goes at /etc/kubernetes/cloud-config, and the following content is enough to get the cloud provider to work:

    [Global]
    project-id = "<google-project-id>"
  4. Kubeadm needs to have GCE configured as its cloud provider. The config posted in the question works fine for this. However, the nodeName has to be changed so that it matches the name of the GCE instance (a sketch follows below).
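
A minimal sketch of the kubeadm-specific part, assuming the v1alpha1 config API of kubeadm 1.10; the node name is a placeholder and must match the GCE VM name:

# kubeadm MasterConfiguration excerpt (kubeadm 1.10 / v1alpha1).
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
cloudProvider: gce
nodeName: vm0    # placeholder: must equal the GCE instance name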

-- Nopx
Source: StackOverflow