Trouble mounting an EBS to a Pod in a Kubernetes cluster

11/25/2018

The cluster I use is bootstrapped with kubeadm and deployed on AWS.

sudo kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.2", GitCommit:"17c77c7898218073f14c8d573582e8d2313dc740", GitTreeState:"clean", BuildDate:"2018-10-24T06:51:33Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}

I am trying to configure a pod that mounts an EBS volume directly (I am leaving PersistentVolume and PersistentVolumeClaim objects aside for the moment); this is the manifest I used:

apiVersion: v1
kind: Pod
metadata:
  name: mongodb-aws
spec:
  volumes:
  - name: mongodb-data
    awsElasticBlockStore:
      volumeID: vol-xxxxxx
      fsType: ext4
  containers:
  - image: mongo
    name: mongodb
    volumeMounts:
    - name: mongodb-data
      mountPath: /data/db
    ports:
    - containerPort: 27017
      protocol: TCP
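
For completeness, this is roughly how I create and inspect the pod (the manifest filename is just what I call the file locally):

kubectl apply -f mongodb-aws-pod.yaml   # the manifest above
kubectl get pod mongodb-aws
kubectl describe pod mongodb-aws        # volume attach/mount events show up here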

At first I got this error from the logs of the pod:

mount: special device /var/lib/kubelet/plugins/kubernetes.io/aws-ebs/mounts/vol-xxxx does not exist

After some research, I discovered that I have to configure a cloud provider, and that is what I have been trying to do for the past 10 hours; I tested many suggestions, but none worked. I tagged all the resources used by the cluster, as mentioned in https://github.com/kubernetes/kubernetes/issues/53538#issuecomment-345942305 (a sketch of the tagging is shown after the cloud.conf below). I also tried the official way to run the in-tree cloud provider with kubeadm, described at https://kubernetes.io/docs/concepts/cluster-administration/cloud-providers/:

kubeadm_config.yml file:

apiVersion: kubeadm.k8s.io/v1alpha3
kind: InitConfiguration
nodeRegistration:
  kubeletExtraArgs:
    cloud-provider: "aws"
    cloud-config: "/etc/kubernetes/cloud.conf"
---
kind: ClusterConfiguration
apiVersion: kubeadm.k8s.io/v1alpha3
kubernetesVersion: v1.12.0
apiServerExtraArgs:
  cloud-provider: "aws"
  cloud-config: "/etc/kubernetes/cloud.conf"
apiServerExtraVolumes:
- name: cloud
  hostPath: "/etc/kubernetes/cloud.conf"
  mountPath: "/etc/kubernetes/cloud.conf"
controllerManagerExtraArgs:
  cloud-provider: "aws"
  cloud-config: "/etc/kubernetes/cloud.conf"
controllerManagerExtraVolumes:
- name: cloud
  hostPath: "/etc/kubernetes/cloud.conf"
  mountPath: "/etc/kubernetes/cloud.conf"

In /etc/kubernetes/cloud.conf I put:

[Global] 
KubernetesClusterTag=kubernetes
KubernetesClusterID=kubernetes
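
The KubernetesClusterTag/KubernetesClusterID values are meant to match the tags on the AWS resources. This is roughly how I tagged the instances, volumes, security groups and subnets (the resource IDs are placeholders, and the exact tag keys are my reading of the GitHub issue linked above):

aws ec2 create-tags \
  --resources i-xxxxxx vol-xxxxxx sg-xxxxxx subnet-xxxxxx \
  --tags Key=KubernetesCluster,Value=kubernetes Key=kubernetes.io/cluster/kubernetes,Value=owned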

After running kubeadm init --config kubeadm_config.yml, I got these errors:

[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.

Unfortunately, an error has occurred:
    timed out waiting for the condition

This error is likely caused by:
    - The kubelet is not running
    - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
    - 'systemctl status kubelet'
    - 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
Here is one example how you may list all Kubernetes containers running in docker:
    - 'docker ps -a | grep kube | grep -v pause'
    Once you have found the failing container, you can inspect its logs with:
    - 'docker logs CONTAINERID'
couldn't initialize a Kubernetes cluster


The control plane was not created.

When I removed:

apiVersion: kubeadm.k8s.io/v1alpha3
kind: InitConfiguration
nodeRegistration:
  kubeletExtraArgs:
    cloud-provider: "aws"
    cloud-config: "/etc/kubernetes/cloud.conf"

from kubeadm_config.yml and ran kubeadm init --config kubeadm_config.yml again, the Kubernetes master initialized successfully, but when I executed kubectl get pods --all-namespaces, I got:

NAMESPACE     NAME                                       READY   STATUS             RESTARTS   AGE
kube-system   etcd-ip-172-31-31-160                      1/1     Running            0          11m
kube-system   kube-apiserver-ip-172-31-31-160            1/1     Running            0          11m
kube-system   kube-controller-manager-ip-172-31-31-160   0/1     CrashLoopBackOff   6          11m
kube-system   kube-scheduler-ip-172-31-31-160            1/1     Running            0          10m

The controller manager doesn't run. However, the --cloud-provider=aws command-line flag is present for the API server (in /etc/kubernetes/manifests/kube-apiserver.yaml) and also for the controller manager (in /etc/kubernetes/manifests/kube-controller-manager.yaml).
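
For reference, the relevant part of the generated /etc/kubernetes/manifests/kube-controller-manager.yaml looks roughly like this (abridged):

apiVersion: v1
kind: Pod
metadata:
  name: kube-controller-manager
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-controller-manager
    - --cloud-provider=aws
    - --cloud-config=/etc/kubernetes/cloud.conf
    # ...other flags generated by kubeadm omitted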

When I ran sudo kubectl logs kube-controller-manager-ip-172-31-13-85 -n kube-system, I got:

Flag --address has been deprecated, see --bind-address instead.
I1126 11:27:35.006433       1 serving.go:293] Generated self-signed cert (/var/run/kubernetes/kube-controller-manager.crt, /var/run/kubernetes/kube-controller-manager.key)
I1126 11:27:35.811493       1 controllermanager.go:143] Version: v1.12.0
I1126 11:27:35.812091       1 secure_serving.go:116] Serving securely on [::]:10257
I1126 11:27:35.812605       1 deprecated_insecure_serving.go:50] Serving insecurely on 127.0.0.1:10252
I1126 11:27:35.812760       1 leaderelection.go:187] attempting to acquire leader lease  kube-system/kube-controller-manager...
I1126 11:27:53.260484       1 leaderelection.go:196] successfully acquired lease kube-system/kube-controller-manager
I1126 11:27:53.261474       1 event.go:221] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"kube-controller-manager", UID:"b0da1291-f16d-11e8-baeb-02a38a37cfd6", APIVersion:"v1", ResourceVersion:"449", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ip-172-31-13-85_4603714e-f16e-11e8-8d9d-02a38a37cfd6 became leader
I1126 11:27:53.290493       1 aws.go:1042] Building AWS cloudprovider
I1126 11:27:53.290642       1 aws.go:1004] Zone not specified in configuration file; querying AWS metadata service
F1126 11:27:53.296760       1 controllermanager.go:192] error building controller context: cloud provider could not be initialized: could not init cloud provider "aws": error finding instance i-0b063e2a3c9797398: "error listing AWS instances: \"NoCredentialProviders: no valid providers in chain. Deprecated.\\n\\tFor verbose messaging see aws.Config.CredentialsChainVerboseErrors\""
I did not try downgrading kubeadm (to be able to use a manifest with only kind: MasterConfiguration).
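
If it helps, my reading of that last fatal error is that the controller manager cannot find any AWS credentials on the node (no IAM instance profile attached and no credentials in its environment). Something along these lines, run on the master node, should show whether an instance profile is attached (169.254.169.254 is the standard EC2 metadata endpoint; this is my assumption, not something the error states explicitly):

curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/
# prints the attached role name(s); a 404 or empty response means no instance profile is attached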

If you need more information, please feel free to ask.

-- machine424
amazon-web-services
kubernetes

0 Answers