kubeadm init - fails

4/8/2018

I'm having trouble with this Vagrantfile that I've defined: https://github.com/pablotoledo/kubernetes-poc/blob/master/Vagrantfile.

In this Vagrantfile I set up:

  • 1 Master
  • 2 Workers

And I've defined a few scripts to be run on the VMs:

  • SSH Keygen for Master -> script_generate_ssh_key
  • SSH Copy Key to copy id_rsa from master to workers -> script_copy_key
  • A script to install common software on each VM -> script_install_common_software (this script is based on https://kubernetes.io/docs/setup/independent/install-kubeadm/)
  • Another script to set up the Master node role -> script_setup_master <- This is the problematic section
  • The last script is used to join the workers with the master -> script_setup_worker

When I run "vagrant up", the command "sudo kubeadm init --pod-network-cidr=192.168.0.0/16 --apiserver-advertise-address=192.168.40.10" always hangs.

If I SSH into the master node, I can see that the kube-apiserver container is always being recreated after around 3 minutes.
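
For reference, the restart loop can be observed from the master with commands like these (assuming Docker is the container runtime, as installed by the common-software script):

sudo docker ps -a --filter name=k8s_kube-apiserver                                     # lists the exited/recreated apiserver containers
sudo docker logs $(sudo docker ps -aq --filter name=k8s_kube-apiserver | head -n 1)    # output of the most recent instance
sudo journalctl -u kubelet --no-pager | tail -n 50                                     # kubelet's view of the static pod restarts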

This is the output of a crashed kube-apiserver instance:

Flag --insecure-port has been deprecated, This flag will be removed in a future version.
Flag --admission-control has been deprecated, Use --enable-admission-plugins or --disable-admission-plugins instead. Will be removed in a future version.
I0408 12:24:48.977898       1 server.go:135] Version: v1.10.0
I0408 12:24:48.978217       1 server.go:679] external host was not specified, using 10.0.2.15
I0408 12:24:50.350706       1 plugins.go:149] Loaded 9 admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,DefaultTolerationSeconds,DefaultStorageClass,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota.
I0408 12:24:50.357766       1 master.go:228] Using reconciler: master-count
W0408 12:24:50.456319       1 genericapiserver.go:342] Skipping API batch/v2alpha1 because it has no resources.
W0408 12:24:50.472096       1 genericapiserver.go:342] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W0408 12:24:50.475201       1 genericapiserver.go:342] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W0408 12:24:50.489986       1 genericapiserver.go:342] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
[restful] 2018/04/08 12:24:50 log.go:33: [restful/swagger] listing is available at https://10.0.2.15:6443/swaggerapi
[restful] 2018/04/08 12:24:50 log.go:33: [restful/swagger] https://10.0.2.15:6443/swaggerui/ is mapped to folder /swagger-ui/
[restful] 2018/04/08 12:24:51 log.go:33: [restful/swagger] listing is available at https://10.0.2.15:6443/swaggerapi
[restful] 2018/04/08 12:24:51 log.go:33: [restful/swagger] https://10.0.2.15:6443/swaggerui/ is mapped to folder /swagger-ui/
I0408 12:24:55.219070       1 serve.go:96] Serving securely on [::]:6443
I0408 12:24:55.219144       1 available_controller.go:262] Starting AvailableConditionController
I0408 12:24:55.219153       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I0408 12:24:55.219698       1 apiservice_controller.go:90] Starting APIServiceRegistrationController
I0408 12:24:55.219712       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I0408 12:24:55.219755       1 crd_finalizer.go:242] Starting CRDFinalizer
I0408 12:24:55.220516       1 crdregistration_controller.go:110] Starting crd-autoregister controller
I0408 12:24:55.220529       1 controller_utils.go:1019] Waiting for caches to sync for crd-autoregister controller
I0408 12:24:55.220552       1 customresource_discovery_controller.go:174] Starting DiscoveryController
I0408 12:24:55.220571       1 naming_controller.go:276] Starting NamingConditionController
I0408 12:24:55.227100       1 controller.go:84] Starting OpenAPI AggregationController
I0408 12:25:05.259553       1 trace.go:76] Trace[439388531]: "Create /api/v1/namespaces/kube-system/events" (started: 2018-04-08 12:24:55.25803138 +0000 UTC m=+6.462614551) (total time: 10.001458879s):
Trace[439388531]: [10.001458879s] [10.001376262s] END
I0408 12:25:05.475536       1 trace.go:76] Trace[1147394168]: "Create /api/v1/nodes" (started: 2018-04-08 12:24:55.473779876 +0000 UTC m=+6.678363122) (total time: 10.001690768s):
Trace[1147394168]: [10.001690768s] [10.001489339s] END
I0408 12:25:15.264150       1 trace.go:76] Trace[2095398311]: "Create /api/v1/namespaces/kube-system/events" (started: 2018-04-08 12:25:05.262783532 +0000 UTC m=+16.467366812) (total time: 10.001282694s):
Trace[2095398311]: [10.001282694s] [10.001123521s] END
I0408 12:25:22.617868       1 trace.go:76] Trace[351185622]: "Create /api/v1/nodes" (started: 2018-04-08 12:25:12.612633316 +0000 UTC m=+23.817216837) (total time: 10.005165894s):
Trace[351185622]: [10.005165894s] [10.004689717s] END
I0408 12:25:25.268040       1 trace.go:76] Trace[460596942]: "Create /api/v1/namespaces/kube-system/events" (started: 2018-04-08 12:25:15.267221356 +0000 UTC m=+26.471804605) (total time: 10.000777946s):
Trace[460596942]: [10.000777946s] [10.000596999s] END
I0408 12:25:30.744179       1 trace.go:76] Trace[1400508077]: "Create /apis/certificates.k8s.io/v1beta1/certificatesigningrequests" (started: 2018-04-08 12:25:20.742377206 +0000 UTC m=+31.946960452) (total time: 10.001739846s):
Trace[1400508077]: [10.001739846s] [10.00156572s] END
I0408 12:25:35.271775       1 trace.go:76] Trace[850178247]: "Create /api/v1/namespaces/kube-system/events" (started: 2018-04-08 12:25:25.270857617 +0000 UTC m=+36.475440866) (total time: 10.000858266s):
Trace[850178247]: [10.000858266s] [10.00070839s] END
I0408 12:25:39.786386       1 trace.go:76] Trace[2021645803]: "Create /api/v1/nodes" (started: 2018-04-08 12:25:29.770900237 +0000 UTC m=+40.975483430) (total time: 10.015433752s):
Trace[2021645803]: [10.015433752s] [10.015299731s] END
I0408 12:25:45.285287       1 trace.go:76] Trace[2302986]: "Create /api/v1/namespaces/kube-system/events" (started: 2018-04-08 12:25:35.276453578 +0000 UTC m=+46.481036913) (total time: 10.008728056s):
Trace[2302986]: [10.008728056s] [10.008596155s] END
E0408 12:25:55.242069       1 reflector.go:205] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:86: Failed to list *core.PersistentVolume: the server was unable to return a response in the time allotted, but may still be processing the request (get persistentvolumes)
E0408 12:25:55.279175       1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1.Endpoints: the server was unable to return a response in the time allotted, but may still be processing the request (get endpoints)
E0408 12:25:55.279561       1 reflector.go:205] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:86: Failed to list *core.Secret: the server was unable to return a response in the time allotted, but may still be processing the request (get secrets)
E0408 12:25:55.280109       1 reflector.go:205] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:86: Failed to list *core.ResourceQuota: the server was unable to return a response in the time allotted, but may still be processing the request (get resourcequotas)
E0408 12:25:55.280477       1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/apiextensions-apiserver/pkg/client/informers/internalversion/factory.go:74: Failed to list *apiextensions.CustomResourceDefinition: the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io)
E0408 12:25:55.280611       1 reflector.go:205] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:86: Failed to list *rbac.Role: the server was unable to return a response in the time allotted, but may still be processing the request (get roles.rbac.authorization.k8s.io)
E0408 12:25:55.281036       1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1beta1.VolumeAttachment: the server was unable to return a response in the time allotted, but may still be processing the request (get volumeattachments.storage.k8s.io)
E0408 12:25:55.282907       1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1.Namespace: the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces)
E0408 12:25:55.283131       1 reflector.go:205] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:86: Failed to list *core.ServiceAccount: the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts)
E0408 12:25:55.283626       1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1beta1.MutatingWebhookConfiguration: the server was unable to return a response in the time allotted, but may still be processing the request (get mutatingwebhookconfigurations.admissionregistration.k8s.io)
E0408 12:25:55.284185       1 reflector.go:205] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:86: Failed to list *core.LimitRange: the server was unable to return a response in the time allotted, but may still be processing the request (get limitranges)
E0408 12:25:55.285586       1 reflector.go:205] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:86: Failed to list *rbac.ClusterRole: the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io)
E0408 12:25:55.286253       1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1.Service: the server was unable to return a response in the time allotted, but may still be processing the request (get services)
E0408 12:25:55.286750       1 reflector.go:205] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:86: Failed to list *storage.StorageClass: the server was unable to return a response in the time allotted, but may still be processing the request (get storageclasses.storage.k8s.io)
E0408 12:25:55.287667       1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/kube-aggregator/pkg/client/informers/internalversion/factory.go:74: Failed to list *apiregistration.APIService: the server was unable to return a response in the time allotted, but may still be processing the request (get apiservices.apiregistration.k8s.io)
E0408 12:25:55.292724       1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1beta1.ValidatingWebhookConfiguration: the server was unable to return a response in the time allotted, but may still be processing the request (get validatingwebhookconfigurations.admissionregistration.k8s.io)
E0408 12:25:55.293137       1 reflector.go:205] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:86: Failed to list *rbac.ClusterRoleBinding: the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io)
E0408 12:25:55.293191       1 reflector.go:205] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:86: Failed to list *rbac.RoleBinding: the server was unable to return a response in the time allotted, but may still be processing the request (get rolebindings.rbac.authorization.k8s.io)
I0408 12:25:55.294035       1 trace.go:76] Trace[448038888]: "Create /api/v1/namespaces/kube-system/events" (started: 2018-04-08 12:25:45.290257352 +0000 UTC m=+56.494840693) (total time: 10.00357948s):
Trace[448038888]: [10.00357948s] [10.003312246s] END
E0408 12:25:55.294860       1 reflector.go:205] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:86: Failed to list *core.Pod: the server was unable to return a response in the time allotted, but may still be processing the request (get pods)
E0408 12:25:56.224200       1 storage_rbac.go:157] unable to initialize clusterroles: the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io)
I0408 12:25:56.801282       1 trace.go:76] Trace[703945258]: "Create /api/v1/nodes" (started: 2018-04-08 12:25:46.799631549 +0000 UTC m=+58.004214890) (total time: 10.001618084s):
Trace[703945258]: [10.001618084s] [10.001087054s] END
I0408 12:26:02.808827       1 trace.go:76] Trace[1631269070]: "Create /apis/certificates.k8s.io/v1beta1/certificatesigningrequests" (started: 2018-04-08 12:25:52.784610063 +0000 UTC m=+63.989193403) (total time: 10.024138244s):
Trace[1631269070]: [10.024138244s] [10.023949067s] END
I0408 12:26:05.300199       1 trace.go:76] Trace[494561622]: "Create /api/v1/namespaces/kube-system/events" (started: 2018-04-08 12:25:55.29934912 +0000 UTC m=+66.503932586) (total time: 10.00079884s):
Trace[494561622]: [10.00079884s] [10.000554488s] END
I0408 12:26:06.234261       1 trace.go:76] Trace[108596673]: "Create /api/v1/namespaces" (started: 2018-04-08 12:25:56.225584357 +0000 UTC m=+67.430167698) (total time: 10.008614333s):
Trace[108596673]: [10.008614333s] [10.00842738s] END
E0408 12:26:06.236146       1 client_ca_hook.go:78] namespaces "kube-system" is forbidden: not yet ready to handle request
E0408 12:26:07.582170       1 cache.go:35] Unable to sync caches for APIServiceRegistrationController controller
I0408 12:26:07.582234       1 apiservice_controller.go:94] Shutting down APIServiceRegistrationController
E0408 12:26:07.582293       1 cache.go:35] Unable to sync caches for AvailableConditionController controller
E0408 12:26:07.582358       1 controller_utils.go:1022] Unable to sync caches for crd-autoregister controller
E0408 12:26:07.582384       1 customresource_discovery_controller.go:177] timed out waiting for caches to sync
I0408 12:26:07.582408       1 naming_controller.go:280] Shutting down NamingConditionController
I0408 12:26:07.582438       1 crd_finalizer.go:246] Shutting down CRDFinalizer
I0408 12:26:07.582177       1 controller.go:90] Shutting down OpenAPI AggregationController
I0408 12:26:07.582559       1 serve.go:136] Stopped listening on [::]:6443
I0408 12:26:07.584582       1 available_controller.go:266] Shutting down AvailableConditionController
I0408 12:26:07.585842       1 crdregistration_controller.go:115] Shutting down crd-autoregister controller
I0408 12:26:07.587352       1 customresource_discovery_controller.go:178] Shutting down DiscoveryController
I0408 12:26:13.822786       1 trace.go:76] Trace[1412481370]: "Create /api/v1/nodes" (started: 2018-04-08 12:26:03.817519799 +0000 UTC m=+75.022103139) (total time: 10.005184469s):
Trace[1412481370]: [10.005184469s] [10.004863636s] END
I0408 12:26:15.304918       1 trace.go:76] Trace[38092900]: "Create /api/v1/namespaces/kube-system/events" (started: 2018-04-08 12:26:05.303564401 +0000 UTC m=+76.508147685) (total time: 10.001274076s):
Trace[38092900]: [10.001274076s] [10.001122791s] END

Could anyone help me, please?

-- Pablo Toledo
kubeadm
kubernetes
vagrantfile

1 Answer

4/12/2018

Usually, when you use kubeadm to create a Kubernetes cluster, you follow a typical sequence:

  1. prepare the VMs (configure CPU, RAM, network, drives, Vagrant boxes, etc.)
  2. add GPG keys and repositories
  3. configure sysctl (bridge-nf-call-ip6tables, bridge-nf-call-iptables)
  4. install packages, depending on your system:
    • Ubuntu: ebtables ethtool docker.io apt-transport-https kubelet kubeadm kubectl
    • CentOS: go git wget docker kubelet kubectl kubeadm (plus crictl on the master)
  5. run kubeadm init
  6. configure kubectl (create ~/.kube/config)
  7. configure the Kubernetes network subsystem (Calico, Weave, etc.)
  8. join the workers to the cluster

At this point, you usually have a ready-to-use Kubernetes cluster.
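
Once all eight steps have completed, a quick sanity check on the master looks like this (node names depend on your Vagrantfile):

kubectl get nodes -o wide          # every node should eventually report STATUS "Ready"
kubectl get pods -n kube-system    # core components and the network add-on should be Running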

After checking your Vagrantfile, I would suggest that you:

  1. change the Kubernetes baseurl in the repo config step:

baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-\\$basearch
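
For context, the complete repo definition written from a shell provisioner could look roughly like this sketch, based on the install-kubeadm docs of that time (the quoted heredoc keeps $basearch literal; adapt the escaping to however your Vagrantfile passes the script through):

sudo tee /etc/yum.repos.d/kubernetes.repo <<'EOF'
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-$basearch
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF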

  2. move these lines into $script_install_common_software:

sudo bash -c "echo net.bridge.bridge-nf-call-ip6tables = 1 >> /etc/sysctl.conf"
sudo bash -c "echo net.bridge.bridge-nf-call-iptables = 1 >> /etc/sysctl.conf"
sudo sysctl --system
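
A quick way to verify those settings took effect after that script runs (assuming the br_netfilter module is loaded):

sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables   # both should print "= 1"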

  3. copy the crictl binary to /usr/bin after installing it (you only need this binary on the master):

sudo cp ~/go/bin/crictl /usr/bin
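
For completeness, one way that step could look if crictl is built with the Go toolchain (the import path below is an assumption about how your script obtains crictl; adjust it to whatever your setup actually uses):

go get github.com/kubernetes-incubator/cri-tools/cmd/crictl   # hypothetical build step
sudo cp ~/go/bin/crictl /usr/bin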

  4. put these lines before deploying Calico with kubectl:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
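
With the kubeconfig in place, the network add-on can then be applied as the regular vagrant user; the manifest URL below is a placeholder for whichever Calico manifest your script already uses:

kubectl apply -f "$CALICO_MANIFEST_URL"   # placeholder variable, not a real URL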

  5. uncomment the worker join commands
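
Those commands follow the format printed at the end of kubeadm init on the master; with your advertise address they look like this (token and hash are placeholders):

sudo kubeadm join 192.168.40.10:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>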

This should be enough to bring your cluster to "Ready" state.

-- VAS
Source: StackOverflow