Kubespray fails with "Found multiple CRI sockets, please use --cri-socket to select one"

9/10/2019

Problem encountered

When deploying a cluster with Kubespray, CRI-O and Cilium, I get an error about having multiple CRI sockets to choose from.

Full error

fatal: [p3kubemaster1]: FAILED! => {"changed": true, "cmd": " mkdir -p /etc/kubernetes/external_kubeconfig &&  /usr/local/bin/kubeadm  init phase   kubeconfig admin --kubeconfig-dir /etc/kubernetes/external_kubeconfig  --cert-dir /etc/kubernetes/ssl --apiserver-advertise-address 10.10.3.15 --apiserver-bind-port 6443  >/dev/null && cat /etc/kubernetes/external_kubeconfig/admin.conf && rm -rf /etc/kubernetes/external_kubeconfig ", "delta": "0:00:00.028808", "end": "2019-09-02 13:01:11.472480", "msg": "non-zero return code", "rc": 1, "start": "2019-09-02 13:01:11.443672", "stderr": "Found multiple CRI sockets, please use --cri-socket to select one: /var/run/dockershim.sock, /var/run/crio/crio.sock", "stderr_lines": ["Found multiple CRI sockets, please use --cri-socket to select one: /var/run/dockershim.sock, /var/run/crio/crio.sock"], "stdout": "", "stdout_lines": []}

Interesting part

kubeadm  init phase kubeconfig admin --kubeconfig-dir /etc/kubernetes/external_kubeconfig [...] >/dev/null,"stderr": "Found multiple CRI sockets, please use --cri-socket to select one: /var/run/dockershim.sock, /var/run/crio/crio.sock"}
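
A quick way to confirm the root cause on the node itself, assuming (as the error message suggests) that both the dockershim and CRI-O sockets are present:
# Both socket paths come straight from the error above; kubeadm's auto-detection
# gives up when it finds more than one of them.
ls -l /var/run/dockershim.sock /var/run/crio/crio.sock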

What I've tried

  • 1) I've tried to set the --cri-socket flag inside /var/lib/kubelet/kubeadm-flags.env:
KUBELET_KUBEADM_ARGS="--container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --cri-socket=/var/run/crio/crio.sock"

=> Makes no difference

  • 2) I've checked /etc/kubernetes/kubeadm-config.yaml, but it already contains the following section:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 10.10.3.15
  bindPort: 6443
certificateKey: 9063a1ccc9c5e926e02f245c06b8d9f2ff3xxxxxxxxxxxx
nodeRegistration:
  name: p3kubemaster1
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
  criSocket: /var/run/crio/crio.sock

=> It already ends with the criSocket key, so nothing to do...

  • 3) Tried to edit the Ansible script to add --cri-socket to the existing command, but it fails with Unknown command --cri-socket

Existing:

{% if kubeadm_version is version('v1.14.0', '>=') %}
    init phase

Tried:

{% if kubeadm_version is version('v1.14.0', '>=') %}
    init phase --cri-socket /var/run/crio/crio.sock

Theories

It seems that the problem comes from the command kubeadm init phase, which is not compatible with the --cri-socket flag... (see point 3)

Even though the correct socket is set in the config file (see point 2), kubeadm init phase is not using it.
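
For reference, the failing call can be reproduced by hand on the master; this is just the command from the Ansible error above, unwrapped for readability:
# Same invocation as in the kubespray task (flags copied from the error log)
mkdir -p /etc/kubernetes/external_kubeconfig
/usr/local/bin/kubeadm init phase kubeconfig admin \
  --kubeconfig-dir /etc/kubernetes/external_kubeconfig \
  --cert-dir /etc/kubernetes/ssl \
  --apiserver-advertise-address 10.10.3.15 \
  --apiserver-bind-port 6443
# => "Found multiple CRI sockets, please use --cri-socket to select one: ..."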

Any ideas would be appreciated ;-)
thx

-- Doctor
cilium
cri-o
kubernetes
kubespray

2 Answers

9/13/2019

I have done some research and came upon this GitHub thread.

That thread then pointed me to another one here.

This seems to be a kubeadm issue which was already fixed, and the solution is available in v1.15. Could you please upgrade to that version (I am not sure which one you are using, based on both of your questions that I have worked on) and see if the problem still persists?
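
If it helps, a quick way to check which version the nodes actually run (plain kubeadm/kubectl commands, nothing kubespray-specific; in kubespray the target version is normally driven by the kube_version variable):
# Version of the kubeadm binary on the node
kubeadm version -o short
# Client and server versions seen by kubectl
kubectl version --short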

-- OhHiMark
Source: StackOverflow

9/18/2019

I finally got it !

The initial kubespray command was:
kubeadm init phase kubeconfig admin --kubeconfig-dir {{ kube_config_dir }}/external_kubeconfig

⚠️ It seems that with only the --kubeconfig-dir flag, kubeadm was not taking the CRI socket setting into account.

So I changed the line to:
kubeadm init phase kubeconfig admin --config /etc/kubernetes/kubeadm-config.yaml
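
A minimal way to check the new invocation by hand on the master, assuming the config file sits at the path kubespray generates (note: on a live master this may simply report that an existing admin.conf is already in place):
# With --config pointing at a file that carries criSocket, kubeadm no longer
# has to auto-detect the container runtime, so the multiple-sockets error goes away.
/usr/local/bin/kubeadm init phase kubeconfig admin \
  --config /etc/kubernetes/kubeadm-config.yaml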


For people having similar issues:

The InitConfig part that made it work on the master is the following:

apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 10.10.3.15
  bindPort: 6443
certificateKey: 9063a1ccc9c5e926e02f245c06b8d9f2ff3c1eb2dafe5fbe2595ab4ab2d3eb1a
nodeRegistration:
  name: p3kubemaster1
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
  criSocket: /var/run/crio/crio.sock

In kubespray you must update the file roles/kubernetes/client/tasks/main.yml around line 57.

You'll have to comment out the initial --kubeconfig-dir part and replace it with the path of the InitConfig file (via --config, as shown above).

For me it was generated by kubespray in /etc/kubernetes/kubeadm-config.yaml on the kube master. Check that this file exists on your side and that it contains the criSocket key in the nodeRegistration section.
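
A minimal check for that, run on the kube master (path as above; adjust if your kube_config_dir differs):
# Confirm the kubespray-generated config exists and pins the CRI-O socket
test -f /etc/kubernetes/kubeadm-config.yaml && \
  grep -n "criSocket" /etc/kubernetes/kubeadm-config.yaml
# expected: a line like  criSocket: /var/run/crio/crio.sock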

-- Doctor
Source: StackOverflow